The relationship between thought and language and, in particular, the issue of whether and how language influences thought is still a matter of fierce debate. Here we consider a discrimination task scenario to study language acquisition in which an agent receives linguistic input from an external teacher, in addition to sensory stimuli from the objects that exemplify the overlapping categories that make up the environment. Sensory and linguistic input signals are fused using the Neural Modelling Fields (NMF) categorization algorithm. We find that the agent with language is capable of differentiating object features that it could not distinguish without language. In this sense, the linguistic stimuli prompt the agent to redefine and refine the discrimination capacity of its sensory channels.
Since its birth, Information Science has studied methods for the automatic processing of information. This research focused on Information Retrieval, the area that applies computational methods to the processing and retrieval of information, in order to assess the extent to which Computer Science contributes to the advancement of Information Science. Information Retrieval is first situated within the interdisciplinary body of Information Science, and the basic elements of the information retrieval process are presented. Computational models of information retrieval are analyzed under a categorization into quantitative and dynamic models. Some natural language processing techniques used in information retrieval are also discussed. In the current context of the Web, techniques for representing and retrieving information are presented, from search engines to the Semantic Web. It is concluded that, despite the unquestionable importance of computational methods and techniques in the processing of information, they amount only to auxiliary tools, since they rely on a conception of information that is extremely narrow compared with the one used by Information Science.
To take further steps along the path toward true artificial intelligence, systems must be built that are capable of learning about the world around them through observation and explanation. These systems should be flexible and robust in the style of the human brain, and little precompiled knowledge should be given initially. As a step toward this lofty goal, this thesis presents the self-organizing event map (SOEM) architecture. The SOEM architecture seeks to provide a way in which computers can be taught, through simple observation of the world, about typical events, organized according to the events the system observes. In this manner, the event map produces clusters of similar events and provides an implicit representation of the regularity within the event space to which the system has been exposed. As part of this thesis, a test system that makes use of the self-organizing event map architecture has been developed in conjunction with the Genesis Project at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. This system receives input through a natural-language text interface and, through repeated training cycles, becomes capable of discerning between typical and exceptional events. Clusters of similar events develop within the map, and these clusters act as an implicit representation of event regularity that is flexible and robust. The self-organizing event map...
by Thomas F. Knight, Jr.; Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1979.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING; Bibliography: leaves 49-50.
by Robert Cregar Berwick.; Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Bibliography: leaves 116-120.
by Brian Cantwell Smith.; Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Bibliography: leaves 756-761.
by Richard Alan Ross.; Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Bibliography: leaves 64-66.
This thesis presents a new approach to root cause localization and fault diagnosis in the Internet based on a Common Architecture for Probabilistic Reasoning in the Internet (CAPRI) in which distributed, heterogeneous diagnostic agents efficiently conduct diagnostic tests and communicate observations, beliefs, and knowledge to probabilistically infer the cause of network failures. Unlike previous systems that can only diagnose a limited set of network component failures using a limited set of diagnostic tests, CAPRI provides a common, extensible architecture for distributed diagnosis that allows experts to improve the system by adding new diagnostic tests and new dependency knowledge. To support distributed diagnosis using new tests and knowledge, CAPRI must overcome several challenges including the extensible representation and communication of diagnostic information, the description of diagnostic agent capabilities, and efficient distributed inference. Furthermore, the architecture must scale to support diagnosis of a large number of failures using many diagnostic agents.; (cont.) To address these challenges, this thesis presents a probabilistic approach to diagnosis based on an extensible, distributed component ontology to support the definition of new classes of components and diagnostic tests; a service description language for describing new diagnostic capabilities in terms of their inputs and outputs; and a message processing procedure for dynamically incorporating new information from other agents...
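The core inference step described above can be illustrated with a minimal sketch: combining prior failure probabilities with the likelihoods of observed diagnostic test outcomes to rank candidate root causes. The component names, priors, and likelihood values below are illustrative inventions, not values or APIs from CAPRI itself.

```python
# Hypothetical sketch of probabilistic root-cause diagnosis in the spirit
# of CAPRI. All numbers and component names are invented for illustration.

# Prior probability that each candidate component is the failed one.
priors = {"dns_server": 0.2, "access_link": 0.5, "web_server": 0.3}

# P(observed test outcomes | failed component) for the observation
# "DNS lookup succeeded, HTTP fetch failed", assuming the two test
# outcomes are conditionally independent given the failed component.
likelihoods = {
    "dns_server":  0.05 * 0.90,  # DNS success is unlikely if DNS failed
    "access_link": 0.10 * 0.98,  # a dead link breaks the DNS test too
    "web_server":  0.95 * 0.97,  # DNS works, HTTP fails
}

def diagnose(priors, likelihoods):
    """Posterior over root causes via Bayes' rule (exhaustive, naive)."""
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

posterior = diagnose(priors, likelihoods)
print(max(posterior, key=posterior.get))  # web_server
```

CAPRI's actual agents exchange beliefs and dependency knowledge rather than sharing a single global table, but the posterior computation each agent performs has this Bayesian shape.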
If we are to understand human-level cognition, we must understand how the mind finds the patterns that underlie the incomplete, noisy, and ambiguous data from our senses and that allow us to generalize our experiences to new situations. A wide variety of commercial applications face similar issues: industries from health services to business intelligence to oil field exploration critically depend on their ability to find patterns in vast amounts of data and use those patterns to make accurate predictions. Probabilistic inference provides a unified, systematic framework for specifying and solving these problems. Recent work has demonstrated the great value of probabilistic models defined over complex, structured domains. However, our ability to imagine probabilistic models has far outstripped our ability to programmatically manipulate them and to effectively implement inference, limiting the complexity of the problems that we can solve in practice. This thesis presents BLAISE, a novel framework for composable probabilistic modeling and inference, designed to address these limitations. BLAISE has three components: * The BLAISE State-Density-Kernel (SDK) graphical modeling language that generalizes factor graphs by: (1) explicitly representing inference algorithms (and their locality) using a new type of graph node...
A computer program that can understand the meaning of written English must be tremendously complex. It would break the spirit of any programmer to try to code such a program by hand; the range of meaning we can express in natural language is far too broad, too nuanced, too filled with exception. So I present UNDERSTAND, a program you can teach by example. Learning by example is an engineering expedient: it is much easier for us to come up with specific examples of a concept than some sort of perfect Platonic model. UNDERSTAND uses a technique I call Lattice-Learning to generalize accurately from just a few examples: "Robins, bees and helicopters can fly, but cats, worms and boats cannot," is enough for UNDERSTAND to narrow in on our concept of flying things: birds, insects and aircraft. It takes only 8 positive and 4 negative examples to teach UNDERSTAND how to interpret sentences as complicated as "The cat ran from the yard because a dog appeared." UNDERSTAND is implemented in 2300 lines of Java.; by Michael Tully Klein, Jr.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.; Includes bibliographical references (p. 49).
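The flying-things example above can be sketched as generalization over feature sets: take the most specific description shared by all positive examples and confirm it excludes every negative. The feature vocabulary below is an invented illustration; UNDERSTAND's actual Lattice-Learning operates over richer linguistic structures.

```python
# Hypothetical sketch of learning a concept from positive and negative
# examples. Feature sets are invented for illustration only.

features = {
    "robin":      {"animal", "bird", "has_wings", "flies"},
    "bee":        {"animal", "insect", "has_wings", "flies"},
    "helicopter": {"machine", "aircraft", "has_rotor", "flies"},
    "cat":        {"animal", "mammal", "has_legs"},
    "worm":       {"animal", "has_no_limbs"},
    "boat":       {"machine", "vehicle", "floats"},
}

def generalize(positives, negatives):
    """Most specific hypothesis covering all positives: the intersection
    of their feature sets, checked against every negative example."""
    hypothesis = set.intersection(*(features[p] for p in positives))
    # No negative example may satisfy the whole hypothesis.
    assert all(not hypothesis <= features[n] for n in negatives)
    return hypothesis

concept = generalize(["robin", "bee", "helicopter"], ["cat", "worm", "boat"])
print(concept)  # {'flies'}
```

Intersecting the positives strips away "bird", "insect", and "aircraft", leaving only the shared property, which is exactly the narrowing-in behavior the abstract describes.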
Thesis (Ph. D.)--University of Rochester. Dept. of Computer Science, 1994.; The process of adding to the common ground between conversational participants (called grounding) has previously been either oversimplified or studied in an off-line manner. This dissertation presents a computational theory, in which a protocol is presented which can be used to determine, for any given state of the conversation, whether material has been grounded or what it would take to ground the material. This protocol is related to the mental states of participating agents, showing the motivations for performing particular grounding acts and what their effects will be. We extend speech act theory to account for levels of action both above and below the sentence level, including the level of grounding acts described above. Traditional illocutionary acts are now seen to be multi-agent acts which must be grounded to have their usual effects. A conversational agent model is provided, showing how grounding fits in naturally with the other functions that an agent must perform in engaging in conversation. These ideas are implemented within the TRAINS conversation system. Also presented is a situation-theoretic model of plan execution relations, giving definitions of what it means for an action to begin...
The major problem addressed in this study is to automate the course enrollment process in the Computer Science Curriculum Office, thereby reducing the time spent on this process and increasing the reliability and efficiency of the current system. The approach taken was to perform a requirements analysis, design an automated system using Yourdon's Structured Analysis Method, and implement the design in the C++ programming language. The resulting program enables the curriculum staff to keep information about students, courses, and tracks; generate the enrollment list; and send messages to the students. It also enables the students to prepare their matrices, sign up for courses, get course and track information, and send messages to the curricular officer through the application. The curriculum staff reviewed the program and considered it usable in a real-world environment, but it was not put into service in the curriculum. A future study could install the program on the office computers and test how effective it is.; Turkish Army author.
Approved for public release; distribution unlimited.; A programming system using a hypothetical computer is proposed for
use in teaching machine and assembly language programming courses.
Major components such as monitor, assembler, interpreter, grader and
diagnostics are described. The interpreter is programmed and documented
for use on an IBM 360/67. The interpreter can be used for teaching
machine language programming and can be incorporated into the proposed
programming system.; http://www.archive.org/details/proposedprogramm00aker; Lieutenant Commander, United States Navy
Academic institutions, federal agencies, publishers, editors, authors, and
librarians increasingly rely on citation analysis for making hiring, promotion,
tenure, funding, and/or reviewer and journal evaluation and selection
decisions. The Institute for Scientific Information's (ISI) citation databases
have been used for decades as a starting point and often as the only tools for
locating citations and/or conducting citation analyses. ISI databases (or Web
of Science), however, may no longer be adequate as the only or even the main
sources of citations because new databases and tools that allow citation
searching are now available. Whether these new databases and tools complement
or represent alternatives to Web of Science (WoS) is important to explore.
Using a group of 15 library and information science faculty members as a case
study, this paper examines the effects of using Scopus and Google Scholar (GS)
on the citation counts and rankings of scholars as measured by WoS. The paper
discusses the strengths and weaknesses of WoS, Scopus, and GS, their overlap
and uniqueness, quality and language of the citations, and the implications of
the findings for citation analysis. The project involved citation searching for...
We describe a practical approach for visual exploration of research papers.
Specifically, we use the titles of papers from the DBLP database to create what
we call maps of computer science (MoCS). Words and phrases from the paper
titles are the cities in the map, and countries are created based on word and
phrase similarity, calculated using co-occurrence. With the help of heatmaps,
we can visualize the profile of a particular conference or journal over the
base map. Similarly, heatmap profiles can be made of individual researchers or
groups such as a department. The visualization system also makes it possible to
change the data used to generate the base map. For example, a specific journal
or conference can be used to generate the base map and then the heatmap
overlays can be used to show the evolution of research topics in the field over
the years. As before, the profiles of individual researchers or research groups can be
visualized using heatmap overlays, this time over the journal or conference
base map. Finally, research papers or abstracts easily generate visual
abstracts giving a visual representation of the distribution of topics in the
paper. We outline a modular and extensible system for term extraction using
natural language processing techniques...
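The word-and-phrase similarity underlying the base map can be sketched as co-occurrence counting: two terms are similar when they tend to appear in titles alongside the same companion words. The titles and terms below are invented for illustration and are not drawn from DBLP.

```python
# Hypothetical sketch of co-occurrence similarity between title terms,
# in the spirit of the MoCS base-map construction. Titles are invented.
from collections import Counter
from math import sqrt

titles = [
    "semantic parsing with neural networks",
    "neural networks for image recognition",
    "semantic parsing of natural language",
    "natural language inference with neural networks",
]

def cooccurrence_vector(term):
    """Counts of words appearing in the same title as `term`."""
    vec = Counter()
    for title in titles:
        words = title.split()
        if term in words:
            vec.update(w for w in words if w != term)
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

sim = cosine(cooccurrence_vector("semantic"), cooccurrence_vector("neural"))
print(sim)
```

Terms whose similarity exceeds a threshold would be placed in the same "country" of the map; heatmap overlays then weight the same terms by their frequency in a chosen venue or author's titles.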
The Unified Modeling Language (UML) is commonly used in introductory Computer
Science to teach basic object-oriented design. However, there appears to be a
lack of suitable software to support this task. Many of the available programs
that support UML focus on developing code and not on enhancing learning. Those
that were designed for educational use sometimes have poor interfaces or are
missing common and important features, such as multiple selection and
undo/redo. There is a need for software that is tailored to an instructional
environment and has all the useful and needed functionality for that specific
task. This is the purpose of minimUML. minimUML provides a minimum amount of
UML, just what is commonly used in beginning programming classes, while
providing a simple, usable interface. In particular, minimUML was designed to
support abstract design while supplying features for exploratory learning and
error avoidance. In addition, it allows for the annotation of diagrams, through
text or freeform drawings, so students can receive feedback on their work.
minimUML was developed with the goals of ease of use, support for
novice students, and no requirement of prior training for its use.; Comment: 38 pages, 15 figures
Functional programming languages are seen by many as instrumental to
effectively utilizing the computational power of multi-core platforms. As a
result, there is growing interest to introduce functional programming and
functional thinking as early as possible within the computer science
curriculum. Bricklayer is an API, written in SML, that provides a set of
abstractions for creating LEGO artifacts which can be viewed using LEGO Digital
Designer. The goal of Bricklayer is to create a problem space (i.e., a set of
LEGO artifacts) that is accessible and engaging to programmers (especially
novice programmers) while providing an authentic introduction to the functional
programming language SML.; Comment: In Proceedings TFPIE 2014, arXiv:1412.4738
In this dissertation, we present LaSCO, the Language for Security Constraints
on Objects, a new approach to expressing security policies using policy graphs
and present a method for enforcing policies so expressed. Other approaches for
stating security policies fall short of what is desirable with respect to
either policy clarity, executability, or the precision with which a policy may
be expressed. However, LaSCO is designed to have those three desirable
properties of a security policy language as well as: relevance for many
different systems, statement of policies at an appropriate level of detail,
user friendliness for both casual and expert users, and amenability to formal
reasoning. In LaSCO, the constraints of a policy are stated as directed graphs
annotated with expressions describing the situation under which the policy
applies and what the requirement is. LaSCO may be used for such diverse
applications as executing programs, file systems, operating systems,
distributed systems, and networks.
Formal operational semantics have been defined for LaSCO. An architecture for
implementing LaSCO on any system is presented, along with an implementation of
the system-independent portion in Perl. Using this, we have implemented LaSCO
for Java programs...
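The condition/requirement pairing described above can be sketched as follows: a policy applies when its situation predicate holds, and is satisfied only if its requirement also holds. The event fields and the example policy are invented for illustration; LaSCO itself expresses these as directed graphs annotated with expressions over objects, not as Python predicates.

```python
# Hypothetical sketch of LaSCO-style policy checking: a situation
# predicate (when the policy applies) paired with a requirement
# predicate (what must then hold). Event structure is invented.

def make_policy(applies, requires):
    def check(event):
        """True unless the policy applies and its requirement is violated."""
        return (not applies(event)) or requires(event)
    return check

# Example policy: whenever a process opens a file marked sensitive,
# the requesting user must hold the 'admin' privilege.
policy = make_policy(
    applies=lambda e: e["op"] == "open" and e["file_sensitive"],
    requires=lambda e: "admin" in e["user_privs"],
)

ok  = {"op": "open", "file_sensitive": True, "user_privs": {"admin"}}
bad = {"op": "open", "file_sensitive": True, "user_privs": {"user"}}
print(policy(ok), policy(bad))  # True False
```

Enforcement in the dissertation's architecture amounts to evaluating such checks at the system-specific events the policy graph matches, which is why the same policy language can target programs, file systems, or networks.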
While application software does the real work, domain-specific languages
(DSLs) are tools to help produce it efficiently, and language design assistants
in turn are meta-tools to help produce DSLs quickly. DSLs are already in wide
use (HTML for web pages, Excel macros for spreadsheet applications, VHDL for
hardware design, ...), but many more will be needed for both new as well as
existing application domains. Language design assistants to help develop them
currently exist only in the basic form of language development systems. After a
quick look at domain-specific languages, and especially their relationship to
application libraries, we survey existing language development systems and give
an outline of future language design assistants.; Comment: To be presented at SSGRR 2000, L'Aquila, Italy
This study examines the use of Facebook Markup Language (FBML) to design an
e-learning model to facilitate teaching and learning in an academic setting.
The qualitative research study presents a case study of how Facebook is used
to support collaborative activities in higher education. We used FBML to design
an e-learning model called Processes for E-Learning Resources in the Specialist
Learning Resources Diploma (SLRD) program. Two groups were drawn from the SLRD
program: participants in the treatment group and participants in the control
group. A statistical analysis in the form of a t-test was
used to compare the dependent variables between the two groups. The findings
show a difference in the mean score between the pre-test and the post-test for
the treatment group (achievement, skills, and trends). Our findings suggest that
the use of FBML can support collaborative knowledge creation and improve the
academic achievement of participants. The findings are expected to provide
insights into promoting the use of Facebook in a learning management system
(LMS).; Comment: Mohammed Amasha, Salem Alkhalaf, "The Effect of using Facebook Markup
Language (FBML) for Designing an E-Learning Model in Higher Education".
International Journal of Research in Computer Science...