Department of Computer Science & Engineering
WILLIAM J. RAPAPORT'S RESEARCH INTERESTS
5 November 2002

Artificial Intelligence | Philosophy of Mind | Cognitive Science |
Philosophy of Language | Computational Linguistics | Critical Thinking |
Knowledge Representation | Cognitive Development | Logic
For my views on what cognitive science is, see:

Rapaport, William J. (2000), "Cognitive Science", in Anthony Ralston, Edwin D. Reilly, & David Hemmendinger (eds.), Encyclopedia of Computer Science, 4th edition (New York: Grove's Dictionaries): 227-233.
My research falls into two broad categories. The first has grown directly out of my M.S. thesis in Computer Science (at SUNY Buffalo, under the direction of Stuart C. Shapiro), which, in turn, grew directly out of my Ph.D. dissertation in Philosophy (at Indiana University, under the direction of Hector-Neri Castañeda) and my subsequent research in philosophy of language.
I began my research career with a study of the theory of the objects of thought, developed by Alexius Meinong, a turn-of-the-century Austrian philosopher and psychologist. My Ph.D. dissertation consisted of a reworking and formalization of Meinong's theory. As a philosopher (at SUNY Fredonia), my areas of specialization were philosophy of language, philosophy of mind, and philosophy of logic.
It was suggested to me that a knowledge of AI was as necessary for a philosopher of mind as a knowledge of, say, quantum mechanics was for a philosopher of physics. Accordingly, I began a study of computer science. My M.S. thesis (in Computer Science) consisted of an application of the theory of quasi-indexical reference (from philosophy of language) and of Meinong's theory of objects to a computational theory of natural-language understanding.
This resulted in two NSF grants ("Logical Foundations for Belief Representation" and "Cognitive and Computer Systems for Understanding Narrative").
For papers discussing this aspect of my research, see:
Thus, from the start, my research in AI has been a continuation and elaboration of research in philosophy (my own and that of others); this is why I consider my research area to be cognitive science.
Development of a computational cognitive agent.
Our research group's computational theory of cognition is implemented in SNePS (the Semantic Network Processing System), a fully intensional, propositional, knowledge-representation, reasoning, and acting system developed by Stuart C. Shapiro and coworkers. SNePS is used at many sites around the world for studies in expert systems, knowledge representation, reasoning, natural-language understanding and generation, belief revision, planning and acting, and cognitive modeling.
Our long-term goal is to design and construct a natural-language-using, computerized cognitive agent, and to carry out the research in artificial intelligence, computational linguistics, and cognitive science necessary for that endeavor. The group's three-part focus is on knowledge representation, reasoning, and natural-language understanding and generation. The group is widely known for its development of the SNePS knowledge-representation and reasoning system and of Cassie, its computerized cognitive agent.
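To give a concrete (though purely illustrative) sense of what "propositional" means here, the following minimal Python sketch treats propositions as first-class nodes, so an agent can hold beliefs about beliefs. It does not use the actual SNePS system or its syntax, and all names and data in it are invented for the example.

    # A toy propositional "belief space", in the spirit of (but not using) SNePS:
    # propositions are themselves nodes, so they can be arguments of other
    # propositions (e.g., beliefs about beliefs).

    from dataclasses import dataclass
    from typing import Tuple, Union

    @dataclass(frozen=True)
    class Individual:
        name: str                      # e.g., "Cassie", "Lucy"

    @dataclass(frozen=True)
    class Proposition:
        relation: str                  # e.g., "Owns", "Believes"
        args: Tuple["Node", ...]       # arguments: individuals or propositions

    Node = Union[Individual, Proposition]

    class BeliefSpace:
        """The set of proposition nodes the agent currently believes."""
        def __init__(self):
            self.beliefs = set()

        def assert_belief(self, prop: Proposition) -> None:
            self.beliefs.add(prop)

        def believes(self, prop: Proposition) -> bool:
            return prop in self.beliefs

    # Example: the agent believes that Lucy owns Rover, and also believes the
    # (meta-)proposition that Cassie believes that Lucy owns Rover.
    cassie, lucy, rover = Individual("Cassie"), Individual("Lucy"), Individual("Rover")
    owns = Proposition("Owns", (lucy, rover))
    meta = Proposition("Believes", (cassie, owns))

    space = BeliefSpace()
    space.assert_belief(owns)
    space.assert_belief(meta)
    print(space.believes(owns), space.believes(meta))   # True True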
Summaries of our work can be found in:
Contextual vocabulary acquisition: From algorithm to curriculum
Since 7/2001, funded by an NSF ROLE pilot project and working jointly with Michael W. Kibby (of the Department of Learning and Instruction), I have been investigating "contextual vocabulary acquisition" (CVA): the active, deliberate acquisition of word meanings from text by reasoning from contextual cues, background knowledge, and hypotheses developed from prior encounters with the word, but without external sources of help such as dictionaries or people. The ultimate goal is not merely to improve vocabulary acquisition, but also to increase students' reading comprehension of science, technology, engineering, and mathematics (STEM) texts, thereby leading to increased learning, by using a "miniature" (but real) example of the scientific method, viz., CVA. The computational and educational strands of the research are fully integrated and jointly serve this ultimate goal. This project will ...
People know the meanings of more words than they are explicitly taught, so they must have learned most of them as a by-product of reading or listening. Some of this is the result of active processes of hypothesizing the meaning of unknown words from context. How do readers do this? Most published strategies are quite vague; one simply suggests to "look" and "guess". This vagueness stems from a lack of relevant research about how context operates. There is no generally accepted cognitive theory of CVA, nor is there an educational curriculum or set of strategies for teaching it. If we knew more about how context operates, had a better theory of CVA, and knew how to teach it, we could more effectively help students identify context cues and know better how to use them.
AI studies of CVA (including ours) have necessarily gone into much more detail on what underlies the unhelpful advice to "guess", since natural-language-processing systems must operate on unconstrained input text independently of humans and can't assume a "fixed complete lexicon". But they have largely been designed to improve practical natural-language-processing (NLP) systems. Few, if any, have been applied in an educational setting, and virtually all have been ignored in the reading- and vocabulary-education literature. AI algorithms for CVA can fill in the details that can turn "guessing" into "computing"; these can then be taught to students.
Thus, the importance of this project stems from the twin needs (a) for NLP systems that operate independently of human assistance and (b) to improve both the teaching of reading and students' reading ability (especially in STEM). Hence, this multidisciplinary proposal combines basic and applied research. Its theoretical significance comes from the development of an NLP system that does CVA. Its educational significance lies in whether the knowledge gained by developing this system can be applied to teaching CVA strategies to students so that they are able to use them successfully when they encounter hard words in their regular reading of STEM texts. The project is also distinctive in its proposed use of mutual feedback between the development of the computational theory and the educational curriculum, making this a true cognitive-science project.
The two-way flow of research results between the education and AI teams will continue, with the education team providing data for improving the definition algorithms and the AI team providing the algorithms to be converted into a curriculum.
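By way of illustration only (this is a toy, not the group's actual definition algorithm), the sketch below shows one way that "guessing" can become "computing": cues extracted from successive contexts are merged into a running hypothesis about an unknown word, and the best-supported value for each slot is reported as the current definition. Cue extraction itself is reduced here to hand-supplied pairs.

    # A toy illustration of contextual vocabulary acquisition (CVA): revise a
    # hypothesis about an unknown word as new textual contexts are encountered.
    # This is NOT the group's actual definition algorithm; cue extraction is
    # reduced to hand-coded (slot, value) pairs for simplicity.

    from collections import Counter, defaultdict

    def revise_hypothesis(hypothesis, cues):
        """Merge cues from one context into the running hypothesis.

        hypothesis: dict mapping definition slots (e.g., 'class', 'property')
                    to Counters of candidate values with evidence counts.
        cues:       iterable of (slot, value) pairs extracted from the context.
        """
        for slot, value in cues:
            hypothesis[slot][value] += 1
        return hypothesis

    def current_definition(hypothesis):
        """Report the best-supported value for each slot: the computed 'guess'."""
        return {slot: values.most_common(1)[0][0]
                for slot, values in hypothesis.items() if values}

    # Three successive encounters with the unfamiliar word "brachet"
    # (cues invented for the example):
    hypothesis = defaultdict(Counter)
    revise_hypothesis(hypothesis, [("class", "physical object")])
    revise_hypothesis(hypothesis, [("class", "animal"), ("property", "white")])
    revise_hypothesis(hypothesis, [("class", "animal"), ("behavior", "bays at a hart")])

    print(current_definition(hypothesis))
    # {'class': 'animal', 'property': 'white', 'behavior': 'bays at a hart'}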
The AI team will:
For more information and references, see the CVA website.
This research evolved from ...
This group applies the methods of linguistics to analyze the determining effects of the lexicon and of grammar on the organization of discourse; the methods of psychology to track the cognitive processing involved as a discourse progresses; the methods of computer science to model the properties of discourse structure as well as the representation and updating of the "story world" in an unfolding narrative; and the methods of the field of communicative disorders to ascertain the discourse characteristics of autistic or other communication-impaired individuals and what this reveals about the structure of standard discourse.
For a summary of this group's research, see:
Duchan, Judith Felson; Bruder, Gail A.; & Hewitt, Lynne E. (eds.) (1995), Deixis in Narrative: A Cognitive Science Perspective (Hillsdale, NJ: Lawrence Erlbaum Associates).
Most recently, I have been developing these theories into a book on the nature of understanding. What does it mean to understand language? "Semantic" understanding is a correspondence between two domains; a cognitive agent understands one of those domains in terms of the other. But how is that other domain understood? Recursively, in terms of yet another. But, since recursion needs a base case, there must be a domain that is not understood in terms of another; so it must be understood in terms of itself. How? Syntactically! Put briefly, bluntly, and a bit paradoxically, semantic understanding is syntactic understanding. Thus, any cognitive agent (including a computer) capable of syntax (symbol manipulation) is capable of understanding language.
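The recursive shape of this argument can be made explicit with a small, purely schematic sketch (the domain names below are invented for the example):

    # A schematic rendering of the argument's recursive structure: each domain
    # is understood in terms of another, until a base domain is reached that
    # can only be understood in terms of itself, i.e., syntactically.

    def understand(domain, interpretations):
        """Trace how `domain` is understood.

        interpretations: dict mapping a domain to the domain used to interpret
        it; a domain with no entry is a base domain.
        """
        if domain not in interpretations:          # base case of the recursion
            return f"{domain} is understood syntactically, in terms of itself"
        other = interpretations[domain]
        return (f"{domain} is understood semantically, in terms of {other}; "
                + understand(other, interpretations))

    # Hypothetical chain: natural language -> the mental domain (base case).
    print(understand("natural language", {"natural language": "the mental domain"}))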
The purpose of my book is to present arguments for this position and to investigate its implications. Chapters discuss:

- models and semantic theories (with critical evaluations of work by Arturo Rosenblueth and Norbert Wiener, Brian Cantwell Smith, and Marx W. Wartofsky),
- the nature of "syntactic semantics" (including the relevance of Antonio Damasio's cognitive neuroscientific theories),
- conceptual-role semantics (with critical evaluations of work by Jerry Fodor and Ernest Lepore, Gilbert Harman, David Lewis, Barry Loewer, William G. Lycan, Timothy C. Potts, and Wilfrid Sellars),
- the role of negotiation in interpreting communicative acts (including evaluations of theories by Jerome Bruner and Patrick Henry Winston),
- Hilary Putnam's and Jerry Fodor's views of methodological solipsism,
- implementation and its relationships with such metaphysical concepts as individuation, instantiation, exemplification, reduction, and supervenience (with a study of Jaegwon Kim's theories),
- John Searle's Chinese-Room Argument and its relevance to understanding Helen Keller (and vice versa), and
- Herbert Terrace's theory of naming as a fundamental linguistic ability unique to humans.

Throughout, reference is made to our implemented computational theory of cognition: a computerized cognitive agent implemented in SNePS.
The draft of my book is available as:

Rapaport, William J., Understanding Understanding: Semantics, Computation, and Cognition (in preparation); pre-printed as Technical Report 96-26 (Buffalo: SUNY Buffalo Department of Computer Science).
Published papers arising from this project include:
Philosophy: A Theory of Syntactic Semantics
The second broad area of my research has been to apply insights from computer science and AI to problems in philosophy, in particular to the question of whether computers will be able to "think". Lately, these two projects have begun to overlap: in order to complete the argument that computers are, in principle, capable of understanding natural language, I have investigated the way in which meaning (semantics) and syntax interrelate, and this has turned out to require a philosophical analysis of my other major research project. Thus, my AI research and my philosophical research have mutually supported and clarified each other.
His research interests are in cognitive science, knowledge representation, and computational linguistics. He has published articles in artificial intelligence, cognitive science, computational linguistics, philosophy of mind, and philosophy of language; received the American Philosophical Quarterly Essay Prize (1982); and is co-author of a text, Logic: A Computer Approach (McGraw-Hill, 1985). He is (or has been) on the editorial boards of the journals Computational Linguistics, Machine Translation, and Noûs, and of the Kluwer book series Studies in Cognitive Systems. He recently retired as Review Editor of the journal Minds and Machines and was elected first president of the Society for Minds and Machines (1991-1993).
He has supervised or is currently supervising 4 Ph.D. dissertations and 22 master's degrees in computer science, and has served on, or been outside reader for, 33 Ph.D. committees in computer science, linguistics, philosophy, psychology, and education, and 4 master's committees in dental education, linguistics, and philosophy. He has received grants and fellowships from NSF, NEH, and the Research Foundation of SUNY, for work on cognitive and computer systems for understanding narrative text, the logical foundations of belief representation, natural-language semantics, and contextual vocabulary acquisition. He has served on the American Philosophical Association Committee on Pre-College Instruction in Philosophy and the APA Committee on Computers and Philosophy.