Last Update: Sunday, 26 May 2024
Abstract: Paper folding can help in understanding some infinite sequences and in finding their limits. A simple physical model useful at all levels of ability is presented, and infinite sequences of interest to senior high-school students are explored.
Abstract:
I argue that
George Nakhnikian's analysis of the logic of
cogito propositions (roughly, Descartes's 'cogito' and
'sum') is incomplete. The incompleteness is rectified by showing
that disjunctions of cogito propositions with contingent,
non-cogito propositions satisfy conditions of incorrigibility,
self-certifyingness, and pragmatic consistency; hence, they belong to
the class of propositions with whose help a complete characterization of
cogito propositions is made possible.
In the first chapter, I present the data that are my starting point.
These are taken from problems concerning the form of negative
existential propositions, the truth of sentences with non-referring
subject terms, the nature of genuine identity, the uniform nature of
ordinary language with respect to fact and fiction, scientific language,
the phenomenon of intentionality, and Fregean problems of sense and
reference. By looking at the data and problems in their own contexts, I
hope to understand better what Meinong and others were after when they
put forth their theories. I also elucidate the nature of the problems
of natural-language semantics and psychological discourse that are my
goals, show why they are important, and discuss the inadequacies of
current theories about them. I conclude with a list of criteria that
any theory offered as a unifying solution to these problems will have to
meet.
In the second chapter, I undertake a careful examination of Meinong's
Theory of Objects, which I show to be adequate to the criteria of the previous
chapter. I begin with a relatively informal exegesis of his principal
themes and theses, pointing out some problems and tensions along the
way. I then turn to a more formal presentation of the theory, revised
in order to resolve some of its tensions and amended to take into
account the data of the previous chapter. I also take a brief look at
some of the historical precedents of Meinong's theory.
In the third chapter, I examine critically two other contemporary
theories that satisfy the main theses of "Meinongian" theories (and are
therefore adequate to the criteria of the first chapter). One, due to
Terence
Parsons,
is an explicit attempt to reconstruct Meinong's theory
and, thus, shares similar goals with the revised theory developed in the
previous chapter, but it makes little or no attempt to fit such a
theory to any external data. The other, due to
Hector-Neri Castañeda, is an original construction of a theory "made to
measure" for such data, though not explicitly Meinongian.
In the final chapter, I take stock of the adequacies and inadequacies of
my revision of Meinong's theory, for which I make three claims: It is a
theory in the spirit of Meinong's (and, for historical purposes, accepts
uncritically some of his assumptions); it is more coherent than the
original, in that it can withstand the objections raised against
Meinong's own version; and it takes into account more of the data, and
in a more explicit fashion, than the original. I conclude with a
discussion of some of the weaknesses of my system and make some
suggestions about the direction of future research.
Abstract:
This essay re-examines Meinong's "Über Gegenstandstheorie" and
undertakes a clarification and revision of it that is faithful to
Meinong, overcomes the various objections to his theory, and is capable
of offering solutions to various problems in philosophy of mind and
philosophy
of language. I then turn to a discussion of a historically and
technically interesting Russell-style paradox (now known as "Clark's
Paradox") that arises in the modified
theory. I also examine the alternative Meinong-inspired theories of
Hector-Neri Castañeda and
Terence Parsons.
Abstract:
A fundamental assumption of
Alexius Meinong's
1904
Theory of Objects
is the act-content-object analysis of psychological experiences. I
suggest that Meinong's theory need not be based on this analysis,
but that an adverbial theory might suffice. I then defend the
adverbial alternative against an objection raised by
Roderick Chisholm, and conclude by presenting an
apparently more serious objection based on a paradox discovered by
Romane Clark.
Abstract:
A description of a double-credit, 2-semester course, "Effective Thinking
and Communicating", part of
SUNY Fredonia's General-Liberal
Education Program, taught by faculty from English, Philosophy,
Mathematics, and Education.
Abstract:
A syntactical phenomenon common to logics of commands, of questions, and
some deontic logics is investigated using techniques of algebraic
logic.
The phenomenon is simple to describe. In terms of questions,
the result of combining an indicative sentence (e.g., 'It is raining')
with an interrogative sentence (e.g., 'Should I go home?') in (say) a
conditional construction is an interrogative ('If it is raining, should
I go home?'). Similarly combining an indicative with a command sentence
(e.g., 'Go home!') results in a command sentence ('If it is raining, go
home!'). In the deontic logic proposed by
Hector-Neri Castañeda (in
The Structure of Morality
and in
Thinking and
Doing),
the result of thus combining an indicative
with a "practitive" is a practitive.
These syntactical facts are reminiscent of scalar multiplication in
vector spaces: The product of a scalar and a vector is a vector.
However, neither vector spaces nor modules are general enough to serve
as appropriate algebraic analogues of these logics.
Taking the sentential (i.e., nonquantificational, nonmodal) fragment
C of Castañeda's deontic logic as a paradigm, it is
proposed that the relevant algebraic structure is a "dominance algebra"
(DA), where ⟨M, R, I, E⟩ is a
dominance algebra (over R) iff (i) R is an abstract
algebra, (ii) M is non-empty, (iii) I is a non-empty
subset of M^(M^n) (n ∈ ω), and (iv)
E is a non-empty subset of M^((B^n ×
M^m) ∪ (M^m × B^n))
(m, n ∈ ω). (Modules are special cases of
dominance algebras.) It is proved that the Lindenbaum algebra
corresponding to C is a "double Boolean DA" (DBDA) (viz., one in
which M and R are Boolean algebras), soundness and
completeness theorems for C are obtained, and a representation
theorem for DBDAs is proved.
The paper concludes with some
generalizations of DAs and some remarks on their relevance to Montague-style
grammars.
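The dominance-algebra conditions stated in the abstract are easier to scan in display form (using the abstract's own symbols, including B as it appears there):

```latex
\langle M, R, I, E \rangle \text{ is a DA over } R \iff
\begin{cases}
\text{(i) } R \text{ is an abstract algebra,}\\
\text{(ii) } M \neq \varnothing,\\
\text{(iii) } \varnothing \neq I \subseteq M^{M^{n}} \quad (n \in \omega),\\
\text{(iv) } \varnothing \neq E \subseteq M^{(B^{n} \times M^{m})\, \cup\, (M^{m} \times B^{n})} \quad (m, n \in \omega).
\end{cases}
```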
Abstract:
Natural languages differ from most formal languages in having a partial,
rather than a total, semantic interpretation function; e.g., some noun
phrases don't refer. The usual semantics for handling such noun phrases
(e.g.,
Russell,
Quine)
require syntactic reform. The alternative presented here is semantic
expansion, viz., enlarging the range of the interpretation function to
make it total. A specific ontology based on
Alexius Meinong's
Theory of Objects, which can serve as domain of
interpretation, is suggested, and related to the work of
Hector-Neri
Castañeda,
Gottlob Frege,
Jerrold J. Katz &
Jerry Fodor,
Terence
Parsons, and
Dana Scott.
Abstract:
Philosophy has been characterized (e.g., by
Benson Mates) as a field whose problems
are unsolvable. This has often been taken to mean that there can be no
progress in philosophy as there is in mathematics or science. The
nature of problems and solutions is considered, and it is argued that
solutions are always parts of theories, hence that acceptance of a
solution requires commitment to a theory (as suggested by
William Perry's scheme of cognitive development).
Progress can be had in philosophy in the same way as in mathematics and
science by knowing what commitments are needed for solutions. Similar
views of
Rescher
and Castañeda are discussed.
Abstract:
Alexius Meinong developed notions of "defective objects"
and "logical space" (in his
On Emotional Presentation) in order to account for
logical paradoxes
(the
Liar,
Russell's) and psychological paradoxes
(Mally's
"self-presentation" paradox). These notions presage work by Herzberger
and Kripke, but fail to do the job they were designed for. However, a
technique implicit in Meinong's investigation is more successful and can
be adapted to resolve a similar paradox discovered by
Romane Clark in a
revised version of Meinong's
Theory of Objects. One family of paradoxes remains, but
these are unavoidable and relatively harmless.
First paragraph:
Richard Routley's
Exploring Meinong's Jungle and Beyond is a lengthy work of wide
scope, its cast of characters ranging from
Abelard
to
Zeno.
The nominal star is
Meinong,
yet the real hero is
Reid.
Topically, Routley presents us with a virtual encyclopedia of
contemporary philosophy, containing original philosophical and logical
analyses, as well as a valuable historical critique of Meinong's work.
Abstract:
We discuss how a deductive question-answering system can represent the
beliefs or other cognitive states of users, of other (interacting)
systems, and of itself. In particular, we examine the representation
of first-person beliefs of others (e.g., the system's
representation of a user's belief that he himself is rich). Such
beliefs have as an essential component "quasi-indexical" pronouns
(e.g., `he himself'), and, hence, require for their analysis a method of
representing these pronominal constructions and performing valid
inferences with them. The theoretical justification for the approach to
be discussed is the representation of nested "de dicto" beliefs
(e.g., the system's belief that user-1 believes that system-2 believes
that user-2 is rich). We discuss a computer implementation of these
representations using
the
Semantic Network Processing
System (SNePS) and an augmented-transition-network
parser-generator with a question-answering capability.
(A longer version of this paper appears as
Rapaport 1986.)
Abstract:
A popularization of Rapaport 1982, discussing whether
progress can be made in solving three
classical philosophical problems (free will vs. determinism, skepticism,
and the Liar paradox). Discusses
Perry's
cognitive-developmental scheme.
Abstract:
At least one of the goals of philosophy education at all levels, but
perhaps especially in elementary and secondary schools, ought to be the
fostering of the students' development of analytical and critical
thinking skills. This might come about in courses in general
philosophy, in philosophy units that are parts of courses in other
subjects, in ethics courses, or in
courses explicitly devoted to
critical logic. In this brief note, I call attention to
William
Perry's theory of cognitive development that, while it is most
appropriate for college students, is also relevant to "pre-college"
students; discuss its implications for critical thinking programs; and
offer some suggestions for further reading for teachers concerned about
these implications.
Abstract:
This report shows how a deductive question-answering system
can represent the beliefs or other cognitive states of
users, of other (interacting) systems, and of itself. In
particular, it examines the representation of first-person
beliefs of others (e.g., the system's representation of a
user's belief that he himself is rich). Such beliefs have
as an essential component "quasi-indexical pronouns" (e.g.,
'he himself'), and, hence, require for their analysis a
method of representing these pronominal constructions and
performing valid inferences with them. The theoretical justification for the approach is the representation of nested
"de dicto" beliefs (e.g., the system's belief that user-1
believes that system-2 believes that user-2 is rich). A
computer implementation of these representations is provided, using the
Semantic Network Processing System (SNePS)
and an ATN parser-generator with a question-answering capability.
Summary:
Dennett (Brainstorms) claims that a necessary condition for being a person is being
the object of an intentional stance. Dibrell claims that
really having intentionality is also necessary. I claim that
Dibrell offers at best a weak argument for his position, that an
argument can be given for something like his position, but
that that position is consistent with Dennett's. Intentionality
is special:
In order for it to be possible for an entity to be treated as if it were
intentional, it must simulate, hence actually have, intentionality.
Abstract:
This paper surveys several approaches to semantic-network semantics that
have not previously been treated in the AI or computational-linguistics
literature, though there is a large philosophical literature
investigating them in some detail. In particular, propositional
semantic networks (exemplified by
SNePS) are discussed, it
is argued that only a fully intensional ("Meinongian") semantics is
appropriate for them, and several Meinongian systems are presented.
Abstract:
This text uses concepts from computer science to cover all the
traditional topics in an introductory deductive logic course. However,
in this
book, unlike
traditional uses of computers in education, we "explain" to the
computer what is to be done. It is, thus, the student who has the
active role in the learning process.
Abstract:
Terence
Parsons's informal theory of intentional objects, their properties, and
modes of predication does not adequately reflect ordinary ways of
speaking and thinking. Meinongian theories recognizing two modes of
predication are defended against Parsons's theory of two kinds of
properties. Against Parsons's theory of fictional objects, I argue that
no existing entities appear in works of fiction. A formal version of
Parsons's theory is presented, and a curious consequence about modes of
predication is indicated.
Abstract:
In
Searle's
Chinese-Room
Argument, he says that "only something having the
same causal powers as brains can have intentionality", but he does not
specify what these causal powers are, and this is the biggest gap in his
argument. In his book, Intentionality, he says that "mental
states are both caused by the operations of the brain and
realized in the structure of the brain". A careful analysis
of these two notions reveals (1) what the requisite causal powers are,
(2) what is wrong with his claim about mental states, and (3) what is
wrong with his overall argument.
(A longer version of this paper appears as Rapaport 1988a.)
Abstract:
This essay examines the role of non-existent objects
in "epistemological ontology"--the study of the
entities that make thinking possible. An earlier
revision of
Meinong's
Theory of Objects
is reviewed,
Meinong's notions of Quasisein and Aussersein are discussed, and
a theory of Meinongian objects as "combinatorially
possible" entities is presented.
Abstract:
Uses the theory of abstract data types to contradict
John
Searle's
Chinese-Room
Argument.
Abstract:
A critique of several recent objections to
John
Searle's
Chinese-Room
Argument
against the possibility of "strong AI"
is presented. The objections are found to miss the point,
and a stronger argument against Searle is presented, based
on a distinction between "syntactic" and "semantic"
understanding.
First paragraph:
In Fall 1983, I offered a junior/senior-level course in Philosophy of
Artificial Intelligence, in the Department of Philosophy at
SUNY Fredonia,
after returning there from a year's leave to study and do
research in
computer science and
artificial intelligence at
SUNY Buffalo.
Of the 30 students enrolled, most were computer-science
majors, about a third had no computer background, and only a handful
had studied any philosophy. This article describes that course,
provides material for use in such a course (including an
"Artificial IQ"
test), and offers a bibliography of relevant articles in the
AI, cognitive science, and philosophical literature.
Abstract:
We present a formal syntax and semantics for
SNePS considered as the
(modeled) mind of a cognitive agent. The semantics is based on a
Meinongian theory of the intensional objects of thought that is
appropriate for AI considered as "computational philosophy" or
"computational psychology".
Abstract:
Belief reports can be interpreted de re or de dicto;
we investigate the disambiguation of belief reports as they appear in
discourse and narrative. In earlier work,
representations for de
re and de dicto belief reports were presented, and the
distinction between de
re and de dicto belief reports was made solely on the basis
of their representations. This analysis is sufficient only when belief
reports are considered in isolation. We need to consider more
complicated belief structures, in addition to those presented earlier,
in order to sufficiently represent de
re and de dicto belief reports as they appear in discourse
and narrative. Further, we cannot meaningfully apply one, but not the
other, of the concepts de
re and de dicto to these more complicated belief structures.
We argue that the concepts de
re and de dicto do not apply to an agent's conceptual
representation of her beliefs, but that they apply to the utterance of a
belief report on a specific occasion. A cognitive agent interprets a
belief report such as "S believes that N is F", or "S
said, `N is F'"
(where S and N
are names or descriptions, and F is an adjective) de
dicto if she interprets it from N's perspective, and she interprets
it de re if she interprets it from her own perspective.
Abstract:
This essay presents a philosophical and computational theory of the
representation
of de
re, de dicto, nested, and quasi-indexical belief reports
expressed in natural language. The propositional
Semantic Network
Processing System (SNePS) is used for representing and reasoning about
these reports. In particular, quasi-indicators (indexical expressions
occurring in intentional contexts and representing uses of indicators
by another speaker) pose problems for natural-language representation
and reasoning systems, because--unlike pure indicators--they cannot be
replaced by coreferential NPs without changing the meaning of the
embedding sentence. Therefore, the referent of the quasi-indicator must
be represented in such a way that no invalid coreferential claims are
entailed. The importance of quasi-indicators is discussed, and it is
shown that all four of the above categories of belief reports can be
handled by a single representational technique using belief spaces
containing intensional entities. Inference rules and belief-revision
techniques for the system are also examined.
(A shorter version of this paper appeared as
Rapaport & Shapiro 1984.
Both are based on my SUNY Buffalo M.S. thesis.)
Abstract:
This article contains reports from the various research groups in the
SUNY Buffalo Department of Computer Science, Vision Group, and Graduate
Group in Cognitive Science. It is organized by the different research
topics. However, it should be noted that the individual projects might
also be organized around the methodologies and tools used in the
research, and, of course, many of the projects fall under more than one
category.
Abstract:
This paper presents a computational analysis of
de re,
de dicto,
and
de se
belief and knowledge reports. The analysis solves a problem
first observed by
Castañeda, namely,
that the simple rule that (A knows that P) implies P
does not
hold if P contains a quasi-indicator. A
single rule is presented, in the context of an AI representation and
reasoning system, that holds for all propositions P, including
quasi-indexical ones. In so doing,
the importance of
representing proper names explicitly is demonstrated, and
support is provided for the necessity of considering sentences in the
context
of extended text (e.g., discourse or narrative)
in order to fully capture certain features of
their semantics.
Abstract:
This research program consists of a group of projects whose goals are to
develop a psychologically real model of a cognitive agent's
comprehension of deictic information in narrative text. We will test
the hypothesis that the construction and modification of a "deictic
center"--the locus in conceptual space-time of the characters, objects,
and events depicted by the sentences currently being perceived--is
important for comprehension. To test this hypothesis, we plan to develop
a computer system that will "read" a narrative and answer questions
concerning the agent's beliefs about the objects, relations, and events
in the narrative. The final system will be psychologically real,
because the details of the algorithms and the efficacy of the linguistic
devices will be validated by psychological experiments on normal and
abnormal comprehenders. This project will lead to a better
understanding of how people comprehend narrative text, it will advance
the state of machine understanding, and it will provide insight into the
nature of comprehension disorders and their potential remediation.
Abstract:
The purpose of this essay is to exhibit in detail the setting for the
version of the Cogito Argument that appears in Descartes's
Meditations. I believe that a close reading of the text can shed
new light on the nature and role of the "evil demon", on the nature of
God as he appears in the first few Meditations, and on the place of the
Cogito Argument in Descartes's overall scheme.
Abstract:
We are undertaking the design and implementation of a computer system
that can parse English sentences containing terms that the system does
not "know" (i.e., that are not in the system's lexicon), build a
semantic-network representation of these sentences, and express its
understanding
of the newly acquired terms by generating English sentences from the
resulting semantic-network database. The system will be a modification
of the natural-language processing capabilities of the
SNePS Semantic Network
Processing System. It is intended to test the thesis that
symbol-manipulating systems (computers) can "understand" natural
language.
Description:
A belief system may be understood as a set of beliefs together with a
set of implicit or explicit procedures for acquiring new beliefs.
Topics covered in this survey article: Reasons for Studying Such
Systems, Types of Theories, Philosophical Background, Surveys of
Theories and Systems.
(Revised as Rapaport 1992a.)
Description:
Topics covered in this brief survey article: The Nature of Logic,
Systems of Logic, Logic and Artificial Intelligence, Guide to Logic
Articles in this Encyclopedia.
(Revised as Rapaport 1992b.)
Description:
Topics covered in this introductory article: The Language of Predicate Logic,
Deductive Systems of Predicate Logic, Extensions of Predicate Logic,
Metatheoretic Results.
(Revised as Rapaport 1992c.)
Description:
Topics covered in this introductory article: Language of Propositional Logic,
Deductive Systems of Propositional Logic.
(Revised as Rapaport 1992d.)
Abstract:
SNePS, the Semantic
Network Processing System, is a semantic-network language with
facilities for building semantic networks to represent virtually any
kind of information, retrieving information from them, and performing
inference with them. Users can interact with SNePS in a variety of
interface languages, including a Lisp-like user language, a menu-based
screen-oriented editor, a graphics-oriented editor, a higher-order-logic
language, and an extendible fragment of English. This article discusses
the syntax and semantics for SNePS considered as an intensional
knowledge representation system, and provides examples of uses of SNePS
for cognitive modeling, database management, pattern recognition, expert
systems, belief revision, and computational linguistics.
Abstract:
A review of
Gareth Matthews's
Philosophy
and the Young Child
(Cambridge, MA:
Harvard University Press, 1980), and
Dialogues with Children (Cambridge, MA:
Harvard University Press, 1984),
with a discussion of
William
Perry's theory of cognitive development.
Abstract:
We give an overview of
natural-language understanding
and machine translation of natural languages using the
SNePS semantic
network processing system and
examine the use of Sanskrit grammarians' analyses as a
knowledge-representation technique.
Abstract:
This article discusses intensional knowledge representation and
reasoning as a foundation for modeling, understanding, and expressing
the cognitive attitudes of intelligent agents. In particular, we are
investigating both "representational" and "pragmatic" issues: The
representational
issues include (1) the design of representations rich enough to
support the interpretation and generation of referring expressions in
opaque (i.e., intensional) contexts, to be accomplished by means of
structured individuals and the notion of "belief spaces", and (2) the
design of representations rich enough to support the use of intentions
and practitions for representing and reasoning about action. The
pragmatic issues include the recognition of a speaker's intentions (for
interpreting referring expressions in opaque contexts) and the generation
of referring expressions in opaque contexts based on the intentions of
the cognitive agent. This pragmatic part of the overall project uses
the results obtained from our representational work on intentions and
practitions. The research is of significance for natural-language
processing and computational models of cognition and action.
Abstract:
Narrative passages told from a character's perspective convey the
character's thoughts and perceptions. We present a discourse process
that recognizes characters' thoughts and perceptions in third-person
narrative. An effect of perspective on reference in narrative is
addressed: References in passages told from the perspective of a character
reflect the character's beliefs. An algorithm that uses the results
of our discourse process to understand references with respect to an
appropriate set of beliefs is presented.
Abstract:
AI systems typically hand-craft large amounts of knowledge in complex,
static, high-level knowledge structures that generally work well in
limited domains but that are too rigid to support natural-language
understanding because, in part, AI natural-language-processing systems
have not taken seriously Rosch's principles of categorization. Thus,
such systems have very shallow representations of generic concepts and
categories. We discuss the inadequacy of systems based on
passive data structures with slots and explicit default values (frames,
schemata, scripts), arguing that they lack the flexibility, generality,
and adaptability necessary for representing generic concepts in memory.
We present alternative "active" representations that are constructed as
needed from a less organized semantic memory, whose construction can be
influenced by the current task and context. Our implementation uses
SNePS and a generalized
ATN parser-generator.
Abstract:
This paper is concerned with heuristics for segmenting narratives into
units that form the basic elements of discourse representations and that
constrain the application of focusing algorithms. The following classes
of discontinuities are identified: figure-ground, space, time,
perspective, and topic. It is suggested that rhetorical relations
between narrative units are macro labels that stand for frequently
occurring clusters of discontinuities. Heuristics for identifying
discontinuities are presented and illustrated in an extended example.
Abstract:
A critical study of
John Searle's
Minds, Brains and Science
(Cambridge, MA:
Harvard University Press, 1984).
Parts of this critical study appeared as Rapaport
1985d.
Abstract:
This essay considers what it means to understand natural language and
whether a computer running an artificial-intelligence program designed
to understand natural language does in fact do so. It is argued that a
certain kind of semantics is needed to understand natural language,
that this kind of semantics is mere symbol manipulation (i.e., syntax),
and that, hence, it is available to AI systems. Recent
arguments by
Searle
and
Dretske
to the effect that computers
cannot understand natural language are discussed, and a
prototype natural-language-understanding system is presented as
an illustration.
Abstract:
Stuart C. Shapiro
challenged the members of SNeRG to "come up with a defensible
representation for the information in" a paragraph due to
Beverly Woolf,
describing an
experiment in naive physics. This note reports our reply to the
challenge.
Abstract:
This report consists of three papers: "Quantifier Order, Reflexive
Pronouns, and Quasi-Indexicals", by Lawrence D. Roberts, "Reflections
on Reflexives and Quasi-Indexicals" (comments on
Roberts's paper), by William J. Rapaport, and Roberts's reply. They
were
originally presented at the Colloquium on Philosophy of Language at the
American Philosophical Association
Eastern Division meeting in New York
City, December 1987.
Roberts's thesis is that reflexive pronouns do not merely affect
reference, but also form intermediate propositional functions or verb
phrases that are reflexive. This thesis is defended, first, on the
basis of its results for 'only'-statements and for sets collected by
reflexive propositional functions and, second, on the basis of its
economy in assimilating the middle voice to reflexive propositional
functions,
and in providing parallel accounts of active, passive, and middle
voices. The third support of the thesis is its usefulness in replying
to a counterexample by
Robert M. Adams
to Hector-Neri Castañeda's
doctrine of quasi-indexical 'he'.
Rapaport's comments argue that Roberts's observations about reflexives
depend on an unwarranted assumption concerning the relation between
predicate
logic and English sentences, and it provides an alternative solution to
the puzzle about quasi-indexicals, based on their computational
interpretation in the
SNePS
knowledge-representation and reasoning system.
Incorporated into
Shapiro & Rapaport 1991, below.
Abstract:
There are many situations where linguistic and pictorial data are
jointly presented to communicate information. A computer model for
synthesizing information from the two sources requires an initial
interpretation of both the text and the picture, followed by
consolidation of information. The problem of performing
general-purpose vision (without a priori knowledge) would make this a
nearly impossible task. However, in some situations, the text describes
salient aspects of the picture. In such situations, it is possible to
extract visual information from the text, resulting in a relational
graph describing the structure of the accompanying picture. This graph
can then be used by a computer vision system to guide the interpretation
of the picture. This paper discusses an application whereby information
obtained from parsing a caption of a newspaper photograph is used to
identify human faces in the photograph. Heuristics are described for
extracting information from the caption that contributes to the
hypothesized structure of the picture. The top-down processing of the
image using this information is discussed.
Abstract:
This paper discusses the theoretical background and
the preliminary results of an interdisciplinary,
cognitive-science
research project on the comprehension of narrative
text. The
unifying theme of our work has been the notion of a deictic center: a
mental model of spatial, temporal, and character information contributed
by the reader of the narrative and used by the reader in understanding
the narrative. We examine the deictic center in the light of our
investigations from the viewpoints of linguistics, cognitive psychology,
individual differences (language pathology), literary theory of
narrative,
and artificial intelligence.
Abstract:
This project continues our interdisciplinary research into
computational and cognitive aspects of narrative comprehension. Our
ultimate goal is the development of a computational theory of how
humans understand narrative texts. The theory will be informed by
joint research from the viewpoints of linguistics, cognitive
psychology, the study of language acquisition, literary theory,
geography, philosophy, and artificial intelligence. The linguists,
literary theorists, and geographers in our group are developing
theories of narrative language and spatial understanding that are being
tested by the cognitive psychologists and language researchers in our
group, and a computational model of a reader of narrative text is being
developed by the AI researchers, based in part on these theories and
results and in part on research on knowledge representation and
reasoning. This proposal describes the knowledge-representation and
natural-language-processing issues involved in the computational
implementation of the theory; discusses a contrast between communicative
and narrative uses of language and of the relation of the narrative
text to the story world it describes; investigates linguistic, literary,
and hermeneutic dimensions of our research; presents a computational
investigation of subjective sentences and reference in narrative;
studies
children's acquisition of the ability to take third-person perspective
in their own storytelling; describes the psychological validation of
various linguistic devices; and examines how readers develop an
understanding of the geographical space of a story. This report is a
longer
version of a project description submitted to NSF.
Abstract:
Representations for natural category systems and a retrieval-based
framework are presented that provide the means for applying generic
knowledge about the semantic relationships between entities in discourse
and the relative salience of these entities imposed by the current
context. An analysis of the use of basic- and superordinate-level
categories in discourse is presented, and the use of our representations
and processing in the task of discourse comprehension is demonstrated.
Abstract:
There are many situations where linguistic and pictorial data are jointly
presented to communicate information. A computer model for synthesizing
information from the two sources requires an initial interpretation of
both the text and the picture, followed by consolidation of information.
The problem of performing general-purpose vision (without a priori
knowledge) would make this a nearly impossible task. However, in some
situations, the text describes salient aspects of the picture. In such
situations, it is possible to extract visual information from the text,
resulting in a relational graph describing the structure of the
accompanying picture. This graph can then be used by a computer vision
system to guide the interpretation of the picture. This paper
discusses an application whereby information obtained from parsing a
caption of a newspaper photograph is used to identify human faces in the
photograph. Heuristics are described for extracting information from the
caption that contributes to the hypothesized structure of the picture.
The top-down processing of the image using this information is discussed.
Abstract:
This paper discusses issues in the representation of fictional entities
and the representation of propositions from fiction, using
SNePS. It
briefly surveys four philosophical ontological theories of fiction and
sketches an epistemological theory of fiction (to be implemented in
SNePS) using a story operator and rules for allowing propositions to
"migrate" into and out of story "spaces".
Abstract:
This is a draft of the written version of comments on a paper by
David Cole,
presented orally at the
American Philosophical Association
Central Division
meeting in New Orleans, 27 April 1990. Following the written comments
are
2 appendices: One contains a letter to Cole updating these comments.
The other is the handout from the oral presentation.
In general, I am sympathetic to Cole's arguments; my comments seek to
clarify and extend the issues. Specifically, I argue that, in
Searle's
celebrated
Chinese-Room
Argument,
Searle-in-the-room does understand
Chinese, in spite of his claims to the contrary. He does this in the
sense that he is executing a computer "process" that can be said to
understand Chinese. (The argument that the process in fact does
understand Chinese is made elsewhere; here, I merely assume that
if anything understands Chinese, it is a "process" executed by
Searle-in-the-room.) I also show, by drawing an analogy
between the way that I add numbers in my head and the way that a
calculator adds numbers, that Searle-in-the-room's claim that he does
not understand Chinese does not contradict the fact that, by executing
the
Chinese-natural-language-understanding algorithm, he does understand
Chinese.
Abstract:
This paper describes the
SNePS
knowledge-representation and reasoning
system.
SNePS is an intensional, propositional, semantic-network
processing system used for research in AI.
We look at how predication is represented in such a
system when it is used for cognitive modeling and natural-language
understanding and generation. In particular, we discuss
issues in the representation of
fictional entities
and the representation of propositions from fiction, using SNePS.
We briefly survey four philosophical ontological theories of fiction and
sketch an epistemological theory of fiction (implemented in
SNePS)
using a story operator and rules for allowing propositions to
"migrate"
into and out of story "spaces".
(First half revised and expanded
as Shapiro & Rapaport 1995, below.)
Description:
A brief introduction to
Meinong,
his theory of objects, and modern
interpretations of it. Sections
include: The Theory of Objects, Castañeda's Theory
of Guises,
Parsons's
Theory of Nonexistent Objects,
Rapaport's Theory of Meinongian Objects,
Routley's Theory of Items.
Abstract:
Cognitive agents, whether human or computer, that engage in
natural-language discourse and that have beliefs about the beliefs of other
cognitive agents must be able to represent objects the way they believe
them to be and the way they believe others believe them to be. They
must be able to represent other cognitive agents both as objects of
beliefs and as agents of beliefs. They must be able to represent their
own beliefs, and they must be able to represent beliefs as objects of
beliefs. These requirements raise questions about the number of tokens
of the belief representation language needed to represent believers and
propositions in their normal roles and in their roles as objects of
beliefs. In this paper, we explicate the relations among nodes, mental
tokens, concepts, actual objects, concepts in the belief spaces of an
agent and the agent's model of other agents, concepts of other cognitive
agents, and propositions. We extend, deepen, and clarify our theory of
intensional knowledge representation for natural-language processing, as
presented in previous papers and in light of objections raised by others.
The essential claim is that tokens in a knowledge-representation
system represent only intensions and not extensions. We are pursuing
this investigation by building CASSIE, a computer model of a cognitive
agent and, to the extent she works, a cognitive agent herself. CASSIE's
mind is implemented in the
SNePS
knowledge-representation and reasoning
system.
Contents:
Philosophical essays by
Ray
Jackendoff,
Donald Perlis,
Janyce M. Wiebe,
Philip R. Cohen &
Hector
J. Levesque,
Martha E. Pollack,
and
João
P. Martins &
Maria
R. Cravo.
Abstract:
It is well known that people from other disciplines have made
significant
contributions to philosophy and have influenced philosophers. It is
also
true (though perhaps not often realized, since philosophers are not on
the receiving end, so to speak) that philosophers have made
significant contributions to other disciplines and have influenced
researchers in these other disciplines, sometimes more so than they have
influenced philosophy itself. But what is perhaps not as well known as
it ought to be is that researchers in other disciplines, writing in
those
other disciplines' journals and conference proceedings, are doing
philosophically sophisticated work, work that we in philosophy ignore at
our peril.
Work in cognitive science and artificial intelligence (AI) often
overlaps such paradigmatic philosophical specialties as logic,
the philosophy of mind, the philosophy of language,
and the philosophy of action.
This special issue
offers
a sampling of research in cognitive
science
and AI that is philosophically relevant and philosophically
sophisticated.
Description:
Revision of Rapaport 1987a, above.
Description:
Revision of Rapaport 1987b, above.
Description:
Revision of Rapaport 1987c, above.
Description:
Revision of Rapaport 1987d, above.
Abstract:
SNePS,
the Semantic Network Processing System, is an intensional
propositional semantic network that has been designed to be the mind of
a computational cognitive agent. In this article, the main features of
SNePS are sketched, its antecedents are discussed, and some example
current uses are described.
Abstract:
Revised version of Shapiro & Rapaport 1986, above.
We present a formal syntax and semantics for the
SNePS Semantic
Network
Processing System, based on a
Meinongian theory of the intensional
objects of thought. Such a theory avoids possible worlds and is
appropriate for AI considered as "computational philosophy"--AI as the
study of how intelligence is possible--or "computational psychology"--AI
with the goal of writing programs as models of human cognitive
behavior. Recently, SNePS has been used for a variety of AI research
and application projects. These are described in
Shapiro & Rapaport 1987, of which the
present paper is a much shortened version. Here, we use SNePS to model
(or construct) the mind of a cognitive agent, referred to as Cassie
(the Cognitive Agent of the SNePS System--an
Intelligent Entity).
Abstract:
We are developing a computational theory of a cognitive agent's ability
to acquire word meanings from natural-language contexts, especially from
narrative. The meaning
of a word as understood by such an agent is taken to
be its relation to the meanings of other words in
a highly interconnected network representing the agent's knowledge.
However,
because such
knowledge is very idiosyncratic, we are researching the means by which
an agent can
abstract conventional definitions from its individual experiences
with a word.
We are investigating the nature of information necessary to the
production of such conventional definitions, and the processes of
revising hypothesized definitions in the light of
successive encounters with a word. The theory is being tested by
implementing it in a knowledge-representation and reasoning system with
facilities both for parsing and generating fragments of natural language
(English) and for
reasoning and belief revision.
Potential applications include education, computational lexicography,
and cognitive science studies of narrative understanding.
Description:
A survey article covering the following topics: Definition of
`Cognitive Science', History of Cognitive Science, Cognition and
Computation, Varieties of Cognitive Science, Cognitive Science Research,
and Future of Cognitive Science. (Revised as
Rapaport 2000a, below.)
Abstract:
I suggest that, in a strong sense of
'thinking', mere calculating is not thinking (and pocket
calculators don't think), but that, in a weak though unexciting sense of
'thinking', pocket calculators do think. I close with some observations
on the implications of this conclusion.
Abstract:
We are developing a computational theory of cognitive agents' abilities
to expand their vocabulary from natural-language contexts. The meaning
of a word for an agent is its relation to the meanings of other words in
a semantic network representing the agent's knowledge. Since such
knowledge is idiosyncratic, we are researching how the agent can
abstract dictionary-like definitions from its individual experiences
with a word and revise hypothesized definitions in the light of
successive encounters with a word. The theory is being tested by
implementing it in a knowledge-representation and reasoning system with
facilities both for parsing and generating fragments of English and for
reasoning and belief revision.
Abstract:
Revised and expanded version of the first half of Rapaport 1991a, above.
We describe the
SNePS
knowledge-representation and
reasoning system. We look at how SNePS is used for cognitive modeling
and natural language competence. SNePS has proven particularly
useful in our investigations of narrative understanding.
Abstract:
Revised and expanded version of the second half of Rapaport 1991a, above.
We discuss
issues in the representation of
fictional entities
and the representation of propositions from fiction, using the
SNePS
propositional knowledge-representation and reasoning system.
We briefly survey four philosophical ontological theories of fiction and
sketch an epistemological theory of fiction
using a story operator and rules for allowing propositions to
"migrate"
into and out of story "spaces".
An implementation of the theory in SNePS is presented.
Abstract:
John
Searle once said: "The
Chinese room
shows what we knew all
along: syntax by itself is not sufficient for semantics. (Does anyone
actually deny this point, I mean straight out? Is anyone actually
willing to say, straight out, that they think that syntax, in the sense
of formal symbols, is really the same as semantic content, in the sense
of meanings, thought contents, understanding, etc.?)." I say: "Yes".
Stuart C. Shapiro
has said: "Does that make any sense? Yes: Everything
makes sense. The question is: What sense does it make?"
This essay explores what sense it makes to say that syntax by
itself is sufficient for semantics.
Abstract:
This project concerns the development and implementation
of a computational theory of how human readers and other
natural-language-understanding systems can automatically
expand their
vocabulary by determining the meaning of a
word from context. The word might be unknown to the reader, familiar
but misunderstood, or familiar but being
used in a new sense. 'Context' includes the prior and immediately
surrounding text, grammatical information, and the reader's background
knowledge, but no access to a dictionary or other external source
of information (including a human).
The fundamental thesis is that the meaning of such a word (1)
can be determined from context, (2) can be
revised and refined upon further encounters with the word,
(3) "converges" to a dictionary-like definition if enough
context has been provided and there have been enough exposures to the
word,
and (4)
eventually "settles down" to a "steady state", which,
however, is always subject to revision upon further encounters with the
word.
The system is being implemented in the
SNePS-2.1
knowledge-representation and reasoning system,
which
provides a software laboratory for testing and experimenting with the
theory.
This research is a component of an
interdisciplinary, cognitive-science project
to develop a computational cognitive model of a reader of
narrative text.
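The four-part thesis above can be pictured with a toy sketch. This is my own illustration, not the project's algorithm (which is implemented in SNePS): it hypothesizes a definition as the features common to all contexts seen so far and revises it on each new encounter, so that the hypothesis "settles down" once new encounters add nothing.

```python
# Toy model of contextual vocabulary acquisition: a hypothesized
# definition is revised against each new context in which the unknown
# word occurs, converging toward the features shared by all contexts.

def revise(definition, context_features):
    """Revise a hypothesized definition against one more context:
    keep only the features consistent with every encounter so far."""
    if definition is None:                 # first encounter: adopt the context wholesale
        return set(context_features)
    return definition & context_features   # later encounters: intersect

# Hypothetical contexts for an unknown word (e.g., an animal name).
encounters = [
    {"animal", "four-legged", "striped", "fast"},
    {"animal", "four-legged", "striped", "large"},
    {"animal", "four-legged", "striped"},
]

definition = None
for ctx in encounters:
    definition = revise(definition, ctx)

# The hypothesis has "settled down", but a later encounter could still revise it.
print(sorted(definition))  # → ['animal', 'four-legged', 'striped']
```

The "steady state" here is provisional in exactly the abstract's sense: a fourth encounter lacking one of these features would shrink the definition again.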
Abstract:
This document consists of two papers: "The Ontology of Cyberspace:
Preliminary Questions", by David R. Koepsell, and "Comments on
`The Ontology of Cyberspace'," by William J. Rapaport.
They were originally presented at the Tri-State Philosophical
Association Meeting, St. Bonaventure University, 22 April 1995.
Abstract:
This document consists of three papers: "Virtual Relations", by
Dale Jacquette;
a reply, "Virtual
Universals", by William J. Rapaport; and "A Note in Reply to
William J. Rapaport on Virtual Relations". They were originally presented
at the Marvin Farber Conference on the
Ontology and Epistemology of
Relations, SUNY Buffalo, 17 September 1994.
The purpose of this book is to present arguments for this position, and to
investigate its implications. Subsequent chapters discuss: models
and semantic theories (with critical evaluations of work by Arturo
Rosenblueth and
Norbert Wiener,
Brian Cantwell Smith,
and Marx W. Wartofsky); the nature of "syntactic semantics" (including the
relevance of
Antonio
Damasio's
cognitive neuroscientific theories);
conceptual-role semantics (with critical evaluations of work by
Jerry Fodor
and
Ernest Lepore,
Gilbert Harman,
David
Lewis,
Barry
Loewer,
William G. Lycan,
Timothy C. Potts, and
Wilfrid Sellars);
the role of
negotiation in interpreting communicative acts (including evaluations
of theories by
Jerome Bruner
and
Patrick Henry Winston);
Hilary Putnam's and
Jerry Fodor's views of methodological solipsism;
implementation and its relationships with such metaphysical concepts as
individuation, instantiation, exemplification, reduction, and
supervenience (with a study of
Jaegwon Kim's theories);
John Searle's
Chinese-Room
Argument and its relevance to understanding Helen Keller
(and vice versa); and
Herbert Terrace's theory of naming as a
fundamental linguistic ability unique to humans.
Throughout, reference
is made to an implemented computational theory of cognition: a
computerized cognitive agent implemented in the
SNePS
knowledge-representation and reasoning system. SNePS is: symbolic (or
"classical"; as opposed to connectionist), propositional (as opposed
to being a taxonomic or "inheritance" hierarchy), and fully
intensional (as opposed to (partly) extensional), with several
types of interrelated inference and belief-revision mechanisms, sensing
and effecting mechanisms, and the ability to make, reason about, and
execute plans.
Abstract:
As part of an interdisciplinary project to develop a computational
cognitive model of a reader of narrative text, we are developing a
computational theory of how natural-language-understanding systems can
automatically expand their vocabulary by determining from context the
meaning of words that are unknown, misunderstood, or used in a new
sense. `Context' includes surrounding text, grammatical information,
and background knowledge, but no external sources. Our thesis is that
the meaning of such a word can be determined from context, can
be
revised upon further encounters with the word, "converges"
to a
dictionary-like definition if enough context has been provided and
there have been enough exposures to the word, and eventually "settles
down" to a "steady state" that is always subject to revision upon
further encounters with the word. The system is being implemented in
the SNePS knowledge-representation and reasoning system.
(The online document is a slightly modified version (containing the
algorithms) of that which appears in the Proceedings.)
Abstract:
We present a computational analysis of
de re,
de dicto,
and
de se
belief and knowledge reports. Our analysis solves a problem
first observed by
Hector-Neri Castañeda, namely,
that the simple rule
`(A knows that P) implies P'
apparently
does not
hold if P contains a quasi-indexical. We present
a single rule, in the context of a knowledge-representation and
reasoning system, that holds for all P, including
those containing
quasi-indexicals. In so doing, we
explore the difference between reasoning in a public communication
language and in a knowledge-representation language, we
demonstrate the importance of
representing proper names explicitly, and we
provide
support for the necessity of considering sentences in the context
of extended discourse (for example, written narrative)
in order to fully capture certain features of
their semantics.
Abstract:
The late Hector-Neri Castañeda, the Mahlon
Powell Professor of
Philosophy at
Indiana
University, and founding editor of
Noûs,
has deeply influenced current analytic philosophy with
diverse contributions, including guise theory, the
theory of indicators and quasi-indicators, and the
proposition/practition theory. This volume
collects 15 papers--for the most part previously
unpublished--in ontology, philosophy of
language, cognitive science, and related areas by
ex-students of Professor Castañeda, most of
whom are now well-known researchers or even
distinguished scholars. The authors share the
conviction that Castañeda's work must continue to
be explored and that his philosophical
methodology must continue to be applied in an
effort to further illuminate all the issues that he so
deeply investigated. The topics covered by the
contributions include intensional contexts,
possible worlds, quasi-indicators, guise theory,
property theory, Russell's substitutional theory of
propositions, event theory, the adverbial theory of
mental attitudes, existentialist ontology, and
Plato's, Leibniz's, Kant's, and Peirce's ontologies.
An introduction by the editors relates all these
themes to Castañeda's philosophical interests and
methodology.
Abstract:
(See above.)
Abstract:
A survey of the direct and indirect influence of the philosophical theories
of Hector-Neri
Castañeda on AI research.
Abstract:
A list of Castañeda's Ph.D. students, their students (i.e.,
Castañeda's "grandstudents"), etc.
Abstract:
The proper treatment of computationalism,
as the thesis that cognition is computable, is presented and defended.
Some arguments
of
James H. Fetzer
against computationalism are examined
and found wanting, and his positive theory of minds as semiotic
systems is shown to be consistent with computationalism. An objection
is raised to an
argument of
Selmer Bringsjord
against one strand of computationalism,
viz., that Turing-Test-passing artifacts are persons; it
is argued that, whether or not this objection holds, such artifacts
will inevitably be persons.
Abstract:
What is the computational notion of "implementation"? It is not
individuation, instantiation, reduction, or supervenience. It is, I
suggest, semantic interpretation.
Abstract:
Abridged and slightly edited version of
Rapaport & Shapiro 1995, above.
Abstract:
Revision of Rapaport 1993a, above.
Abstract:
As part of an
interdisciplinary project
to develop a computational cognitive model of a reader of
narrative text,
we are developing
a computational theory of how
natural-language-understanding systems can automatically
acquire new
vocabulary by determining
from context the meaning of words that are unknown,
misunderstood, or
used in a new sense. `Context' includes
surrounding text, grammatical information, and background
knowledge, but no external sources.
Our thesis is that the meaning of such a word
can be determined from context, can be
revised upon further encounters with the word,
"converges" to a dictionary-like definition if enough
context has been provided and there have been enough exposures to the
word,
and
eventually "settles down" to a "steady state" that
is always subject to revision upon further encounters with the
word.
The system is being implemented in the
SNePS knowledge-representation and reasoning system.
Abstract:
A theory of "syntactic semantics" is advocated as a way of
understanding how computers can think (and how the
Chinese-Room-Argument
objection to the Turing Test can be overcome): (1) Semantics, as the
study
of relations between symbols and meanings, can be turned into syntax--a
study of relations among symbols (including meanings)--and hence syntax
can suffice for the semantical enterprise. (2) Semantics, as the process
of understanding one domain modeled in terms of another, can
be viewed recursively: The base case of semantic
understanding--understanding a domain in terms of itself--is
syntactic understanding. (3) An internal (or "narrow"), first-person
point of view makes an external (or "wide"), third-person point of
view otiose for purposes of understanding cognition.
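Point (2) above describes semantic understanding recursively. As a toy sketch of that recursion (my illustration, not the paper's), understanding a domain in terms of a model recurses until a domain is understood in terms of itself, which is the syntactic base case:

```python
# Toy recursion for "semantics as modeling one domain in another":
# follow the chain of models until a domain models itself, at which
# point understanding is syntactic (the base case).

def understand(domain, model_of):
    """Return a description of how `domain` is ultimately understood,
    given a dict mapping each domain to the domain that models it."""
    model = model_of.get(domain)
    if model is None or model == domain:
        return f"syntactic understanding of {domain}"   # base case: domain in terms of itself
    return understand(model, model_of)                  # recursive case: defer to the model

# Hypothetical chain: symbols understood via numbers, numbers via themselves.
chain = {"arithmetic-symbols": "numbers", "numbers": "numbers"}
print(understand("arithmetic-symbols", chain))  # → syntactic understanding of numbers
```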
Abstract:
This essay continues my investigation of "syntactic semantics": the
theory that, pace Searle's
Chinese-Room
Argument, syntax does
suffice for semantics (in particular, for the semantics needed for a
computational cognitive theory of natural-language understanding).
Here, I argue that syntactic semantics (which is internal and
first-person) is what has been called a conceptual-role semantics: The
meaning of any expression is the role that it plays in the complete
system of expressions. Such a "narrow", conceptual-role semantics is
the appropriate sort of semantics to account (from an "internal", or
first-person perspective) for how a cognitive agent understands
language. Some have argued for the primacy of external, or "wide",
semantics, while others have argued for a two-factor analysis.
But, although two factors can be specified--one internal and
first-person, the other only specifiable in an external, third-person
way--only the internal, first-person one is needed for understanding
how someone understands. A truth-conditional semantics can still be
provided, but only from a third-person perspective.
Abstract:
We discuss a research project
that develops and applies algorithms for computational
contextual vocabulary acquisition (CVA): learning the meaning of
unknown
words from context.
We try to unify a disparate literature on the
topic of CVA from psychology, first- and second-language
acquisition, and reading science, in order to help develop these
algorithms: We use the knowledge gained from the computational
CVA system to build an educational
curriculum for enhancing students' abilities to use
CVA strategies in their reading of science texts at the
middle-school and college undergraduate levels.
The knowledge gained from case studies of students using
our CVA techniques feeds back into further development of our
computational theory.
Summary:
Arel argues that reward-driven AGIs "will inevitably pose a danger to
humanity". I question the inevitability on the grounds that the AGI's
ability to reason and use language will allow us to collaborate and
negotiate with it, as we do with other humans.
Abstract:
This essay describes computational semantic networks for a philosophical
audience and surveys several approaches to semantic-network
semantics. In particular, propositional semantic networks (exemplified
by
SNePS) are
discussed; it is argued that only a
fully intensional, Meinongian semantics is
appropriate for them; and
several Meinongian systems are presented.
(This essay was originally written a long time ago,
in March 1985. In the intervening decade, much progress has been made
that is not reflected in the essay. I have, however,
updated some of the references, and
the promissory notes with respect to an intensional
semantics for SNePS have since been cashed,
in part, in Shapiro & Rapaport 1987,
1991.)
[A shorter version appeared as Rapaport 1985a.]
Abstract:
Deliberate contextual vocabulary acquisition (CVA) is a reader's ability
to figure out a meaning for an unknown word from its "context" without
external sources of help. The appropriate context for such CVA is the
"belief-revised integration" of the reader's prior knowledge with the
reader's "internalization" of the text. We present and defend a
computational theory of CVA that we have adapted to a new classroom
curriculum designed to help students use CVA to improve their reading
comprehension.
Abstract:
A survey of various proposed definitions of ‘computer
science’, arguing that
it is a "portmanteau" scientific study of a family of
topics surrounding both
theoretical and practical computing. Its single most central question is:
What
can be computed (and how)? Four other questions follow logically from
that central one: What can be computed efficiently, and how? What can be
computed practically, and how? What can be computed physically, and how?
What should be computed, and how?
Abstract:
Let S and T be non-empty sets. The syntax of S (or T) is the set of
properties of, and relations among, the
members of S (or T). The ontology of T (or S) is its syntax. The semantic
interpretation of S by T is a set
of relations between S and T. Semantics is the study of such relations
between S and T. Let U = S ∪ T.
Then the syntax of U provides the semantics of S in terms of T. Hence,
semantics is syntax.
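The abstract's definitions can be made concrete with a minimal sketch (my example, not from the paper): encode "syntax" as relations among the members of one set and "semantics" as relations between two sets; then any semantic interpretation of S by T is, trivially, a relation among members of U = S ∪ T, i.e., part of U's syntax.

```python
# Relations are modeled as sets of ordered pairs.

def is_relation_among(pairs, universe):
    """Syntax: True iff every pair relates members of the one given set."""
    return all(a in universe and b in universe for (a, b) in pairs)

def is_interpretation(pairs, s, t):
    """Semantics: True iff every pair maps a member of S to a member of T."""
    return all(a in s and b in t for (a, b) in pairs)

# Hypothetical example: S = numerals (symbols), T = numbers (meanings).
S = {"'0'", "'1'"}
T = {0, 1}
interp = {("'0'", 0), ("'1'", 1)}   # a semantic interpretation of S by T

U = S | T                           # U = S ∪ T
assert is_interpretation(interp, S, T)   # a relation *between* S and T ...
assert is_relation_among(interp, U)      # ... is a relation *among* members of U
```

The two assertions are the abstract's conclusion in miniature: the semantics of S in terms of T is part of the syntax of U.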
Abstract:
I survey a common theme that pervades the philosophy of computer science
(and philosophy more generally): the relation of computing to the
world. Are algorithms merely certain procedures entirely
characterizable in an
"indigenous",
"internal",
"intrinsic",
"local",
"narrow",
"syntactic"
(more generally: "intra-system"),
purely-Turing-machine language?
Or must algorithms interact with the real world, having a purpose that is
expressible only in a language with an
"external",
"extrinsic",
"global",
"wide",
"inherited"
(more generally: "extra-" or "inter-"system)
semantics?
Abstract:
Computationalism should not be the view that (human) cognition is
computation; it
should be the view that cognition (simpliciter) is computable. It follows
that
computationalism can be true even if (human) cognition is not the result
of computations
in the brain. If semiotic systems are systems that interpret signs, then
both humans and
computers are semiotic systems. Finally, minds can be considered as
virtual machines
implemented in certain semiotic systems, primarily the brain, but also AI
computers.
Abstract:
A critical survey of some attempts to define 'computer',
beginning with some informal
ones (from reference books, and definitions due to H. Simon,
A.L. Samuel, and M. Davis), then critically evaluating those of three
philosophers (J.R. Searle,
P.J. Hayes, and G. Piccinini), and concluding with an examination of
whether the
brain and the universe are computers.
Abstract:
A reply to Bringsjord, Selmer (2018), "Logicist Remarks on Rapaport on
Philosophy of Computer Science+", American Philosophical Association
Newsletter
on Philosophy and Computers 18(1) (Fall): 28–31.
From the first paragraph:
The scholarly work on the philosophy of computer science
that most nearly achieves comprehensive coverage is the
"Philosophy of Computer Science" textbook, manifest as
an ever-growing resource online, by William J. Rapaport,
winner of both the Covey Award and the Barwise Prize in
2015. His former Ph.D. student, Robin K. Hill, interviews
him herein on that and related subjects.
Abstract:
A response to a recent critique by Cem Bozşahin of the theory of
syntactic
semantics as it applies to Helen Keller, and some applications of the
theory
to the philosophy of computer science.
Abstract:
Turner argues that computer programs must have purposes, that
implementation is not a kind of
semantics, and that computers might need to understand what they do.
I respectfully disagree: Computer programs need not have purposes,
implementation is a kind of semantic interpretation, and neither human
computers nor computing machines need to understand what they do.
Abstract:
Wang (2019) claims to define AI
in the sense of delimiting its research
area. But he offers a definition only of
'intelligence' (not of AI). And it is only a theory of what intelligence
is (artificial or otherwise). I offer and defend a
definition of AI as computational cognition.
Abstract:
A text based on
my course.
Abstract:
If qualia are mental, and if the mental is functional, then so are qualia.
But, arguably, qualia are not functional. A resolution of this is offered based
on a formal similarity between qualia and numbers. Just as certain sets "play
the role of" the number 3 in Peano's axioms, so a certain physical
implementation of a color plays the role of, say, red in a (computational)
cognitive agent's "cognitive economy".
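The analogy to numbers can be made concrete with a standard construction (my illustration, not the paper's): in the von Neumann encoding, the empty set "plays the role of" 0 and succ(n) = n ∪ {n}, so a particular set plays the role of 3, just as, on the abstract's proposal, a particular physical implementation plays the role of red.

```python
# Von Neumann encoding of the naturals: 0 = {} and succ(n) = n ∪ {n}.
# Frozensets are used so that sets can contain sets.

def succ(n):
    """Successor in the von Neumann encoding: n ∪ {n}."""
    return n | frozenset({n})

zero = frozenset()
one = succ(zero)
two = succ(one)
three = succ(two)

# "three" is the set {0, 1, 2}: it has three members, each a predecessor.
print(len(three))    # → 3
print(two in three)  # → True
```

Nothing about the set `three` is intrinsically "three-ish"; it counts as 3 only by the role it plays in the system, which is the formal point the abstract borrows for qualia.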
Abstract:
Landgrebe and Smith's Why Machines Will Never Rule the World argues that
it is impossible for artificial general intelligence or the Singularity to
succeed, on the grounds that it is impossible to perfectly model or emulate
the "complex" "human neurocognitive system". However, they do not show that
it is logically impossible; they only show that it is practically impossible
using current mathematical techniques. Nor do they prove that there could
not be any other kinds of theories than those in current use. Furthermore,
even if perfect theories were impossible or unlikely, such perfection may
not be needed and may even be unhelpful. At most, they show that statistical,
"deep learning" techniques by themselves will not suffice for artificial
general intelligence.
Abstract:
This is a draft of the "Yes" side of a proposed debate book, Will AI Match
(or Even Exceed) Human Intelligence? (Routledge). The "No" position will be
taken by Selmer Bringsjord, and will be followed by rejoinders on each side.
AI should be considered as the branch of computer science that investigates
whether, and to what extent, cognition is computable. Computability is
a logical or mathematical notion. So, the only way to prove that
something—including (some aspect of) cognition—is
not computable is via a logical or
mathematical argument. Because no such argument has met with general
acceptance (in the way that other non-computability results, such as the
unsolvability of the Halting Problem, have been generally accepted), there is no logical reason
to think that AI won't eventually match human intelligence. Along the way,
I discuss the Turing Test as a measure of AI's success at showing the
computability of various aspects of cognition, and I consider the potential roadblocks
set by consciousness, qualia, and mathematical intuition.
Abstract:
This study is the first stage of a long-term project whose chief goal is
the construction of a theory that can provide a foundation for a
semantics for natural language and an analysis of psychological
discourse. The project begins with a consideration of various data and
a re-examination of several problems in metaphysics, epistemology, and
semantics. As a means of sharpening our philosophical tools, I next take
a careful look at theories that have dealt with some of the data and
problems, chiefly Alexius Meinong's Theory of Objects. The final stage
will be the devising of our own theory. Here, I report on the results
of the first two parts of the project.
(See also Rapaport 1984b.)
Dibrell, William (1988),
"Persons and the Intentional Stance",
Journal of Critical Analysis
9(1): 13–25.
Reprinted as "Models and Minds", Computer Science
Research Review (Buffalo: SUNY Buffalo Dept. of Computer Science,
1988–1989): 24–30.
Abstract:
What does it mean to understand language?
John Searle once said: "The
Chinese Room
shows what we knew all along: syntax by itself is not
sufficient for semantics. (Does anyone actually deny this point, I
mean straight out? Is anyone actually willing to say, straight out,
that they think that syntax, in the sense of formal symbols, is really
the same as semantic content, in the sense of meanings, thought
contents, understanding, etc.?)."
Elsewhere, I have argued "that
(suitable) purely syntactic symbol-manipulation of a computational
natural-language-understanding system's knowledge base suffices for it
to understand natural language." The fundamental thesis of the present
book is that understanding is recursive: "Semantic" understanding is
a correspondence between two domains; a cognitive agent understands one
of those domains in terms of an antecedently understood one. But how
is that other domain understood? Recursively, in terms of yet another.
But, since recursion needs a base case, there must be a domain that is
not understood in terms of another. So, it must be understood in terms
of itself. How? Syntactically! In syntactically understood domains,
some elements are understood in terms of others. In the case of
language, linguistic elements are understood in terms of non-linguistic
("conceptual") yet internal elements. Put briefly, bluntly, and a
bit paradoxically, semantic understanding is syntactic understanding.
Thus, any cognitive agent--human or computer--capable of syntax
(symbol manipulation) is capable of understanding language.
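The recursive structure described above parallels ordinary recursion in
programming. As a toy sketch (mine, not the book's; the chain of domain names
is hypothetical), each domain is understood via a mapping to an antecedently
understood one, and the base case is a domain with no further mapping,
"understood in terms of itself":

```python
# Toy illustration of the abstract's recursion: each domain is understood
# semantically, via a mapping to another domain, except the base domain,
# which is understood syntactically, in terms of itself.

# Hypothetical chain of domains: language -> concepts -> base symbols.
interpretation = {
    "language": "concepts",   # semantic: understood in terms of concepts
    "concepts": "base",       # semantic: understood in terms of base symbols
}

def understand(domain):
    """Trace how a domain is ultimately understood."""
    if domain not in interpretation:   # base case of the recursion
        return f"{domain} (understood syntactically, in terms of itself)"
    return f"{domain} -> " + understand(interpretation[domain])

print(understand("language"))
# language -> concepts -> base (understood syntactically, in terms of itself)
```

Without the base case, the chain of "understood in terms of" would never
terminate, which is exactly the regress the book's thesis is meant to stop.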
Abstract:
Syntactic semantics is a holistic, conceptual-role-semantic
theory of how computers can think.
But
Fodor & Lepore
have mounted a sustained attack on holistic
semantic theories.
However, their major problem with holism (that,
if holism is true, then
no two people can understand each other) can be fixed by means of
negotiating meanings. Syntactic
semantics and Fodor & Lepore's objections to holism are outlined;
the nature of communication, miscommunication, and negotiation is
discussed;
Bruner's ideas about the negotiation of meaning are explored; and
some observations on a problem for knowledge
representation in AI raised by Winston are presented.
Abstract:
"Contextual" vocabulary acquisition is the active, deliberate
acquisition
of a meaning for a word in a text by reasoning from textual clues and
prior
knowledge, including language knowledge and hypotheses developed from
prior
encounters with the word, but without external sources of help
such as dictionaries or people. But what is "context"? Is it just
the
surrounding text? Does it include the reader's background knowledge?
I argue that the appropriate context for contextual vocabulary
acquisition
is the reader's
"internalization" of the text "integrated" into
the reader's "prior" knowledge via belief revision.
Abstract:
"Context" is notoriously vague, and its uses multifarious. Researchers
in "contextual vocabulary acquisition" differ over the kinds of context
involved in vocabulary learning, and the methods and benefits thereof.
This talk presents a computational theory of contextual vocabulary
acquisition, identifies the relevant notion of context, exhibits the
assumptions behind some classic objections [due to Beck, McKeown, &
McCaslin 1983 and to Schatz & Baldwin 1986], and defends our theory
against these objections.
Abstract:
A medium-sized philosophical biography.
Abstract:
A sequel to
Rapaport 1999.
This essay explores the implications of the thesis that implementation
is semantic interpretation. Implementation is (at least) a ternary
relation: I is an implementation of an "Abstraction" A in some
medium M. Examples are presented from the arts, from language, from
computer science, and from cognitive science, where both brains and
computers can be understood as implementing a "mind Abstraction".
Implementations have side effects due to the implementing medium; these
can account for several puzzles surrounding qualia. Finally, a benign
argument for panpsychism is developed.
Abstract:
There are many branches of philosophy called "the philosophy of X",
where X = disciplines ranging from history to physics.
The philosophy of artificial intelligence has a long history,
and there are many courses and texts with that title.
Surprisingly,
the philosophy of computer science is not nearly as well-developed.
This article
proposes topics that might constitute the philosophy of computer
science and
describes a course covering those topics, along with suggested
readings and assignments.
Abstract:
This article describes the Turing Test for determining whether a
computer can
think. It begins with a description of an "imitation game" for
discriminating between a man and a woman, discusses variations of the
Test, standards for passing the Test, and experiments with real
Turing-like
tests (including Eliza and the Loebner competition).
It then considers what a computer must be able to do in order to
pass a Turing Test, including whether written linguistic behavior is a
reasonable replacement for "cognition", what counts as understanding
natural language, the role of world knowledge in understanding natural
language, and the philosophical implications of passing a Turing Test,
including whether passing is a sufficient demonstration of cognition,
briefly discussing two counterexamples: a table-lookup program and the
Chinese Room Argument.
Abstract:
A computer can come to
understand natural language the same way Helen Keller
did: by using "syntactic semantics", a theory of how
syntax can suffice for semantics, i.e., how
semantics for natural language can be provided by means of
computational symbol manipulation.
This essay
considers real-life approximations of Chinese Rooms, focusing on
Helen Keller's experiences growing up deaf and blind,
locked in a sort of
Chinese Room yet learning how to communicate
with the outside world.
Using the SNePS computational knowledge-representation system,
the essay analyzes Keller's belief that
learning that "everything has a name" was the key to her success,
enabling her to "partition" her mental concepts into mental
representations of: words, objects, and
the naming relations between them.
It next looks at Herbert Terrace's theory of naming, which is akin to
Keller's, and which only humans are supposed to be capable of. The essay
suggests that computers at least, and perhaps non-human primates, are also
capable of this kind of naming.
Abstract:
The SNePS knowledge representation, reasoning, and acting system has
several features that facilitate metacognition
in SNePS-based agents. The most prominent is the fact that propositions
are represented in SNePS as terms
rather than as sentences, so that propositions can occur as arguments of
propositions and other expressions without
leaving first-order logic. The SNePS acting subsystem is integrated with
the SNePS reasoning subsystem in such
a way that: there are acts that affect what an agent believes; there are
acts that specify knowledge-contingent acts
and lack-of-knowledge acts; there are policies that serve as
"daemons", triggering acts when certain
propositions
are believed or wondered about. The GLAIR agent architecture supports
metacognition by specifying a location for
the source of self-awareness, and of a sense of situatedness in the
world. Several SNePS-based agents have taken
advantage of these facilities to engage in self-awareness and
metacognition.
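The design point about propositions-as-terms can be sketched in a few lines.
This is an illustrative sketch only, not actual SNePS syntax; the agent name
`Cassie` and the example proposition are chosen for illustration:

```python
# Sketch of the abstract's key design point: representing propositions as
# *terms* lets a proposition occur as an argument of another proposition
# without leaving first-order logic.
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    """A first-order term: a functor applied to zero or more argument terms."""
    functor: str
    args: tuple = ()

    def __str__(self):
        if not self.args:
            return self.functor
        return f"{self.functor}({', '.join(map(str, self.args))})"

cassie = Term("Cassie")
dog = Term("Fido-is-a-dog")          # a proposition-denoting term
belief = Term("Believes", (cassie, dog))
meta = Term("Believes", (cassie, belief))   # metacognition: a belief about a belief
print(meta)
# Believes(Cassie, Believes(Cassie, Fido-is-a-dog))
```

Because `belief` is just another term, nesting it inside `meta` needs no
higher-order machinery, which is the point the abstract makes about SNePS.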
Abstract:
Contextual vocabulary acquisition (CVA) is the active, deliberate
acquisition of a meaning for an unknown word in a text by reasoning
from textual clues, prior knowledge, and hypotheses developed from
prior encounters with the word, but without external sources of help
such as dictionaries or people. Published strategies for doing CVA
vaguely and unhelpfully tell the reader to "guess".
AI algorithms for CVA can fill in the details that replace "guessing"
by "computing"; these details can then be converted to a curriculum
that can be taught to students to improve their reading comprehension.
Such algorithms also suggest a way out of the Chinese Room and show how
holistic semantics can withstand certain objections.
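The abstract's contrast between "guessing" and "computing" can be shown
schematically. This is a deliberately simplified sketch of hypothesis
revision across successive contexts (mine, not the paper's algorithm; the
clue sets are invented):

```python
# Crude stand-in for belief revision in contextual vocabulary acquisition:
# maintain a hypothesis about an unknown word's meaning and revise it on
# each encounter, keeping only the features every context supports.

def revise(hypothesis, clues_from_context):
    """Revise the meaning hypothesis against clues from a new context."""
    if hypothesis is None:                       # first encounter: adopt the clues
        return set(clues_from_context)
    return hypothesis & set(clues_from_context)  # keep what survives revision

encounters = [
    {"animal", "large", "dangerous"},   # hypothetical clue sets extracted
    {"animal", "large", "gray"},        # from successive contexts in which
    {"animal", "gray", "has-trunk"},    # the unknown word appears
]

h = None
for clues in encounters:
    h = revise(h, clues)
print(sorted(h))
# ['animal']
```

Each step is a definite computation over the reader's internalized context,
not a "guess", which is the replacement the abstract describes.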
Abstract:
Is the brain a digital computer? Searle says that this is meaningless; I
say that it is an empirical question. Is the mind a computer program?
Searle says no; I say: properly understood, yes. Can the operations
of the brain be simulated on a digital computer? Searle says:
trivially yes; I say yes, but that it is not trivial.
Abstract:
Deliberate contextual vocabulary acquisition (CVA) is a
reader's ability to figure out a (not
"the") meaning for (not "of") an unknown word from its
"context", without external sources of help
such as dictionaries or people. The appropriate context for such CVA is
the "belief-revised
integration" of the reader's prior knowledge with
the reader's "internalization" of the
text.
We discuss unwarranted assumptions behind some classic objections to
CVA, and present
and defend a computational theory of CVA that we have adapted to a new
classroom
curriculum designed to help students use CVA to improve their reading
comprehension.
Abstract:
Ford's "Helen Keller Was Never in a Chinese Room"
claims that my argument in
"How Helen Keller Used Syntactic Semantics to Escape from a Chinese Room"
fails
because Searle and I use the terms ‘syntax’ and
‘semantics’ differently, hence are at cross
purposes. Ford has misunderstood me; this reply clarifies my theory.
Abstract:
This essay presents and defends a triage theory of grading: An item to
be graded should get full credit
if and only if it is clearly or substantially correct, minimal credit if
and only if it is clearly or
substantially incorrect, and partial credit if and only if it is neither
of the above; no other (intermediate)
grades should be given. Details on how to implement this are provided,
and further issues in the
philosophy of grading (reasons for and against grading, grading on a
curve, and the subjectivity of
grading) are discussed.
Abstract:
In this reply to James H. Fetzer's "Minds and Machines:
Limits to Simulations of
Thought and Action", I argue that computationalism should not be
the view that (human)
cognition is computation, but that it should be the view that cognition
(simpliciter) is
computable. It follows that computationalism can be true even if (human)
cognition is
not the result of computations in the brain. I also argue that, if
semiotic systems are
systems that interpret signs, then both humans and computers are
semiotic systems.
Finally, I suggest that minds can be considered as virtual machines
implemented in
certain semiotic systems, primarily the brain, but also AI computers. In
doing so, I take
issue with Fetzer's arguments to the contrary.