# PUBLICATIONS

### (in chronological order)

Last Update: Wednesday, 19 October 2022

• For a list by category (books, journal articles, book chapters, etc.), see my vita.
• For a list by topic (AI, Chinese Room Argument, etc.), see "Publications of William J. Rapaport Arranged Topically".


1. Rapaport, William J. (1974), "Paper Folding and Convergent Sequences", Mathematics Teacher 67: 453-457.

Abstract: Paper folding can help in understanding some infinite sequences and in finding their limits. A simple physical model useful at all levels of ability is presented, and infinite sequences of interest to senior high-school students are explored.

2. Rapaport, William J. (1976a), "On Cogito Propositions", Philosophical Studies 29: 63-68.

• JSTOR version.

Abstract: I argue that George Nakhnikian's analysis of the logic of cogito propositions (roughly, Descartes's 'cogito' and 'sum') is incomplete. The incompleteness is rectified by showing that disjunctions of cogito propositions with contingent, non-cogito propositions satisfy conditions of incorrigibility, self-certifyingness, and pragmatic consistency; hence, they belong to the class of propositions with whose help a complete characterization of cogito propositions is made possible.

3. Rapaport, William J. (1976b), Intentionality and the Structure of Existence (Bloomington, IN: Indiana University Department of Philosophy).

Abstract: This study is the first stage of a long-term project whose chief goal is the construction of a theory that can provide a foundation for a semantics for natural language and an analysis of psychological discourse. The project begins with a consideration of various data and a re-examination of several problems in metaphysics, epistemology, and semantics. As a means of sharpening our philosophical tools, I next take a careful look at theories that have dealt with some of the data and problems, chiefly Alexius Meinong's Theory of Objects. The final stage will be the devising of our own theory. Here, I report on the results of the first two parts of the project.

In the first chapter, I present the data that are my starting point. These are taken from problems concerning the form of negative existential propositions, the truth of sentences with non-referring subject terms, the nature of genuine identity, the uniform nature of ordinary language with respect to fact and fiction, scientific language, the phenomenon of intentionality, and Fregean problems of sense and reference. By looking at the data and problems in their own contexts, I hope to understand better what Meinong and others were after when they put forth their theories. I also elucidate the nature of the problems of natural-language semantics and psychological discourse that are my goals, show why they are important, and discuss the inadequacies of current theories about them. I conclude with a list of criteria that any theory offered as a unifying solution to these problems will have to meet.

In the second chapter, I undertake a careful examination of Meinong's Theory of Objects, which I show adequate to the criteria of the previous chapter. I begin with a relatively informal exegesis of his principal themes and theses, pointing out some problems and tensions along the way. I then turn to a more formal presentation of the theory, revised in order to resolve some of its tensions and amended to take into account the data of the previous chapter. I also take a brief look at some of the historical precedents of Meinong's theory.

In the third chapter, I examine critically two other contemporary theories that satisfy the main theses of "Meinongian" theories (and are therefore adequate to the criteria of the first chapter). One, due to Terence Parsons, is an explicit attempt to reconstruct Meinong's theory and, thus, shares similar goals with the revised theory developed in the previous chapter, but it makes little or no attempt to fit such a theory to any external data. The other, due to Hector-Neri Castañeda, is an original construction of a theory "made to measure" for such data, though not explicitly Meinongian.

In the final chapter, I take stock of the adequacies and inadequacies of my revision of Meinong's theory, for which I make three claims: It is a theory in the spirit of Meinong's (and, for historical purposes, accepts uncritically some of his assumptions); it is more coherent than the original, in that it can withstand the objections raised against Meinong's own version; and it takes into account more of the data, and in a more explicit fashion, than the original. I conclude with a discussion of some of the weaknesses of my system and make some suggestions about the direction of future research.

4. Rapaport, William J. (1977), "Adverbial Theories and Meinongian Theories", abstract of a colloquium presentation given at the 1977 Eastern Division meeting of the American Philosophical Association, Journal of Philosophy 74(10) (October): 635.

5. Rapaport, William J. (1978), "Meinongian Theories and a Russellian Paradox", Noûs 12: 153-180; errata, Noûs 13 (1979) 125.

Abstract: This essay re-examines Meinong's "Über Gegenstandstheorie" and undertakes a clarification and revision of it that is faithful to Meinong, overcomes the various objections to his theory, and is capable of offering solutions to various problems in philosophy of mind and philosophy of language. I then turn to a discussion of a historically and technically interesting Russell-style paradox (now known as "Clark's Paradox") that arises in the modified theory. I also examine the alternative Meinong-inspired theories of Hector-Neri Castañeda and Terence Parsons.

6. Rapaport, William J. (1979a), "An Adverbial Meinongian Theory", Analysis 39: 75-81.

Abstract: A fundamental assumption of Alexius Meinong's 1904 Theory of Objects is the act-content-object analysis of psychological experiences. I suggest that Meinong's theory need not be based on this analysis, but that an adverbial theory might suffice. I then defend the adverbial alternative against an objection raised by Roderick Chisholm, and conclude by presenting an apparently more serious objection based on a paradox discovered by Romane Clark.

7. Rapaport, William J. (1979b), "Interdisciplinary 'Informal Logic' Course", Informal Logic Newsletter 1 (May 1979) 6-7, and 2 (November 1979) 14; also contributed to Informal Logic Newsletter 1 (July 1979).

Abstract: A description of a double-credit, 2-semester course, "Effective Thinking and Communicating", part of SUNY Fredonia's General-Liberal Education Program, taught by faculty from English, Philosophy, Mathematics, and Education.

8. Rapaport, William J. (1979c), "An Algebraic Interpretation of Deontic Logic", abstract of an unpublished talk given at the 1978 Summer Meeting of the Association for Symbolic Logic (University of Wisconsin, Madison), Journal of Symbolic Logic 44(3) (September): 472.

• Manuscript of complete paper

Abstract: A syntactical phenomenon common to logics of commands, of questions, and some deontic logics is investigated using techniques of algebraic logic. The phenomenon is simple to describe. In terms of questions, the result of combining an indicative sentence (e.g., 'It is raining') with an interrogative sentence (e.g., 'Should I go home?') in (say) a conditional construction is an interrogative ('If it is raining, should I go home?'). Similarly, combining an indicative with a command sentence (e.g., 'Go home!') results in a command sentence ('If it is raining, go home!'). In the deontic logic proposed by Hector-Neri Castañeda (in The Structure of Morality and in Thinking and Doing), the result of thus combining an indicative with a "practitive" is a practitive. These syntactical facts are reminiscent of scalar multiplication in vector spaces: The product of a scalar and a vector is a vector. However, neither vector spaces nor modules are general enough to serve as appropriate algebraic analogues of these logics. Taking the sentential (i.e., nonquantificational, nonmodal) fragment C of Castañeda's deontic logic as a paradigm, it is proposed that the relevant algebraic structure is a "dominance algebra" (DA), where ⟨M, R, I, E⟩ is a dominance algebra (over R) iff (i) R is an abstract algebra, (ii) M is non-empty, (iii) I is a non-empty subset of M^(M^n) (n ∈ ω), and (iv) E is a non-empty subset of M^((B^n × M^m) ∪ (M^m × B^n)) (m, n ∈ ω). (Modules are special cases of dominance algebras.) It is proved that the Lindenbaum algebra corresponding to C is a "double Boolean DA" (DBDA) (viz., one in which M and R are Boolean algebras), soundness and completeness theorems for C are obtained, and a representation theorem for DBDAs is proved. The paper concludes with some generalizations of DAs and some remarks on their relevance to Montague-style grammars.
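The definition of a dominance algebra in the abstract above is easier to check clause by clause when displayed; the following LaTeX restates it exactly as given there (the symbol B appears in the abstract without further gloss):

```latex
\langle M, R, I, E \rangle \text{ is a dominance algebra (over } R\text{)} \iff
\begin{aligned}
&\text{(i) } R \text{ is an abstract algebra,}\\
&\text{(ii) } M \neq \emptyset,\\
&\text{(iii) } \emptyset \neq I \subseteq M^{M^n} \quad (n \in \omega),\\
&\text{(iv) } \emptyset \neq E \subseteq M^{(B^n \times M^m)\,\cup\,(M^m \times B^n)} \quad (m, n \in \omega).
\end{aligned}
```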

9. Rapaport, William J. (1981), "How to Make the World Fit Our Language: An Essay in Meinongian Semantics", Grazer Philosophische Studien 14: 1-21.

Abstract: Natural languages differ from most formal languages in having a partial, rather than a total, semantic interpretation function; e.g., some noun phrases don't refer. The usual semantics for handling such noun phrases (e.g., Russell, Quine) require syntactic reform. The alternative presented here is semantic expansion, viz., enlarging the range of the interpretation function to make it total. A specific ontology based on Alexius Meinong's Theory of Objects, which can serve as domain of interpretation, is suggested, and related to the work of Hector-Neri Castañeda, Gottlob Frege, Jerrold J. Katz & Jerry Fodor, Terence Parsons, and Dana Scott.

10. Rapaport, William J. (1982), "Unsolvable Problems and Philosophical Progress", 1982 Prize Essay, American Philosophical Quarterly 19: 289-298.

Abstract: Philosophy has been characterized (e.g., by Benson Mates) as a field whose problems are unsolvable. This has often been taken to mean that there can be no progress in philosophy as there is in mathematics or science. The nature of problems and solutions is considered, and it is argued that solutions are always parts of theories, hence that acceptance of a solution requires commitment to a theory (as suggested by William Perry's scheme of cognitive development). Progress can be had in philosophy in the same way as in mathematics and science by knowing what commitments are needed for solutions. Similar views of Rescher and Castañeda are discussed.

11. Rapaport, William J. (1983), "Meinong, Defective Objects, and (Psycho-)Logical Paradox", Grazer Philosophische Studien 18: 17-39.

Abstract: Alexius Meinong developed notions of "defective objects" and "logical space" (in his On Emotional Presentation) in order to account for logical paradoxes (the Liar, Russell's) and psychological paradoxes (Mally's "self-presentation" paradox). These notions presage work by Herzberger and Kripke, but fail to do the job they were designed for. However, a technique implicit in Meinong's investigation is more successful and can be adapted to resolve a similar paradox discovered by Romane Clark in a revised version of Meinong's Theory of Objects. One family of paradoxes remains, but these are unavoidable and relatively harmless.

12. Rapaport, William J. (1984a), Critical study of Richard Routley's Exploring Meinong's Jungle and Beyond, Philosophy and Phenomenological Research 44: 539-552.

First paragraph: Richard Routley's Exploring Meinong's Jungle and Beyond is a lengthy work of wide scope, its cast of characters ranging from Abelard to Zeno. The nominal star is Meinong, yet the real hero is Reid. Topically, Routley presents us with a virtual encyclopedia of contemporary philosophy, containing original philosophical and logical analyses, as well as a valuable historical critique of Meinong's work.

13. Rapaport, William J., & Shapiro, Stuart C. (1984), "Quasi-Indexical Reference in Propositional Semantic Networks", Proceedings of the 10th International Conference on Computational Linguistics (COLING-84, Stanford University) (Morristown, NJ: Association for Computational Linguistics): 65-70.

Abstract: We discuss how a deductive question-answering system can represent the beliefs or other cognitive states of users, of other (interacting) systems, and of itself. In particular, we examine the representation of first-person beliefs of others (e.g., the system's representation of a user's belief that he himself is rich). Such beliefs have as an essential component "quasi-indexical" pronouns (e.g., 'he himself'), and, hence, require for their analysis a method of representing these pronominal constructions and performing valid inferences with them. The theoretical justification for the approach to be discussed is the representation of nested "de dicto" beliefs (e.g., the system's belief that user-1 believes that system-2 believes that user-2 is rich). We discuss a computer implementation of these representations using the Semantic Network Processing System (SNePS) and an augmented-transition-network parser-generator with a question-answering capability. (A longer version of this paper appears as Rapaport 1986.)

14. Rapaport, William J. (1984b), "Can Philosophy Solve Its Own Problems?" The [SUNY] News 13 (May/June 1984) F2-F3.

15. Rapaport, William J. (1984), "From the Editor", American Philosophical Association Newsletter on Pre-College Instruction in Philosophy 1 (Spring/Summer): 1–2.

16. Rapaport, William J. (1984c), "Critical Thinking and Cognitive Development" [PDF], American Philosophical Association Newsletter on Pre-College Instruction in Philosophy 1 (Spring/Summer 1984) 4-5.

Abstract: At least one of the goals of philosophy education at all levels, but perhaps especially in elementary and secondary schools, ought to be the fostering of the students' development of analytical and critical thinking skills. This might come about in courses in general philosophy, in philosophy units that are parts of courses in other subjects, in ethics courses, or in courses explicitly devoted to critical logic. In this brief note, I call attention to William Perry's theory of cognitive development that, while it is most appropriate for college students, is also relevant to "pre-college" students; discuss its implications for critical thinking programs; and offer some suggestions for further reading for teachers concerned about these implications.

17. Rapaport, William J. (1984d), "Belief Representation and Quasi-Indicators", Technical Report 215 (Buffalo: SUNY Buffalo Department of Computer Science).

Abstract: This report shows how a deductive question-answering system can represent the beliefs or other cognitive states of users, of other (interacting) systems, and of itself. In particular, it examines the representation of first-person beliefs of others (e.g., the system's representation of a user's belief that he himself is rich). Such beliefs have as an essential component "quasi-indexical pronouns" (e.g., 'he himself'), and, hence, require for their analysis a method of representing these pronominal constructions and performing valid inferences with them. The theoretical justification for the approach is the representation of nested "de dicto" beliefs (e.g., the system's belief that user-1 believes that system-2 believes that user-2 is rich). A computer implementation of these representations is provided, using the Semantic Network Processing System (SNePS) and an ATN parser-generator with a question-answering capability.

18. Rapaport, William J. (1984e), "Comments on Dibrell's ‘Persons and the Intentional Stance’", oral comments delivered at the Creighton Club/New York State Philosophical Association (April 1984; Cazenovia, NY).

• Comments on an early (1983), oral version of a paper later published as:
Dibrell, William (1988), "Persons and the Intentional Stance", Journal of Critical Analysis 9(1): 13–25.

• Summary: Dennett (Brainstorms) claims that a necessary condition for being a person is being the object of an intentional stance. Dibrell claims that really having intentionality is also necessary. I claim that Dibrell offers at best a weak argument for his position, that an argument can be given for something like his position, but that that position is consistent with Dennett's. Intentionality is special: In order for it to be possible for an entity to be treated as if it were intentional, it must simulate, hence actually have, intentionality.

19. Rapaport, William J. (1985a), "Meinongian Semantics for Propositional Semantic Networks", Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics (University of Chicago) (Morristown, NJ: Association for Computational Linguistics): 43-48.

Abstract: This paper surveys several approaches to semantic-network semantics that have not previously been treated in the AI or computational-linguistics literature, though there is a large philosophical literature investigating them in some detail. In particular, propositional semantic networks (exemplified by SNePS) are discussed, it is argued that only a fully intensional ("Meinongian") semantics is appropriate for them, and several Meinongian systems are presented.

20. Schagrin, Morton L.; Rapaport, William J.; & Dipert, Randall R. (1985), Logic: A Computer Approach (New York: McGraw-Hill).

Abstract: This text uses concepts from computer science to cover all the traditional topics in an introductory deductive logic course. However, in this book, unlike traditional uses of computers in education, we "explain" to the computer what is to be done. It is, thus, the student who has the active role in the learning process.

• Japanese translation, Logic and Algorithms, by Takemasa Ooya (Tokyo: McGraw-Hill Japan, 1986).

• Italian translation, Logica e Computer, by Gianfranco Forni, revised by Marco Colombetti (Milan: McGraw-Hill Libri Italia, 1986).

• Italian student edition, Logica e Computer, by Gianfranco Forni, adapted by Maria Cristina Valenti (Milan: McGraw-Hill Libri Italia, Divisione scuola, 1986).

• Reprinted in Taiwan.

21. Rapaport, William J. (1985c), "To Be and Not To Be", Noûs 19: 255-271.

Abstract: Terence Parsons's informal theory of intentional objects, their properties, and modes of predication does not adequately reflect ordinary ways of speaking and thinking. Meinongian theories recognizing two modes of predication are defended against Parsons's theory of two kinds of properties. Against Parsons's theory of fictional objects, I argue that no existing entities appear in works of fiction. A formal version of Parsons's theory is presented, and a curious consequence about modes of predication is indicated.

22. Rapaport, William J. (1985d), "Machine Understanding and Data Abstraction in Searle's Chinese Room", Proceedings of the 7th Annual Conference of the Cognitive Science Society (University of California at Irvine) (Hillsdale, NJ: Lawrence Erlbaum Associates): 341-345.

Abstract: In Searle's Chinese-Room Argument, he says that "only something having the same causal powers as brains can have intentionality", but he does not specify what these causal powers are, and this is the biggest gap in his argument. In his book, Intentionality, he says that "mental states are both caused by the operations of the brain and realized in the structure of the brain". A careful analysis of these two notions reveals (1) what the requisite causal powers are, (2) what is wrong with his claim about mental states, and (3) what is wrong with his overall argument. (A longer version of this paper appears as Rapaport 1988a.)

23. Rapaport, William J. (1985/1986), "Non-Existent Objects and Epistemological Ontology", Grazer Philosophische Studien 25/26: 61-95.

Abstract: This essay examines the role of non-existent objects in "epistemological ontology"--the study of the entities that make thinking possible. An earlier revision of Meinong's Theory of Objects is reviewed, Meinong's notions of Quasisein and Aussersein are discussed, and a theory of Meinongian objects as "combinatorially possible" entities is presented.

24. Rapaport, William J. (1986a), "Philosophy, Artificial Intelligence, and the Chinese-Room Argument", Abacus: The Magazine for the Computer Professional 3 (Summer 1986) 6-17; correspondence, Abacus 4 (Winter 1987) 6-7, Abacus 4 (Spring 1987) 5-7.

25. Rapaport, William J. (1986b), "Searle's Experiments with Thought", Philosophy of Science 53: 271–279.

Abstract: A critique of several recent objections to John Searle's Chinese-Room Argument against the possibility of "strong AI" is presented. The objections are found to miss the point, and a stronger argument against Searle is presented, based on a distinction between "syntactic" and "semantic" understanding.

26. Rapaport, William J. (1986c), "Philosophy of Artificial Intelligence: A Course Outline", Teaching Philosophy 9: 103-120.

First paragraph: In Fall 1983, I offered a junior/senior-level course in Philosophy of Artificial Intelligence, in the Department of Philosophy at SUNY Fredonia, after returning there from a year's leave to study and do research in computer science and artificial intelligence at SUNY Buffalo. Of the 30 students enrolled, most were computer-science majors, about a third had no computer background, and only a handful had studied any philosophy. This article describes that course, provides material for use in such a course (including an "Artificial IQ" test), and offers a bibliography of relevant articles in the AI, cognitive science, and philosophical literature.

27. Shapiro, Stuart C., & Rapaport, William J. (1986), "SNePS Considered as a Fully Intensional Propositional Semantic Network", Proceedings of the 5th National Conference on Artificial Intelligence (AAAI-86, Philadelphia) (Los Altos, CA: Morgan Kaufmann), Vol. 1, pp. 278-283.

Abstract: We present a formal syntax and semantics for SNePS considered as the (modeled) mind of a cognitive agent. The semantics is based on a Meinongian theory of the intensional objects of thought that is appropriate for AI considered as "computational philosophy" or "computational psychology".

28. Wiebe, Janyce M., & Rapaport, William J. (1986), "Representing De Re and De Dicto Belief Reports in Discourse and Narrative", Special Issue on Knowledge Representation, Proceedings of the IEEE 74: 1405-1413.

Abstract: Belief reports can be interpreted de re or de dicto; we investigate the disambiguation of belief reports as they appear in discourse and narrative. In earlier work, representations for de re and de dicto belief reports were presented, and the distinction between de re and de dicto belief reports was made solely on the basis of their representations. This analysis is sufficient only when belief reports are considered in isolation. We need to consider more complicated belief structures, in addition to those presented earlier, in order to sufficiently represent de re and de dicto belief reports as they appear in discourse and narrative. Further, we cannot meaningfully apply one, but not the other, of the concepts de re and de dicto to these more complicated belief structures. We argue that the concepts de re and de dicto do not apply to an agent's conceptual representation of her beliefs, but that they apply to the utterance of a belief report on a specific occasion. A cognitive agent interprets a belief report such as "S believes that N is F" or "S said, 'N is F'" (where S and N are names or descriptions, and F is an adjective) de dicto if she interprets it from N's perspective, and she interprets it de re if she interprets it from her own perspective.

29. Rapaport, William J. (1986d), "Logical Foundations for Belief Representation", Cognitive Science 10: 371-422.

Abstract: This essay presents a philosophical and computational theory of the representation of de re, de dicto, nested, and quasi-indexical belief reports expressed in natural language. The propositional Semantic Network Processing System (SNePS) is used for representing and reasoning about these reports. In particular, quasi-indicators (indexical expressions occurring in intentional contexts and representing uses of indicators by another speaker) pose problems for natural-language representation and reasoning systems, because--unlike pure indicators--they cannot be replaced by coreferential NPs without changing the meaning of the embedding sentence. Therefore, the referent of the quasi-indicator must be represented in such a way that no invalid coreferential claims are entailed. The importance of quasi-indicators is discussed, and it is shown that all four of the above categories of belief reports can be handled by a single representational technique using belief spaces containing intensional entities. Inference rules and belief-revision techniques for the system are also examined. (A shorter version of this paper appeared as Rapaport & Shapiro 1984. Both are based on my SUNY Buffalo M.S. thesis.)

30. Hardt (now Loeb), Shoshana H., & Rapaport, William J. (eds.) (1986), "Recent and Current Artificial Intelligence Research in the Department of Computer Science, SUNY Buffalo", AI Magazine 7 (Summer 1986) 91-100.

Abstract: This article contains reports from the various research groups in the SUNY Buffalo Department of Computer Science, Vision Group, and Graduate Group in Cognitive Science. It is organized by the different research topics. However, it should be noted that the individual projects might also be organized around the methodologies and tools used in the research, and, of course, many of the projects fall under more than one category.

31. Rapaport, William J. (1986e), Review of Karel Lambert, Meinong and the Principle of Independence: Its Place in Meinong's Theory of Objects and Its Significance in Contemporary Philosophical Logic (Cambridge, UK: Cambridge University Press, 1983), Journal of Symbolic Logic 51: 248-252.

32. Rapaport, William J. (1986f), Review of Deborah G. Johnson and John W. Snapper (eds.), Ethical Issues in the Use of Computers (Belmont, CA: Wadsworth, 1985), Teaching Philosophy 9(3): 275-278.

33. Rapaport, William J. (1986g), "Pre-College Instruction in Philosophy" (editor's column), American Philosophical Association Newsletter on Teaching Philosophy (Fall 1986) 11-12.

34. Rapaport, William J.; Shapiro, Stuart C.; & Wiebe, Janyce M. (1986), "Quasi-Indicators, Knowledge Reports, and Discourse", Technical Report 86-15 (Buffalo: SUNY Buffalo Department of Computer Science).

Abstract: This paper presents a computational analysis of de re, de dicto, and de se belief and knowledge reports. The analysis solves a problem first observed by Castañeda, namely, that the simple rule that (A knows that P) implies P does not hold if P contains a quasi-indicator. A single rule is presented, in the context of an AI representation and reasoning system, that holds for all propositions P, including quasi-indexical ones. In so doing, the importance of representing proper names explicitly is demonstrated, and support is provided for the necessity of considering sentences in the context of extended text (e.g., discourse or narrative) in order to fully capture certain features of their semantics.

35. Bruder, Gail A.; Duchan, Judith F.; Rapaport, William J.; Segal, Erwin M.; Shapiro, Stuart C.; & Zubin, David A. (1986), "Deictic Centers in Narrative: An Interdisciplinary Cognitive-Science Project", Technical Report 86-20 (Buffalo: SUNY Buffalo Department of Computer Science).

Abstract: This research program consists of a group of projects whose goals are to develop a psychologically real model of a cognitive agent's comprehension of deictic information in narrative text. We will test the hypothesis that the construction and modification of a "deictic center"--the locus in conceptual space-time of the characters, objects, and events depicted by the sentences currently being perceived--is important for comprehension. To test this hypothesis, we plan to develop a computer system that will "read" a narrative and answer questions concerning the agent's beliefs about the objects, relations, and events in the narrative. The final system will be psychologically real, because the details of the algorithms and the efficacy of the linguistic devices will be validated by psychological experiments on normal and abnormal comprehenders. This project will lead to a better understanding of how people comprehend narrative text, it will advance the state of machine understanding, and it will provide insight into the nature of comprehension disorders and their potential remediation.

36. Rapaport, William J. (1987), "God, the Demon, and the Cogito".

Abstract: The purpose of this essay is to exhibit in detail the setting for the version of the Cogito Argument that appears in Descartes's Meditations. I believe that a close reading of the text can shed new light on the nature and role of the "evil demon", on the nature of God as he appears in the first few Meditations, and on the place of the Cogito Argument in Descartes's overall scheme.

37. Rapaport, William J. (1986/1987), "A Computational Theory of Natural-Language Understanding", Computer Science Research Review (Buffalo: SUNY Buffalo Department of Computer Science): 25-31.

Abstract: We are undertaking the design and implementation of a computer system that can parse English sentences containing terms that the system does not "know" (i.e., that are not in the system's lexicon), build a semantic-network representation of these sentences, and express its understanding of the newly acquired terms by generating English sentences from the resulting semantic-network database. The system will be a modification of the natural-language processing capabilities of the SNePS Semantic Network Processing System. It is intended to test the thesis that symbol-manipulating systems (computers) can "understand" natural language.

38. Rapaport, William J. (1987a), "Belief Systems", in Stuart C. Shapiro (ed.), Encyclopedia of Artificial Intelligence (New York: John Wiley): 63-73.

Description: A belief system may be understood as a set of beliefs together with a set of implicit or explicit procedures for acquiring new beliefs. Topics covered in this survey article: Reasons for Studying Such Systems, Types of Theories, Philosophical Background, Surveys of Theories and Systems. (Revised as Rapaport 1992a.)

39. Rapaport, William J. (1987b), "Logic", in Stuart C. Shapiro (ed.), Encyclopedia of Artificial Intelligence (New York: John Wiley): 536-538.

Description: Topics covered in this brief survey article: The Nature of Logic, Systems of Logic, Logic and Artificial Intelligence, Guide to Logic Articles in this Encyclopedia. (Revised as Rapaport 1992b.)

40. Rapaport, William J. (1987c), "Logic, Predicate", in Stuart C. Shapiro (ed.), Encyclopedia of Artificial Intelligence (New York: John Wiley): 538-544.

Description: Topics covered in this introductory article: The Language of Predicate Logic, Deductive Systems of Predicate Logic, Extensions of Predicate Logic, Metatheoretic Results. (Revised as Rapaport 1992c.)

41. Rapaport, William J. (1987d), "Logic, Propositional", in Stuart C. Shapiro (ed.), Encyclopedia of Artificial Intelligence (New York: John Wiley): 558-563.

Description: Topics covered in this introductory article: Language of Propositional Logic, Deductive Systems of Propositional Logic. (Revised as Rapaport 1992d.)

42. Shapiro, Stuart C., & Rapaport, William J. (1987), "SNePS Considered as a Fully Intensional Propositional Semantic Network", in Nick Cercone & Gordon McCalla (eds.), The Knowledge Frontier: Essays in the Representation of Knowledge (New York: Springer-Verlag): 262-315.

Abstract: SNePS, the Semantic Network Processing System, is a semantic-network language with facilities for building semantic networks to represent virtually any kind of information, retrieving information from them, and performing inference with them. Users can interact with SNePS in a variety of interface languages, including a Lisp-like user language, a menu-based screen-oriented editor, a graphics-oriented editor, a higher-order-logic language, and an extendible fragment of English. This article discusses the syntax and semantics for SNePS considered as an intensional knowledge representation system, and provides examples of uses of SNePS for cognitive modeling, database management, pattern recognition, expert systems, belief revision, and computational linguistics.

43. Rapaport, William J. (ed.) (1987e), "SUNY at Buffalo Department of Computer Science Graduate Student Open House", SIGART Newsletter No. 99 (January 1987): 22-24.

44. Rapaport, William J. (ed.) (1987f), "Second Annual SUNY Buffalo Graduate Conference on Computer Science", SIGART Newsletter No. 100 (April 1987): 23-25.

45. Rapaport, William J. (1987g), "Philosophy for Children and Other People", American Philosophical Association Newsletter on Teaching Philosophy (Summer 1987): 19-22.

46. Shapiro, Stuart C.; & Rapaport, William J. (1987), "Knowledge Representation for Natural Language Processing", Presentations from the 1987 Natural Language Planning Workshop (Minnowbrook Conference Center, Blue Mountain Lake, NY) (Rome Air Development Center, Griffiss Air Force Base, NY: NAIC [Northeast Artificial Intelligence Consortium] Technical Report Series): 56–77.

47. Srihari, Sargur N.; Rapaport, William J.; & Kumar, Deepak (1987), "On Knowledge Representation Using Semantic Networks and Sanskrit", Technical Report 87-03 (Buffalo: SUNY Buffalo Department of Computer Science).

• Reprinted as Technical Report TR-8741 (Syracuse: Northeast Artificial Intelligence Consortium, Syracuse University, 1988): 2B-37-2B-51.

Abstract: We give an overview of natural-language understanding and machine translation of natural languages using the SNePS semantic network processing system and examine the use of Sanskrit grammarians' analyses as a knowledge-representation technique.

48. Rapaport, William J.; Wiebe, Janyce M.; & Dipert, Randall R. (1987/1988), "Intensional Knowledge Representation", Computer Science Research Review (Buffalo: SUNY Buffalo Department of Computer Science): 48-63.

Abstract: This article discusses intensional knowledge representation and reasoning as a foundation for modeling, understanding, and expressing the cognitive attitudes of intelligent agents. In particular, we are investigating both "representational" and "pragmatic" issues: The representational issues include (1) the design of representations rich enough to support the interpretation and generation of referring expressions in opaque (i.e., intensional) contexts, to be accomplished by means of structured individuals and the notion of "belief spaces", and (2) the design of representations rich enough to support the use of intentions and practitions for representing and reasoning about action. The pragmatic issues include the recognition of a speaker's intentions (for interpreting referring expressions in opaque contexts) and the generation of referring expressions in opaque contexts based on the intentions of the cognitive agent. This pragmatic part of the overall project uses the results obtained from our representational work on intentions and practitions. The research is of significance for natural-language processing and computational models of cognition and action.

49. Wiebe, Janyce M., & Rapaport, William J. (1988), "A Computational Theory of Perspective and Reference in Narrative", Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics (SUNY Buffalo) (Morristown, NJ: Association for Computational Linguistics): 131-138.

Abstract: Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters' thoughts and perceptions in third-person narrative. An effect of perspective on reference in narrative is addressed: References in passages told from the perspective of a character reflect the character's beliefs. An algorithm that uses the results of our discourse process to understand references with respect to an appropriate set of beliefs is presented.

50. Peters, Sandra M.; Shapiro, Stuart C.; & Rapaport, William J. (1988), "Flexible Natural Language Processing and Roschian Category Theory", Proceedings of the 10th Annual Conference of the Cognitive Science Society (Montreal) (Hillsdale, NJ: Lawrence Erlbaum Associates): 125-131.

Abstract: AI systems typically hand-craft large amounts of knowledge in complex, static, high-level knowledge structures that generally work well in limited domains but that are too rigid to support natural-language understanding because, in part, AI natural-language-processing systems have not taken seriously Rosch's principles of categorization. Thus, such systems have very shallow representations of generic concepts and categories. We discuss the inadequacy of systems based on passive data structures with slots and explicit default values (frames, schemata, scripts), arguing that they lack the flexibility, generality, and adaptability necessary for representing generic concepts in memory. We present alternative "active" representations that are constructed as needed from a less organized semantic memory, whose construction can be influenced by the current task and context. Our implementation uses SNePS and a generalized ATN parser-generator.

51. Nakhimovsky, Alexander, & Rapaport, William J. (1988), "Discontinuities in Narratives", Proceedings of the 12th International Conference of Computational Linguistics (COLING-88, Budapest): 465-470.

Abstract: This paper is concerned with heuristics for segmenting narratives into units that form the basic elements of discourse representations and that constrain the application of focusing algorithms. The following classes of discontinuities are identified: figure-ground, space, time, perspective, and topic. It is suggested that rhetorical relations between narrative units are macro labels that stand for frequently occurring clusters of discontinuities. Heuristics for identifying discontinuities are presented and illustrated in an extended example.

52. Rapaport, William J. (1988a), "To Think or Not To Think", Noûs 22: 585-609.

53. Rapaport, William J. (1988b), "Syntactic Semantics: Foundations of Computational Natural-Language Understanding", in James H. Fetzer (ed.), Aspects of Artificial Intelligence (Dordrecht, The Netherlands: Kluwer Academic Publishers): 81-131.

Abstract: This essay considers what it means to understand natural language and whether a computer running an artificial-intelligence program designed to understand natural language does in fact do so. It is argued that a certain kind of semantics is needed to understand natural language, that this kind of semantics is mere symbol manipulation (i.e., syntax), and that, hence, it is available to AI systems. Recent arguments by Searle and Dretske to the effect that computers cannot understand natural language are discussed, and a prototype natural-language-understanding system is presented as an illustration.

54. Rapaport, William J. (1988c), Review of Joseph Y. Halpern (ed.), Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference (Los Altos, CA: Morgan Kaufmann, 1986), Journal of Symbolic Logic 53: 660-670.

55. Rapaport, William J. (1988d), Review of Michael Devitt & Kim Sterelny, Language and Reality (Cambridge, MA: MIT Press, 1987), and of Robert M. Martin, The Meaning of Language (Cambridge, MA: MIT Press, 1987), Computational Linguistics 14: 108-113.

56. Rapaport, William J., & the SNePS Research Group (1988), "A Knowledge-Representation Challenge for SNePS", SNeRG Technical Note No. 20 (Buffalo: SNePS Research Group, SUNY Buffalo Department of Computer Science).

Abstract: Stuart C. Shapiro challenged the members of SNeRG to "come up with a defensible representation for the information in" a paragraph due to Beverly Woolf, describing an experiment in naive physics. This note reports our reply to the challenge.

57. Roberts, Lawrence D., & Rapaport, William J. (1988), "Quantifier Order, Reflexive Pronouns, and Quasi-Indexicals", Technical Report 88-16 (Buffalo: SUNY Buffalo Department of Computer Science).

Abstract: This report consists of three papers: "Quantifier Order, Reflexive Pronouns, and Quasi-Indexicals", by Lawrence D. Roberts, "Reflections on Reflexives and Quasi-Indexicals" (comments on Roberts's paper), by William J. Rapaport, and Roberts's reply. They were originally presented at the Colloquium on Philosophy of Language at the American Philosophical Association Eastern Division meeting in New York City, December 1987.

Roberts's thesis is that reflexive pronouns do not merely affect reference, but also form intermediate propositional functions or verb phrases that are reflexive. This thesis is defended, first, on the basis of its results for 'only'-statements and for sets collected by reflexive propositional functions and, second, on the basis of its economy in assimilating the middle voice to reflexive propositional functions, and in providing parallel accounts of active, passive, and middle voices. The third support of the thesis is its usefulness in replying to a counterexample by Robert M. Adams to Hector-Neri Castañeda's doctrine of quasi-indexical 'he'.

Rapaport's comments argue that Roberts's observations about reflexives depend on an unwarranted assumption concerning the relation between predicate logic and English sentences, and it provides an alternative solution to the puzzle about quasi-indexicals, based on their computational interpretation in the SNePS knowledge-representation and reasoning system.

58. Shapiro, Stuart C., & Rapaport, William J. (1988, March), "Models and Minds: A Reply to Barnden", Northeast Artificial Intelligence Consortium Technical Report TR-8737 (Syracuse: Syracuse University).

Reprinted as "Models and Minds", Computer Science Research Review (Buffalo: SUNY Buffalo Dept. of Computer Science, 1988–1989): 24–30.

Incorporated into Shapiro & Rapaport 1991, below.

59. Srihari, Rohini K., & Rapaport, William J. (1989), "Extracting Visual Information From Text: Using Captions to Label Human Faces in Newspaper Photographs", Proceedings of the 11th Annual Conference of the Cognitive Science Society (Ann Arbor, MI) (Hillsdale, NJ: Lawrence Erlbaum Associates): 364-371.

Abstract: There are many situations where linguistic and pictorial data are jointly presented to communicate information. A computer model for synthesizing information from the two sources requires an initial interpretation of both the text and the picture, followed by consolidation of information. The problem of performing general-purpose vision (without a priori knowledge) would make this a nearly impossible task. However, in some situations, the text describes salient aspects of the picture. In such situations, it is possible to extract visual information from the text, resulting in a relational graph describing the structure of the accompanying picture. This graph can then be used by a computer vision system to guide the interpretation of the picture. This paper discusses an application whereby information obtained from parsing a caption of a newspaper photograph is used to identify human faces in the photograph. Heuristics are described for extracting information from the caption that contributes to the hypothesized structure of the picture. The top-down processing of the image using this information is discussed.

60. Rapaport, William J.; Segal, Erwin M.; Shapiro, Stuart C.; Zubin, David A.; Bruder, Gail A.; Duchan, Judith F.; Almeida, Michael J.; Daniels, Joyce H.; Galbraith, Mary M.; Wiebe, Janyce M.; & Yuhan, Albert Hanyong (1989), "Deictic Centers and the Cognitive Structure of Narrative Comprehension" (PDF), Technical Report 89-01 (Buffalo: SUNY Buffalo Department of Computer Science).

Abstract: This paper discusses the theoretical background and the preliminary results of an interdisciplinary, cognitive-science research project on the comprehension of narrative text. The unifying theme of our work has been the notion of a deictic center: a mental model of spatial, temporal, and character information contributed by the reader of the narrative and used by the reader in understanding the narrative. We examine the deictic center in the light of our investigations from the viewpoints of linguistics, cognitive psychology, individual differences (language pathology), literary theory of narrative, and artificial intelligence.

61. Rapaport, William J.; Segal, Erwin M.; Shapiro, Stuart C.; Zubin, David A.; Bruder, Gail A.; Duchan, Judith F.; & Mark, David M. (1989), "Cognitive and Computer Systems for Understanding Narrative Text", Technical Report 89-07 (Buffalo: SUNY Buffalo Department of Computer Science).

Abstract: This project continues our interdisciplinary research into computational and cognitive aspects of narrative comprehension. Our ultimate goal is the development of a computational theory of how humans understand narrative texts. The theory will be informed by joint research from the viewpoints of linguistics, cognitive psychology, the study of language acquisition, literary theory, geography, philosophy, and artificial intelligence. The linguists, literary theorists, and geographers in our group are developing theories of narrative language and spatial understanding that are being tested by the cognitive psychologists and language researchers in our group, and a computational model of a reader of narrative text is being developed by the AI researchers, based in part on these theories and results and in part on research on knowledge representation and reasoning. This proposal describes the knowledge-representation and natural-language-processing issues involved in the computational implementation of the theory; discusses a contrast between communicative and narrative uses of language and of the relation of the narrative text to the story world it describes; investigates linguistic, literary, and hermeneutic dimensions of our research; presents a computational investigation of subjective sentences and reference in narrative; studies children's acquisition of the ability to take third-person perspective in their own storytelling; describes the psychological validation of various linguistic devices; and examines how readers develop an understanding of the geographical space of a story. This report is a longer version of a project description submitted to NSF.

62. Peters, Sandra M., & Rapaport, William J. (1990), "Superordinate and Basic Level Categories in Discourse: Memory and Context", Proceedings of the 12th Annual Conference of the Cognitive Science Society (Cambridge, MA) (Hillsdale, NJ: Lawrence Erlbaum Associates): 157-165.

Abstract: Representations for natural category systems and a retrieval-based framework are presented that provide the means for applying generic knowledge about the semantic relationships between entities in discourse and the relative salience of these entities imposed by the current context. An analysis of the use of basic- and superordinate-level categories in discourse is presented, and the use of our representations and processing in the task of discourse comprehension is demonstrated.

63. Srihari, Rohini K., & Rapaport, William J. (1990), "Combining Linguistic and Pictorial Information: Using Captions to Interpret Newspaper Photographs", in Deepak Kumar (ed.), Current Trends in SNePS--Semantic Network Processing System, Lecture Notes in Artificial Intelligence, No. 437 (Berlin: Springer-Verlag): 85-96.

Abstract: There are many situations where linguistic and pictorial data are jointly presented to communicate information. A computer model for synthesizing information from the two sources requires an initial interpretation of both the text and the picture, followed by consolidation of information. The problem of performing general-purpose vision (without a priori knowledge) would make this a nearly impossible task. However, in some situations, the text describes salient aspects of the picture. In such situations, it is possible to extract visual information from the text, resulting in a relational graph describing the structure of the accompanying picture. This graph can then be used by a computer vision system to guide the interpretation of the picture. This paper discusses an application whereby information obtained from parsing a caption of a newspaper photograph is used to identify human faces in the photograph. Heuristics are described for extracting information from the caption, which contributes to the hypothesized structure of the picture. The top-down processing of the image using this information is discussed.

64. Rapaport, William J. (1990a), "Representing Fiction in SNePS", in Deepak Kumar (ed.), Current Trends in SNePS--Semantic Network Processing System, Lecture Notes in Artificial Intelligence, No. 437 (Berlin: Springer-Verlag): 107-121.

Abstract: This paper discusses issues in the representation of fictional entities and the representation of propositions from fiction, using SNePS. It briefly surveys four philosophical ontological theories of fiction and sketches an epistemological theory of fiction (to be implemented in SNePS) using a story operator and rules for allowing propositions to "migrate" into and out of story "spaces".

65. Rapaport, William J. (1990b), "Computer Processes and Virtual Persons: Comments on Cole's 'Artificial Intelligence and Personal Identity'", Technical Report 90-13 (Buffalo: SUNY Buffalo Department of Computer Science).

Abstract: This is a draft of the written version of comments on a paper by David Cole, presented orally at the American Philosophical Association Central Division meeting in New Orleans, 27 April 1990. Following the written comments are 2 appendices: One contains a letter to Cole updating these comments. The other is the handout from the oral presentation.

In general, I am sympathetic to Cole's arguments; my comments seek to clarify and extend the issues. Specifically, I argue that, in Searle's celebrated Chinese-Room Argument, Searle-in-the-room does understand Chinese, in spite of his claims to the contrary. He does this in the sense that he is executing a computer "process" that can be said to understand Chinese. (The argument that the process in fact does understand Chinese is made elsewhere; here, I merely assume that if anything understands Chinese, it is a "process" executed by Searle-in-the-room.) I also show, by drawing an analogy between the way that I add numbers in my head and the way that a calculator adds numbers, that Searle-in-the-room's claim that he does not understand Chinese does not contradict the fact that, by executing the Chinese-natural-language-understanding algorithm, he does understand Chinese.

66. Rapaport, William J. (1991a), "Predication, Fiction, and Artificial Intelligence", Topoi 10: 79-111.

Abstract: This paper describes the SNePS knowledge-representation and reasoning system. SNePS is an intensional, propositional, semantic-network processing system used for research in AI. We look at how predication is represented in such a system when it is used for cognitive modeling and natural-language understanding and generation. In particular, we discuss issues in the representation of fictional entities and the representation of propositions from fiction, using SNePS. We briefly survey four philosophical ontological theories of fiction and sketch an epistemological theory of fiction (implemented in SNePS) using a story operator and rules for allowing propositions to "migrate" into and out of story "spaces". (First half revised and expanded as Shapiro & Rapaport 1995, below.)

67. Rapaport, William J. (1991b), "Meinong, Alexius; I: Meinongian Semantics", in Hans Burkhardt & Barry Smith (eds.), Handbook of Metaphysics and Ontology (Munich: Philosophia Verlag): 516-519.

68. Shapiro, Stuart C., & Rapaport, William J. (1991), "Models and Minds: Knowledge Representation for Natural-Language Competence", in Robert Cummins & John Pollock (eds.), Philosophy and AI: Essays at the Interface (Cambridge, MA: MIT Press): 215-259.

Abstract: Cognitive agents, whether human or computer, that engage in natural-language discourse and that have beliefs about the beliefs of other cognitive agents must be able to represent objects the way they believe them to be and the way they believe others believe them to be. They must be able to represent other cognitive agents both as objects of beliefs and as agents of beliefs. They must be able to represent their own beliefs, and they must be able to represent beliefs as objects of beliefs. These requirements raise questions about the number of tokens of the belief representation language needed to represent believers and propositions in their normal roles and in their roles as objects of beliefs. In this paper, we explicate the relations among nodes, mental tokens, concepts, actual objects, concepts in the belief spaces of an agent and the agent's model of other agents, concepts of other cognitive agents, and propositions. We extend, deepen, and clarify our theory of intensional knowledge representation for natural-language processing, as presented in previous papers and in light of objections raised by others. The essential claim is that tokens in a knowledge-representation system represent only intensions and not extensions. We are pursuing this investigation by building CASSIE, a computer model of a cognitive agent and, to the extent she works, a cognitive agent herself. CASSIE's mind is implemented in the SNePS knowledge-representation and reasoning system.

69. Rapaport, William J. (guest editor) (1991c), Special Issue on Cognitive Science and Artificial Intelligence, Noûs, Vol. 25, No. 4.

70. Rapaport, William J. (1991d), "The Inner Mind and the Outer World: Guest Editor's Introduction", Special Issue on Cognitive Science and Artificial Intelligence, Noûs, 25: 405-410.

Abstract: It is well known that people from other disciplines have made significant contributions to philosophy and have influenced philosophers. It is also true (though perhaps not often realized, since philosophers are not on the receiving end, so to speak) that philosophers have made significant contributions to other disciplines and have influenced researchers in these other disciplines, sometimes more so than they have influenced philosophy itself. But what is perhaps not as well known as it ought to be is that researchers in other disciplines, writing in those other disciplines' journals and conference proceedings, are doing philosophically sophisticated work, work that we in philosophy ignore at our peril. Work in cognitive science and artificial intelligence (AI) often overlaps such paradigmatic philosophical specialties as logic, the philosophy of mind, the philosophy of language, and the philosophy of action. This special issue offers a sampling of research in cognitive science and AI that is philosophically relevant and philosophically sophisticated.

71. Rapaport, William J. (1992a), "Belief Representation Systems" [PDF], in Stuart C. Shapiro (ed.), Encyclopedia of Artificial Intelligence, 2nd edition (New York: John Wiley): 98-110.

72. Rapaport, William J. (1992b), "Logic", in Stuart C. Shapiro (ed.), Encyclopedia of Artificial Intelligence, 2nd edition (New York: John Wiley): 851-853.

73. Rapaport, William J. (1992c), "Logic, Predicate", in Stuart C. Shapiro (ed.), Encyclopedia of Artificial Intelligence, 2nd edition (New York: John Wiley): 866-873.

74. Rapaport, William J. (1992d), "Logic, Propositional", in Stuart C. Shapiro (ed.), Encyclopedia of Artificial Intelligence, 2nd edition (New York: John Wiley): 891-897.

75. Shapiro, Stuart C., & Rapaport, William J. (1992a), "The SNePS Family", Computers and Mathematics with Applications, invited special issue, Vol. 23: 243-275.

Abstract: SNePS, the Semantic Network Processing System, is an intensional propositional semantic network that has been designed to be the mind of a computational cognitive agent. In this article, the main features of SNePS are sketched, its antecedents are discussed, and some example current uses are described.

76. Shapiro, Stuart C., & Rapaport, William J. (1992b), "A Fully Intensional Propositional Semantic Network", in Leslie Burkholder (ed.), Philosophy and the Computer (Boulder, CO: Westview Press): 75-91.

Abstract: Revised version of Shapiro & Rapaport 1986, above. We present a formal syntax and semantics for the SNePS Semantic Network Processing System, based on a Meinongian theory of the intensional objects of thought. Such a theory avoids possible worlds and is appropriate for AI considered as "computational philosophy"--AI as the study of how intelligence is possible--or "computational psychology"--AI with the goal of writing programs as models of human cognitive behavior. Recently, SNePS has been used for a variety of AI research and application projects. These are described in Shapiro & Rapaport 1987, of which the present paper is a much shortened version. Here, we use SNePS to model (or construct) the mind of a cognitive agent, referred to as Cassie (the Cognitive Agent of the SNePS System--an Intelligent Entity).

77. Ehrlich, Karen, & Rapaport, William J. (1992), "Automatic Acquisition of Word Meanings from Natural-Language Contexts", Technical Report 92-03 (Buffalo: SUNY Buffalo Center for Cognitive Science).

Abstract: We are developing a computational theory of a cognitive agent's ability to acquire word meanings from natural-language contexts, especially from narrative. The meaning of a word as understood by such an agent is taken to be its relation to the meanings of other words in a highly interconnected network representing the agent's knowledge. However, because such knowledge is very idiosyncratic, we are researching the means by which an agent can abstract conventional definitions from its individual experiences with a word. We are investigating the nature of information necessary to the production of such conventional definitions, and the processes of revising hypothesized definitions in the light of successive encounters with a word. The theory is being tested by implementing it in a knowledge-representation and reasoning system with facilities both for parsing and generating fragments of natural language (English) and for reasoning and belief revision. Potential applications include education, computational lexicography, and cognitive science studies of narrative understanding.

78. Rapaport, William J. (1993a), "Cognitive Science", in Anthony Ralston & Edwin D. Reilly (eds.), Encyclopedia of Computer Science, 3rd edition (New York: Van Nostrand Reinhold): 185-189.

Description: A survey article covering the following topics: Definition of 'Cognitive Science', History of Cognitive Science, Cognition and Computation, Varieties of Cognitive Science, Cognitive Science Research, and Future of Cognitive Science. (Revised as Rapaport 2000a, below.)

79. Rapaport, William J. (1993b), "Because Mere Calculating Isn't Thinking: Comments on Hauser's 'Why Isn't My Pocket Calculator a Thinking Thing?'", Minds and Machines 3: 11-20.

Abstract: I suggest that on a strong view of thinking, mere calculating is not thinking (and pocket calculators don't think), but on a weak, but unexciting, sense of thinking, pocket calculators do think. I close with some observations on the implications of this conclusion.

80. Ehrlich, Karen, & Rapaport, William J. (1993), "Vocabulary Expansion through Natural-Language Context", Proceedings of the 8th Annual University at Buffalo Graduate Conference on Computer Science (Buffalo: SUNY Buffalo Department of Computer Science): 78-84.

Abstract: We are developing a computational theory of cognitive agents' abilities to expand their vocabulary from natural-language contexts. The meaning of a word for an agent is its relation to the meanings of other words in a semantic network representing the agent's knowledge. Since such knowledge is idiosyncratic, we are researching how the agent can abstract dictionary-like definitions from its individual experiences with a word and revise hypothesized definitions in the light of successive encounters with a word. The theory is being tested by implementing it in a knowledge-representation and reasoning system with facilities both for parsing and generating fragments of English and for reasoning and belief revision.

81. Shapiro, Stuart C., & Rapaport, William J. (1995), "An Introduction to a Computational Reader of Narratives", in Judith Felson Duchan, Gail A. Bruder, & Lynne E. Hewitt (eds.), Deixis in Narrative: A Cognitive Science Perspective (Hillsdale, NJ: Lawrence Erlbaum Associates): 79-105.

Abstract: Revised and expanded version of the first half of Rapaport 1991a, above. We describe the SNePS knowledge-representation and reasoning system. We look at how SNePS is used for cognitive modeling and natural language competence. SNePS has proven particularly useful in our investigations of narrative understanding.

82. Rapaport, William J., & Shapiro, Stuart C. (1995), "Cognition and Fiction", in Judith Felson Duchan, Gail A. Bruder, & Lynne E. Hewitt (eds.), Deixis in Narrative: A Cognitive Science Perspective (Hillsdale, NJ: Lawrence Erlbaum Associates): 107-128.

Abstract: Revised and expanded version of the second half of Rapaport 1991a, above. We discuss issues in the representation of fictional entities and the representation of propositions from fiction, using the SNePS propositional knowledge-representation and reasoning system. We briefly survey four philosophical ontological theories of fiction and sketch an epistemological theory of fiction using a story operator and rules for allowing propositions to "migrate" into and out of story "spaces". An implementation of the theory in SNePS is presented.

83. Galbraith, Mary, & Rapaport, William J. (guest editors) (1995), Where Does I Come From? Special Issue on Subjectivity and the Debate over Computational Cognitive Science, Minds and Machines, Vol. 5, No. 4, pp. 513-620.

84. Rapaport, William J. (1995), "Understanding Understanding: Syntactic Semantics and Computational Cognition", in James E. Tomberlin (ed.), AI, Connectionism, and Philosophical Psychology, Philosophical Perspectives Vol. 9 (Atascadero, CA: Ridgeview): 49-88.

Abstract: John Searle once said: "The Chinese room shows what we knew all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)." I say: "Yes". Stuart C. Shapiro has said: "Does that make any sense? Yes: Everything makes sense. The question is: What sense does it make?" This essay explores what sense it makes to say that syntax by itself is sufficient for semantics.

85. Ehrlich, Karen, & Rapaport, William J. (1995), "A Computational Theory of Vocabulary Expansion: Project Proposal", Technical Report 95-15 (Buffalo: SUNY Buffalo Department of Computer Science) and Technical Report 95-08 (Buffalo: SUNY Buffalo Center for Cognitive Science).

Abstract: This project concerns the development and implementation of a computational theory of how human readers and other natural-language-understanding systems can automatically expand their vocabulary by determining the meaning of a word from context. The word might be unknown to the reader, familiar but misunderstood, or familiar but being used in a new sense. 'Context' includes the prior and immediately surrounding text, grammatical information, and the reader's background knowledge, but no access to a dictionary or other external source of information (including a human). The fundamental thesis is that the meaning of such a word (1) can be determined from context, (2) can be revised and refined upon further encounters with the word, (3) "converges" to a dictionary-like definition if enough context has been provided and there have been enough exposures to the word, and (4) eventually "settles down" to a "steady state", which, however, is always subject to revision upon further encounters with the word. The system is being implemented in the SNePS-2.1 knowledge-representation and reasoning system, which provides a software laboratory for testing and experimenting with the theory. This research is a component of an interdisciplinary, cognitive-science project to develop a computational cognitive model of a reader of narrative text.

86. Koepsell, David R., & Rapaport, William J. (1995), "The Ontology of Cyberspace: Questions and Comments", Technical Report 95-25 (Buffalo: SUNY Buffalo Department of Computer Science) and Technical Report 95-09 (Buffalo: SUNY Buffalo Center for Cognitive Science).

Abstract: This document consists of two papers: "The Ontology of Cyberspace: Preliminary Questions", by David R. Koepsell, and "Comments on 'The Ontology of Cyberspace'", by William J. Rapaport. They were originally presented at the Tri-State Philosophical Association Meeting, St. Bonaventure University, 22 April 1995.

87. Jacquette, Dale, & Rapaport, William J. (1995), "Virtual Relations vs. Virtual Universals: Essay, Comments, and Reply", Technical Report 95-10 (Buffalo: SUNY Buffalo Center for Cognitive Science).

Abstract: This document consists of three papers: "Virtual Relations", by Dale Jacquette; a reply, "Virtual Universals", by William J. Rapaport; and "A Note in Reply to William J. Rapaport on Virtual Relations". They were originally presented at the Marvin Farber Conference on the Ontology and Epistemology of Relations, SUNY Buffalo, 17 September 1994.

88. Rapaport, William J. (1996, in preparation), Understanding Understanding: Semantics, Computation, and Cognition; pre-printed as Technical Report 96-26 (Buffalo: SUNY Buffalo Department of Computer Science).

Abstract: What does it mean to understand language? John Searle once said: "The Chinese Room shows what we knew all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)." Elsewhere, I have argued "that (suitable) purely syntactic symbol-manipulation of a computational natural-language-understanding system's knowledge base suffices for it to understand natural language." The fundamental thesis of the present book is that understanding is recursive: "Semantic" understanding is a correspondence between two domains; a cognitive agent understands one of those domains in terms of an antecedently understood one. But how is that other domain understood? Recursively, in terms of yet another. But, since recursion needs a base case, there must be a domain that is not understood in terms of another. So, it must be understood in terms of itself. How? Syntactically! In syntactically understood domains, some elements are understood in terms of others. In the case of language, linguistic elements are understood in terms of non-linguistic ("conceptual") yet internal elements. Put briefly, bluntly, and a bit paradoxically, semantic understanding is syntactic understanding. Thus, any cognitive agent--human or computer--capable of syntax (symbol manipulation) is capable of understanding language.

The purpose of this book is to present arguments for this position, and to investigate its implications. Subsequent chapters discuss: models and semantic theories (with critical evaluations of work by Arturo Rosenblueth and Norbert Wiener, Brian Cantwell Smith, and Marx W. Wartofsky); the nature of "syntactic semantics" (including the relevance of Antonio Damasio's cognitive neuroscientific theories); conceptual-role semantics (with critical evaluations of work by Jerry Fodor and Ernest Lepore, Gilbert Harman, David Lewis, Barry Loewer, William G. Lycan, Timothy C. Potts, and Wilfrid Sellars); the role of negotiation in interpreting communicative acts (including evaluations of theories by Jerome Bruner and Patrick Henry Winston); Hilary Putnam's and Jerry Fodor's views of methodological solipsism; implementation and its relationships with such metaphysical concepts as individuation, instantiation, exemplification, reduction, and supervenience (with a study of Jaegwon Kim's theories); John Searle's Chinese-Room Argument and its relevance to understanding Helen Keller (and vice versa); and Herbert Terrace's theory of naming as a fundamental linguistic ability unique to humans.

Throughout, reference is made to an implemented computational theory of cognition: a computerized cognitive agent implemented in the SNePS knowledge-representation and reasoning system. SNePS is: symbolic (or "classical"; as opposed to connectionist), propositional (as opposed to being a taxonomic or "inheritance" hierarchy), and fully intensional (as opposed to (partly) extensional), with several types of interrelated inference and belief-revision mechanisms, sensing and effecting mechanisms, and the ability to make, reason about, and execute plans.

89. Ehrlich, Karen, & Rapaport, William J. (1997), "A Computational Theory of Vocabulary Expansion", Proceedings of the 19th Annual Conference of the Cognitive Science Society (Stanford University) (Mahwah, NJ: Lawrence Erlbaum Associates): 205-210.

Abstract: As part of an interdisciplinary project to develop a computational cognitive model of a reader of narrative text, we are developing a computational theory of how natural-language-understanding systems can automatically expand their vocabulary by determining from context the meaning of words that are unknown, misunderstood, or used in a new sense. 'Context' includes surrounding text, grammatical information, and background knowledge, but no external sources. Our thesis is that the meaning of such a word can be determined from context, can be revised upon further encounters with the word, "converges" to a dictionary-like definition if enough context has been provided and there have been enough exposures to the word, and eventually "settles down" to a "steady state" that is always subject to revision upon further encounters with the word. The system is being implemented in the SNePS knowledge-representation and reasoning system. (The online document is a slightly modified version of the Proceedings paper, with the algorithms included.)

90. Rapaport, William J.; Shapiro, Stuart C.; & Wiebe, Janyce M. (1997), "Quasi-Indexicals and Knowledge Reports", Cognitive Science 21: 63-107.

Abstract: We present a computational analysis of de re, de dicto, and de se belief and knowledge reports. Our analysis solves a problem first observed by Hector-Neri Castañeda, namely, that the simple rule

'(A knows that P) implies P'

apparently does not hold if P contains a quasi-indexical. We present a single rule, in the context of a knowledge-representation and reasoning system, that holds for all P, including those containing quasi-indexicals. In so doing, we explore the difference between reasoning in a public communication language and in a knowledge-representation language, we demonstrate the importance of representing proper names explicitly, and we provide support for the necessity of considering sentences in the context of extended discourse (for example, written narrative) in order to fully capture certain features of their semantics.

91. Rapaport, William J. (1997), Review of "Willard van Orman Quine" homepage, in American Philosophical Association Newsletter on Philosophy and Computers Vol. 97, No. 1 (Fall 1997): 40-41.

92. Orilia, Francesco, & Rapaport, William J. (eds.) (1998a), Thought, Language, and Ontology: Essays in Memory of Hector-Neri Castañeda, Philosophical Studies Series (Dordrecht, The Netherlands: Kluwer Academic Publishers).

Abstract: The late Hector-Neri Castañeda, the Mahlon Powell Professor of Philosophy at Indiana University, and founding editor of Noûs, has deeply influenced current analytic philosophy with diverse contributions, including guise theory, the theory of indicators and quasi-indicators, and the proposition/practition theory. This volume collects 15 papers--for the most part previously unpublished--in ontology, philosophy of language, cognitive science, and related areas by ex-students of Professor Castañeda, most of whom are now well-known researchers or even distinguished scholars. The authors share the conviction that Castañeda's work must continue to be explored and that his philosophical methodology must continue to be applied in an effort to further illuminate all the issues that he so deeply investigated. The topics covered by the contributions include intensional contexts, possible worlds, quasi-indicators, guise theory, property theory, Russell's substitutional theory of propositions, event theory, the adverbial theory of mental attitudes, existentialist ontology, and Plato's, Leibniz's, Kant's, and Peirce's ontologies. An introduction by the editors relates all these themes to Castañeda's philosophical interests and methodology.

93. Orilia, Francesco, & Rapaport, William J. (1998b), "Thought, Language, and Ontology: An Introduction", in Francesco Orilia & William J. Rapaport, (eds.), Thought, Language, and Ontology: Essays in Memory of Hector-Neri Castañeda (Dordrecht: Kluwer Academic Publishers): ix-xxi.

94. Rapaport, William J. (1998a), "Prolegomena to a Study of Hector-Neri Castañeda's Influence on Artificial Intelligence: A Survey and Personal Reflections", in Francesco Orilia & William J. Rapaport, (eds.), Thought, Language, and Ontology: Essays in Memory of Hector-Neri Castañeda (Dordrecht: Kluwer Academic Publishers): 345-367.

Abstract: A survey of the direct and indirect influence of the philosophical theories of Hector-Neri Castañeda on AI research.

95. Rapaport, William J. (1998b), "Academic Family Tree of Hector-Neri Castañeda", in Francesco Orilia & William J. Rapaport, (eds.), Thought, Language, and Ontology: Essays in Memory of Hector-Neri Castañeda (Dordrecht: Kluwer Academic Publishers): 369-374.

Abstract: A list of Castañeda's Ph.D. students, their students (i.e., Castañeda's "grandstudents"), etc.

96. Rapaport, William J. (1998c), "How Minds Can Be Computational Systems", Journal of Experimental and Theoretical Artificial Intelligence 10: 403-419.

Abstract: The proper treatment of computationalism, as the thesis that cognition is computable, is presented and defended. Some arguments of James H. Fetzer against computationalism are examined and found wanting, and his positive theory of minds as semiotic systems is shown to be consistent with computationalism. An objection is raised to an argument of Selmer Bringsjord against one strand of computationalism, viz., that Turing-Test-passing artifacts are persons; it is argued that, whether or not this objection holds, such artifacts will inevitably be persons.

97. Rapaport, William J. (1999), "Implementation Is Semantic Interpretation", The Monist 82: 109-130.

Abstract: What is the computational notion of "implementation"? It is not individuation, instantiation, reduction, or supervenience. It is, I suggest, semantic interpretation.

• The online version differs from the published version in being a bit longer and going into a bit more detail.
• Rapaport (in press-c) is a sequel.

98. Rapaport, William J., & Shapiro, Stuart C. (1999), "Cognition and Fiction: An Introduction", in Ashwin Ram & Kenneth Moorman (eds.), Understanding Language Understanding: Computational Models of Reading (Cambridge, MA: MIT Press): 11-25.

99. Rapaport, William J. (2000a), "Cognitive Science", in Anthony Ralston, Edwin D. Reilly, & David Hemmendinger (eds.), Encyclopedia of Computer Science, 4th edition (New York: Grove's Dictionaries): 227-233.

100. Rapaport, William J., & Ehrlich, Karen (2000b), "A Computational Theory of Vocabulary Acquisition", in Lucja M. Iwanska, & Stuart C. Shapiro (eds.), Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language (Menlo Park, CA/Cambridge, MA: AAAI Press/MIT Press): 347-375.

Abstract: As part of an interdisciplinary project to develop a computational cognitive model of a reader of narrative text, we are developing a computational theory of how natural-language-understanding systems can automatically acquire new vocabulary by determining from context the meaning of words that are unknown, misunderstood, or used in a new sense. 'Context' includes surrounding text, grammatical information, and background knowledge, but no external sources. Our thesis is that the meaning of such a word can be determined from context, can be revised upon further encounters with the word, "converges" to a dictionary-like definition if enough context has been provided and there have been enough exposures to the word, and eventually "settles down" to a "steady state" that is always subject to revision upon further encounters with the word. The system is being implemented in the SNePS knowledge-representation and reasoning system.

101. Rapaport, William J. (2000c), Review of Steven Pinker, How the Mind Works (New York: W. W. Norton, 1997), Minds and Machines.

102. Rapaport, William J. (2000d), "How to Pass a Turing Test: Syntactic Semantics, Natural-Language Understanding, and First-Person Cognition", Special Issue on Alan Turing and Artificial Intelligence, Journal of Logic, Language, and Information 9(4): 467-490.

• Reprinted in James H. Moor (ed.), The Turing Test: The Elusive Standard of Artificial Intelligence (Dordrecht, The Netherlands: Kluwer Academic Publishers, 2003): 161-184.

Abstract: A theory of "syntactic semantics" is advocated as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, as the study of relations between symbols and meanings, can be turned into syntax--a study of relations among symbols (including meanings)--and hence syntax can suffice for the semantical enterprise. (2) Semantics, as the process of understanding one domain modeled in terms of another, can be viewed recursively: The base case of semantic understanding--understanding a domain in terms of itself--is syntactic understanding. (3) An internal (or "narrow"), first-person point of view makes an external (or "wide"), third-person point of view otiose for purposes of understanding cognition.

103. Rapaport, William J. (2002a), "Holism, Conceptual-Role Semantics, and Syntactic Semantics", Minds and Machines 12(1): 3-59.

Abstract: This essay continues my investigation of "syntactic semantics": the theory that, pace Searle's Chinese-Room Argument, syntax does suffice for semantics (in particular, for the semantics needed for a computational cognitive theory of natural-language understanding). Here, I argue that syntactic semantics (which is internal and first-person) is what has been called a conceptual-role semantics: The meaning of any expression is the role that it plays in the complete system of expressions. Such a "narrow", conceptual-role semantics is the appropriate sort of semantics to account (from an "internal", or first-person perspective) for how a cognitive agent understands language. Some have argued for the primacy of external, or "wide", semantics, while others have argued for a two-factor analysis. But, although two factors can be specified--one internal and first-person, the other only specifiable in an external, third-person way--only the internal, first-person one is needed for understanding how someone understands. A truth-conditional semantics can still be provided, but only from a third-person perspective.

104. Rapaport, William J., & Kibby, Michael W. (2002b), "Contextual Vocabulary Acquisition: A Computational Theory and Educational Curriculum", in Nagib Callaos, Ana Breda, and Ma. Yolanda Fernandez J. (eds.), Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics (SCI 2002; Orlando, FL) (Orlando: International Institute of Informatics and Systemics), Vol. II: Concepts and Applications of Systemics, Cybernetics, and Informatics I, pp. 261-266.

Abstract: We discuss a research project that develops and applies algorithms for computational contextual vocabulary acquisition (CVA): learning the meaning of unknown words from context. We try to unify a disparate literature on the topic of CVA from psychology, first- and second-language acquisition, and reading science, in order to help develop these algorithms: We use the knowledge gained from the computational CVA system to build an educational curriculum for enhancing students' abilities to use CVA strategies in their reading of science texts at the middle-school and college undergraduate levels. The knowledge gained from case studies of students using our CVA techniques feeds back into further development of our computational theory.

105. Rapaport, William J. (2003), "What Did You Mean by That? Misunderstanding, Negotiation, and Syntactic Semantics", Minds and Machines 13(3): 397-427. [PDF]
Abstract: Syntactic semantics is a holistic, conceptual-role-semantic theory of how computers can think. But Fodor & Lepore have mounted a sustained attack on holistic semantic theories. However, their major problem with holism (that, if holism is true, then no two people can understand each other) can be fixed by means of negotiating meanings. Syntactic semantics and Fodor & Lepore's objections to holism are outlined; the nature of communication, miscommunication, and negotiation is discussed; Bruner's ideas about the negotiation of meaning are explored; and some observations on a problem for knowledge representation in AI raised by Winston are presented.

106. Rapaport, William J. (2003), "What Is the 'Context' for Contextual Vocabulary Acquisition?" (PDF), in Peter P. Slezak (ed.), Proceedings of the 4th Joint International Conference on Cognitive Science/7th Australasian Society for Cognitive Science Conference (ICCS/ASCS-2003; Sydney, Australia) (Sydney: University of New South Wales), Vol. 2, pp. 547-552.
Abstract: "Contextual" vocabulary acquisition is the active, deliberate acquisition of a meaning for a word in a text by reasoning from textual clues and prior knowledge, including language knowledge and hypotheses developed from prior encounters with the word, but without external sources of help such as dictionaries or people. But what is "context"? Is it just the surrounding text? Does it include the reader's background knowledge? I argue that the appropriate context for contextual vocabulary acquisition is the reader's "internalization" of the text "integrated" into the reader's "prior" knowledge via belief revision.

107. Rapaport, William J. (2005), "In Defense of Contextual Vocabulary Acquisition: How to Do Things with Words in Context", in Anind Dey, Boicho Kokinov, David Leake, & Roy Turner (eds.), Proceedings of the 5th International and Interdisciplinary Conference on Modeling and Using Context (Context-05) (Berlin: Springer-Verlag Lecture Notes in Artificial Intelligence 3554): 396-409.

Abstract: "Context" is notoriously vague, and its uses multifarious. Researchers in "contextual vocabulary acquisition" differ over the kinds of context involved in vocabulary learning, and the methods and benefits thereof. This talk presents a computational theory of contextual vocabulary acquisition, identifies the relevant notion of context, exhibits the assumptions behind some classic objections [due to Beck, McKeown, & McCaslin 1983 and to Schatz & Baldwin 1986], and defends our theory against these objections.

108. Rapaport, William J. (2005), "Castañeda, Hector-Neri", in John R. Shook (ed.), The Dictionary of Modern American Philosophers, 1860-1960 (Bristol, UK: Thoemmes Press): 452-457.
Abstract: A medium-sized philosophical biography.

109. Rapaport, William J. (2005), Review of Shieber's The Turing Test: Verbal Behavior as the Hallmark of Intelligence, in Computational Linguistics 31(3): 407-412.

110. Rapaport, William J. (2005), "Implementation Is Semantic Interpretation: Further Thoughts", Special Issue on Theoretical Cognitive Science, Journal of Experimental and Theoretical Artificial Intelligence 17(4; December): 385-417. [PDF]
Abstract: A sequel to Rapaport 1999. This essay explores the implications of the thesis that implementation is semantic interpretation. Implementation is (at least) a ternary relation: I is an implementation of an "Abstraction" A in some medium M. Examples are presented from the arts, from language, from computer science, and from cognitive science, where both brains and computers can be understood as implementing a "mind Abstraction". Implementations have side effects due to the implementing medium; these can account for several puzzles surrounding qualia. Finally, a benign argument for panpsychism is developed.

111. Rapaport, William J. (2005), "Philosophy of Computer Science: An Introductory Course", Teaching Philosophy 28(4): 319-341.

Abstract: There are many branches of philosophy called "the philosophy of X", where X = disciplines ranging from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well-developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.

112. Rapaport, William J. (2006), "The Turing Test", in Keith Brown (ed.), Encyclopedia of Language and Linguistics, 2nd Edition, Vol. 13, pp. 151-159. (Oxford: Elsevier).
Abstract: This article describes the Turing Test for determining whether a computer can think. It begins with a description of an "imitation game" for discriminating between a man and a woman, discusses variations of the Test, standards for passing the Test, and experiments with real Turing-like tests (including Eliza and the Loebner competition). It then considers what a computer must be able to do in order to pass a Turing Test, including whether written linguistic behavior is a reasonable replacement for "cognition", what counts as understanding natural language, the role of world knowledge in understanding natural language, and the philosophical implications of passing a Turing Test, including whether passing is a sufficient demonstration of cognition, briefly discussing two counterexamples: a table-lookup program and the Chinese Room Argument.

113. Rapaport, William J. (2006), Review of John Preston & Mark Bishop (eds.), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, in Australasian Journal of Philosophy 84(1) (March): 129-133.

114. Rapaport, William J. (2006), "How Helen Keller Used Syntactic Semantics to Escape from a Chinese Room", Minds and Machines 16(4): 381-436.

Abstract: A computer can come to understand natural language the same way Helen Keller did: by using "syntactic semantics"—a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller's experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, the essay analyzes Keller's belief that learning that "everything has a name" was the key to her success, enabling her to "partition" her mental concepts into mental representations of: words, objects, and the naming relations between them. It next looks at Herbert Terrace's theory of naming, which is akin to Keller's, and which only humans are supposed to be capable of. The essay suggests that computers at least, and perhaps non-human primates, are also capable of this kind of naming.

115. Shapiro, Stuart C.; Rapaport, William J.; Kandefer, Michael; Johnson, Frances L.; & Goldfain, Albert (2007), "Metacognition in SNePS", AI Magazine 28(1) (Spring): 17-31.
Abstract: The SNePS knowledge representation, reasoning, and acting system has several features that facilitate metacognition in SNePS-based agents. The most prominent is the fact that propositions are represented in SNePS as terms rather than as sentences, so that propositions can occur as arguments of propositions and other expressions without leaving first-order logic. The SNePS acting subsystem is integrated with the SNePS reasoning subsystem in such a way that: there are acts that affect what an agent believes; there are acts that specify knowledge-contingent acts and lack-of-knowledge acts; there are policies that serve as "daemons", triggering acts when certain propositions are believed or wondered about. The GLAIR agent architecture supports metacognition by specifying a location for the source of self-awareness, and of a sense of situatedness in the world. Several SNePS-based agents have taken advantage of these facilities to engage in self-awareness and metacognition.

116. Rapaport, William J.; & Kibby, Michael W. (2007), "Contextual Vocabulary Acquisition as Computational Philosophy and as Philosophical Computation", Journal of Experimental and Theoretical Artificial Intelligence: 19(1) (March): 1-17.
Abstract: Contextual vocabulary acquisition (CVA) is the active, deliberate acquisition of a meaning for an unknown word in a text by reasoning from textual clues, prior knowledge, and hypotheses developed from prior encounters with the word, but without external sources of help such as dictionaries or people. Published strategies for doing CVA vaguely and unhelpfully tell the reader to "guess". AI algorithms for CVA can fill in the details that replace "guessing" by "computing"; these details can then be converted to a curriculum that can be taught to students to improve their reading comprehension. Such algorithms also suggest a way out of the Chinese Room and show how holistic semantics can withstand certain objections.

• HTML

117. Rapaport, William J. (2007), "Searle on Brains as Computers", American Philosophical Association Newsletter on Philosophy and Computers 6(2) (Spring): 4-9.
Abstract: Is the brain a digital computer? Searle says that this is meaningless; I say that it is an empirical question. Is the mind a computer program? Searle says no; I say: properly understood, yes. Can the operations of the brain be simulated on a digital computer? Searle says: trivially yes; I say yes, but that it is not trivial.

118. Kibby, Michael W.; Rapaport, William J.; Wieland, Karen M.; & Dechert, Deborah A. (2008), "Play Lesson 4: CSI: Contextual Semantic Investigation" in Lawrence Baines (ed.), A Teacher's Guide to Multisensory Learning: Improving Literacy by Engaging the Senses (Alexandria, VA: Association for Supervision and Curriculum Development): 163-173.

119. Rapaport, William J. (2008–present), Contributions to AskPhilosophers.org

120. Rapaport, William J.; & Kibby, Michael W. (2010, unpublished), "Contextual Vocabulary Acquisition: From Algorithm to Curriculum".
Abstract: Deliberate contextual vocabulary acquisition (CVA) is a reader's ability to figure out a (not "the") meaning for (not "of") an unknown word from its "context", without external sources of help such as dictionaries or people. The appropriate context for such CVA is the "belief-revised integration" of the reader's prior knowledge with the reader's "internalization" of the text. We discuss unwarranted assumptions behind some classic objections to CVA, and present and defend a computational theory of CVA that we have adapted to a new classroom curriculum designed to help students use CVA to improve their reading comprehension.

• Revision published as Rapaport & Kibby 2014

121. Rapaport, William J. (2011), "Yes, She Was! Reply to Ford's ‘Helen Keller Was Never in a Chinese Room’", Minds and Machines 21(1) (Spring): 3–17.

122. Rapaport, William J. (2011), "A Triage Theory of Grading: The Good, the Bad, and the Middling", Teaching Philosophy 34(4) (December): 347–372.
Abstract: This essay presents and defends a triage theory of grading: An item to be graded should get full credit if and only if it is clearly or substantially correct, minimal credit if and only if it is clearly or substantially incorrect, and partial credit if and only if it is neither of the above; no other (intermediate) grades should be given. Details on how to implement this are provided, and further issues in the philosophy of grading (reasons for and against grading, grading on a curve, and the subjectivity of grading) are discussed.

123. Rapaport, William J. (2012), "Semiotic Systems, Computers, and the Mind: How Cognition Could Be Computing", International Journal of Signs and Semiotic Systems 2(1) (January-June): 32–71.

Abstract: In this reply to James H. Fetzer's "Minds and Machines: Limits to Simulations of Thought and Action", I argue that computationalism should not be the view that (human) cognition is computation, but that it should be the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. I also argue that, if semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, I suggest that minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers. In doing so, I take issue with Fetzer's arguments to the contrary.

124. Rapaport, William J. (2012), "Can't We Just Talk? Commentary on Arel's ‘The Threat of a Reward-Driven Adversarial Artificial General Intelligence’", in Amnon H. Eden, James H. Moor, Johnny H. Søraker, & Eric Steinhart (eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment (Berlin: Springer): 59–60.

Summary: Arel argues that reward-driven AGIs "will inevitably pose a danger to humanity". I question the inevitability on the grounds that the AGI's ability to reason and use language will allow us to collaborate and negotiate with it, as we do with other humans.

125. Rapaport, William J. (2013), "Meinongian Semantics and Artificial Intelligence", in special issue on "Meinong Strikes Again: Return to Impossible Objects 100 Years Later", guest edited by Laura Mari & Michele Paolini Paoletti, Humana.Mente: Journal of Philosophical Studies 25 (December): 25–52.

Abstract: This essay describes computational semantic networks for a philosophical audience and surveys several approaches to semantic-network semantics. In particular, propositional semantic networks (exemplified by SNePS) are discussed; it is argued that only a fully intensional, Meinongian semantics is appropriate for them; and several Meinongian systems are presented. (This essay was originally written a long time ago, in March 1985. In the intervening decade, much progress has been made that is not reflected in the essay. I have, however, updated some of the references, and the promissory notes with respect to an intensional semantics for SNePS have since been cashed, in part, in Shapiro & Rapaport 1987, 1991.) [A shorter version appeared as Rapaport 1985a.]

126. Rapaport, William J.; & Kibby, Michael W. (2014), "Contextual Vocabulary Acquisition: From Algorithm to Curriculum", in Adriano Palma (ed.), Castañeda and His Guises: Essays on the Work of Hector-Neri Castañeda (Berlin: Walter de Gruyter): 107–150.

Abstract: Deliberate contextual vocabulary acquisition (CVA) is a reader's ability to figure out a meaning for an unknown word from its "context" without external sources of help. The appropriate context for such CVA is the "belief-revised integration" of the reader's prior knowledge with the reader's "internalization" of the text. We present and defend a computational theory of CVA that we have adapted to a new classroom curriculum designed to help students use CVA to improve their reading comprehension.

• Revision of Rapaport & Kibby 2010, with more about Castañeda and less about CVA.

127. Rapaport, William J. (2017), "What Is Computer Science?", American Philosophical Association Newsletter on Philosophy and Computers 16(2) (Spring): 2–22.

Abstract: A survey of various proposed definitions of ‘computer science’, arguing that it is a "portmanteau" scientific study of a family of topics surrounding both theoretical and practical computing. Its single most central question is: What can be computed (and how)? Four other questions follow logically from that central one: What can be computed efficiently, and how? What can be computed practically, and how? What can be computed physically, and how? What should be computed, and how?

128. Rapaport, William J. (2017), "Semantics as Syntax", American Philosophical Association Newsletter on Philosophy and Computers 17 (1) (Fall): 2–11.

Abstract: Let S, T, be non-empty sets. The syntax of S (or T) is the set of properties of, and relations among, the members of S (or T). The ontology of T (or S) is its syntax. The semantic interpretation of S by T is a set of relations between S and T. Semantics is the study of such relations between S and T. Let U = S ∪ T. Then the syntax of U provides the semantics of S in terms of T. Hence, semantics is syntax.
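The abstract's definitions can be illustrated concretely. The following sketch is my own illustration (not from the paper), modeling "syntax" as relations among a set's members and a "semantic interpretation" as a relation between two sets, using small finite sets chosen arbitrarily:

```python
# Hypothetical illustration of the abstract's definitions with finite sets.

S = {"a", "b"}           # a set of symbols
T = {1, 2}               # a set of interpreting entities

# "Syntax" of a set: properties of, and relations among, its members.
syntax_S = {("a", "b")}  # e.g., 'a' precedes 'b'
syntax_T = {(1, 2)}      # e.g., 1 < 2

# Semantic interpretation of S by T: a set of relations between S and T.
interpretation = {("a", 1), ("b", 2)}

# Form the union U = S ∪ T.
U = S | T

# The "syntax" of U (relations among U's members) includes the
# interpretation relation itself, since each interpreting pair
# relates two members of U.
syntax_U = syntax_S | syntax_T | interpretation

# Hence the semantics of S in terms of T is part of U's syntax:
assert interpretation <= syntax_U
```

On this toy model, the interpretation relation between S and T becomes just another relation among members of U, which is the sense in which "semantics is syntax."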

129. Rapaport, William J. (2017), "On the Relation of Computing to the World", in Thomas M. Powers (ed.), Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics (Cham, Switzerland: Springer): 29–64.

Abstract: I survey a common theme that pervades the philosophy of computer science (and philosophy more generally): the relation of computing to the world. Are algorithms merely certain procedures entirely characterizable in an "indigenous", "internal", "intrinsic", "local", "narrow", "syntactic" (more generally: "intra-system"), purely-Turing-machine language? Or must algorithms interact with the real world, having a purpose that is expressible only in a language with an "external", "extrinsic", "global", "wide", "inherited" (more generally: "extra-" or "inter-"system) semantics?

130. Rapaport, William J. (2018), "Syntactic Semantics and the Proper Treatment of Computationalism" in Marcel Danesi (ed.), Empirical Research on Semiotics and Visual Rhetoric (Hershey, PA: IGI Global): 128–176 (references, pp. 273–307).

• Revision of Rapaport 2012.

Abstract: Computationalism should not be the view that (human) cognition is computation; it should be the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. If semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers.

131. Rapaport, William J. (2018), "What Is a Computer? A Survey", Minds and Machines 28(3): 385–426.

Abstract: A critical survey of some attempts to define 'computer', beginning with some informal ones (from reference books, and definitions due to H. Simon, A.L. Samuel, and M. Davis), then critically evaluating those of three philosophers (J.R. Searle, P.J. Hayes, and G. Piccinini), and concluding with an examination of whether the brain and the universe are computers.

132. Rapaport, William J. (2018), "Comments on Bringsjord's 'Logicist Remarks'", American Philosophical Association Newsletter on Philosophy and Computers 18(1) (Fall): 32–34.

Abstract: A reply to Bringsjord, Selmer (2018), "Logicist Remarks on Rapaport on Philosophy of Computer Science+", American Philosophical Association Newsletter on Philosophy and Computers 18(1) (Fall): 28–31.

133. Hill, Robin K.; & Rapaport, William J. (2018), "Exploring the Territory: The Logicist Way and Other Paths into the Philosophy of Computer Science", American Philosophical Association Newsletter on Philosophy and Computers, 18(1): 34–37.

From the first paragraph: The scholarly work on the philosophy of computer science that most nearly achieves comprehensive coverage is the "Philosophy of Computer Science" textbook, manifest as an ever-growing resource online, by William J. Rapaport, winner of both the Covey Award and the Barwise Prize in 2015. His former Ph.D. student, Robin K. Hill, interviews him herein on that and related subjects.

134. Rapaport, William J. (2019), "Computers Are Syntax All the Way Down: Reply to Bozşahin", Minds and Machines 29(2) (Summer): 227–237.

Abstract: A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.

135. Rapaport, William J. (2020), "Syntax, Semantics, and Computer Programs: Comments on Turner's Computational Artifacts", Philosophy and Technology 33: 309–321.

Abstract: Turner argues that computer programs must have purposes, that implementation is not a kind of semantics, and that computers might need to understand what they do. I respectfully disagree: Computer programs need not have purposes, implementation is a kind of semantic interpretation, and neither human computers nor computing machines need to understand what they do.

136. Rapaport, William J. (2020), "What Is Artificial Intelligence?", Journal of Artificial General Intelligence 11(2): 52–56.

Abstract: Wang (2019) claims to define AI in the sense of delimiting its research area. But he offers a definition only of 'intelligence' (not of AI). And it is only a theory of what intelligence is (artificial or otherwise). I offer and defend a definition of AI as computational cognition.

137. Rapaport, William J. (forthcoming), Philosophy of Computer Science: An Introduction to the Issues and the Literature (Wiley-Blackwell).

138. Rapaport, William J. (forthcoming), "A Role for Qualia", Journal of Artificial Intelligence and Consciousness.

Abstract: If qualia are mental, and if the mental is functional, then so are qualia. But, arguably, qualia are not functional. A resolution of this is offered based on a formal similarity between qualia and numbers. Just as certain sets "play the role of" the number 3 in Peano's axioms, so a certain physical implementation of a color plays the role of, say, red in a (computational) cognitive agent's "cognitive economy".

139. Rapaport, William J. (in progress), "Yes, AI Can Match Human Intelligence" [PDF]

Abstract: This is a draft of the "Yes" side of a proposed debate book, Will AI Match (or Even Exceed) Human Intelligence? (Routledge). The "No" position will be taken by Selmer Bringsjord, and will be followed by rejoinders on each side.

AI should be considered as the branch of computer science that investigates whether, and to what extent, cognition is computable. Computability is a logical or mathematical notion. So, the only way to prove that something—including (some aspect of) cognition—is not computable is via a logical or mathematical argument. Because no such argument has met with general acceptance (in the way that other non-computability proofs, such as that of the Halting Problem, have been generally accepted), there is no logical reason to think that AI won't eventually match human intelligence. Along the way, I discuss the Turing Test as a measure of AI's success at showing the computability of various aspects of cognition, and I consider the potential roadblocks set by consciousness, qualia, and mathematical intuition.
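The abstract's appeal to generally accepted non-computability proofs can be made concrete with the classic diagonal argument for the Halting Problem. The sketch below is my own illustration (not from the draft): it shows why no total `halts` decider can exist, since a self-applied "diagonal" program would halt exactly when the decider says it doesn't.

```python
# Hypothetical sketch of the Halting Problem diagonal argument.

def halts(program, arg):
    """Assumed (for contradiction) to be a total decider:
    returns True iff program(arg) would halt. No such total
    decider can actually exist, so this is a placeholder."""
    raise NotImplementedError("no total halting decider exists")

def diagonal(program):
    # If 'program' would halt on itself, loop forever; otherwise halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# The contradiction: diagonal(diagonal) halts
# iff halts(diagonal, diagonal) is False,
# i.e., iff diagonal(diagonal) does NOT halt.
```

This is the kind of logical argument the abstract has in mind: non-computability is established by proof, not by empirical observation, and no analogous proof for cognition has won general acceptance.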