Symbolic computational cognitivism is often called the ``Physical Symbol System Hypothesis'' (PSSH) or the ``Representational Theory of the Mind'' (RTM). The PSSH, due to Allen Newell and Herbert Simon (1976), is offered as a solution to the problem of ``how it is possible for mind to exist in this physical universe'' (Newell 1981: 84; cf. Pylyshyn 1985: 75): Mind exists as a physically implemented ``symbol system''. The concept of a physical symbol system is ``the most fundamental contribution ... of artificial intelligence and computer science to'' cognitive science (Newell 1981: 38). A symbol system is anything capable of carrying out any effectively computable procedure, i.e., a universal machine (which, by Church's Thesis, could be a Turing machine, a recursive function, a general-purpose digital computer, etc.). A physical symbol system is a physical implementation of such a symbol system. The PSSH states that a physical system is capable of exhibiting intelligent behavior (where intelligence is defined in terms of human intelligence) if and only if it is a physical symbol system (cf. Newell 1981: 72). This is taken to be an empirical hypothesis, whose evidence comes from work in symbolic, i.e., non-connectionist, artificial intelligence (Newell 1981: 73). Newell argues that intelligent physical systems are physical symbol systems, since intelligence requires representations of a wide variety of goals and states, and since such flexible representations require symbols (hence the RTM; cf. Newell 1981: 58, 62; Pylyshyn 1985: xii, 24). It is the first of these reasons--the requirement of representations--that is empirical; the second--that the representations must be symbolic--is challenged by connectionism. The converse claim, that physical symbol systems are capable of being intelligent physical systems, has been challenged by the non-computationalists of position (2b), above. One particularly strong form of the RTM is Fodor's ``language of thought'' theory (1975), which says that the mental representations form a language (sometimes called ``mentalese'') with a syntax (and, perhaps, a semantics). Fodor's theory of methodological solipsism (1980) holds that the syntax of the language of thought is all that cognitive science needs to deal with, i.e., that the input-output transducers--while important for understanding how information gets into and out of the mind--are irrelevant for understanding how the mind works.
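To make the notion of a symbol system concrete, the following sketch (in Python; the rule table, names, and encoding are illustrative assumptions, not drawn from Newell's or Pylyshyn's texts) implements a toy Turing-machine interpreter. The point is Newell's: the machine does nothing but rule-governed manipulation of uninterpreted symbols, yet, given a suitable rule table, such a device is universal.
\begin{verbatim}
# A minimal sketch of a symbol system in Newell's sense: a Turing-machine
# interpreter. All names here are illustrative, not from the sources cited.
# The particular machine below increments a binary numeral, purely by
# rule-governed manipulation of uninterpreted symbols.

def run_turing_machine(rules, tape, state, blank="_", halt="HALT"):
    """Run a Turing machine until it halts; return the final tape.

    `rules` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = tape.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Rules for binary increment: scan right to the end, then carry leftward.
rules = {
    ("scan", "0"): ("0", +1, "scan"),
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "done"),
    ("carry", "_"): ("1", -1, "done"),
    ("done", "0"): ("0", -1, "done"),
    ("done", "1"): ("1", -1, "done"),
    ("done", "_"): ("_", +1, "HALT"),
}

print(run_turing_machine(rules, "1011", "scan"))  # -> "1100"
\end{verbatim}
On this picture, the rule table corresponds to the program, the tape symbols to the representations, and the interpreter (realized in silicon or, on the PSSH, in neurons) to the physical implementation that animates them.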
(4) There is, perhaps, a fourth dichotomy among the symbolic computational cognitivists, between (a) those who are satisfied with symbolic algorithms whose input-output behavior is the same as human cognitive behavior and (b) those who are satisfied only with symbolic algorithms that are not only input-output equivalent to human cognitive behavior but also equivalent in all but the details of physical implementation, i.e., equivalent in terms of subroutines and abstract data types. A particularly strong form of (4b) also requires the algorithms to be equivalent to human cognitive behavior at the level of space and time complexity (cf. Pylyshyn 1985: xvi).
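The difference between (4a) and (4b) can be given a toy illustration (mine, not Pylyshyn's): two procedures that compute the same function, and so are equivalent in the weak sense of (4a), but that differ in subroutine structure and in time complexity, and so fail the stronger equivalences of (4b).
\begin{verbatim}
# A toy illustration (not from the cited sources) of the (4a)/(4b)
# distinction: two procedures with identical input-output behavior but
# different internal structure and different time complexity.

def fib_recursive(n: int) -> int:
    """Exponential-time: recomputes subproblems; deep call structure."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n: int) -> int:
    """Linear-time: a simple loop over two accumulators."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Weak (4a) equivalence holds: both compute the same function of n.
assert all(fib_recursive(n) == fib_iterative(n) for n in range(15))
# Strong (4b) equivalence fails: the two differ in subroutine structure
# (recursion vs. iteration); the strongest form fails too, since they
# differ in time complexity (exponential vs. linear).
\end{verbatim}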
According to the PSSH and the RTM, when a physical system--be it computer or human--executes a ``cognitive'' algorithm, the representations are brought to life, so to speak, and made to behave according to the rules of the symbol system; the symbol system becomes dynamic, rather than static. If cognition is representational and rule-based in this way--i.e., if (or, more conservatively, to the extent that) cognitive behavior consists of transformations of representations according to rules--then a computer that behaves according to (physical implementations of) these rules causally applied to (physical implementations of) these representations is behaving cognitively and is not merely simulating cognitive behavior (cf. Sokolowski 1988).
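What ``transformations of representations according to rules'' might look like can be sketched with a tiny forward-chaining system in the spirit of Newell and Simon's production systems (the rules, facts, and function names below are illustrative assumptions, not examples from the cited texts). The rule set and the facts are static symbol structures; only when the interpreter executes them do they become dynamic.
\begin{verbatim}
# A minimal sketch of rule-governed transformation of representations:
# a forward-chaining rule interpreter. Details are illustrative.

def forward_chain(facts, rules):
    """Repeatedly apply premises -> conclusion rules until nothing changes.

    `facts` is a set of symbolic assertions; `rules` is a list of
    (premises, conclusion) pairs, where premises is a set of assertions.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # the representation is transformed
                changed = True
    return facts

rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)"}, "can_fly(tweety)"),
]
print(forward_chain({"bird(tweety)"}, rules))
# -> {'bird(tweety)', 'has_wings(tweety)', 'can_fly(tweety)'}
\end{verbatim}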
The PSSH and the RTM offer an answer to Descartes's question of how mind and body can interact (namely, mind can be implemented in body), an answer that satisfies most computer scientists; nevertheless, they are not without their detractors. Of particular note are the objections of Terry Winograd, who did pioneering work in the symbolic paradigm of artificial intelligence. Winograd cites a biologist, Humberto Maturana, who straightforwardly denies the RTM: ``cognition is not based on the manipulation of mental models or representations of the world'' (Winograd 1981: 248). Instead, according to Winograd and Maturana, there are cognitive ``phenomena that for an observer can be described in terms of representation, but that can also be understood as the activity of a structure-determined system with no mechanism corresponding to a representation'' (Winograd 1981: 249). This view echoes the ``intentional stance'' theory of the philosopher Daniel Dennett (who is otherwise far more sympathetic to computational cognitivism). According to Dennett, it makes sense to treat certain complex systems (e.g., chess-playing computers) as if they had beliefs and acted intentionally even though there might not be anything in their structure that corresponded in any way to beliefs or intentions (Dennett 1978).
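The Winograd-Maturana point admits a toy illustration (my example, not one from Winograd, Maturana, or Dennett): a thermostat-like system whose structure contains nothing corresponding to a representation, yet whose behavior an observer can naturally describe in intentional, representational terms (``it believes the room is cold and wants it warmer'').
\begin{verbatim}
# A toy structure-determined system (an illustrative assumption, not an
# example from the cited texts): nothing in its structure corresponds to
# a belief or a goal, yet an observer may find it natural to say it
# "wants" the room at 20 degrees and "believes" it is currently cold.

def thermostat_step(temperature: float, setpoint: float = 20.0) -> str:
    # Pure stimulus-response: a single numeric comparison, with no stored
    # representation of goals or of the state of the world.
    return "heat_on" if temperature < setpoint else "heat_off"

for t in (15.0, 19.5, 20.0, 23.0):
    print(t, "->", thermostat_step(t))
\end{verbatim}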