Chinese Room Argument

The Chinese room argument is a refutation of strong artificial intelligence. "Strong AI" is defined as the view that an appropriately programmed digital computer with the right inputs and outputs, one that satisfies the Turing test, would necessarily have a mind. The idea of Strong AI is that the implemented program by itself is constitutive of having a mind. "Weak AI" is defined as the view that the computer plays the same role in studying cognition as it does in any other discipline. It is a useful device for simulating and therefore studying mental processes, but the programmed computer does not automatically guarantee the presence of mental states in the computer. Weak AI is not criticized by the Chinese room argument.

The argument proceeds by the following thought experiment. Imagine a native English speaker, let's say a man, who knows no Chinese, locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols that are correct answers to the questions (the output). The program enables the person in the room to pass the Turing test for understanding Chinese, but he does not understand a word of Chinese.
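
To make vivid what purely formal symbol manipulation amounts to, here is a minimal, hypothetical sketch of such a rulebook as a lookup table; the particular symbols and replies are invented for illustration, and nothing in the code represents what any symbol means.

```python
# Hypothetical sketch of a purely syntactic "rulebook": inputs and outputs are
# paired by their shapes alone; no meaning is represented anywhere in the program.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # matched and emitted as uninterpreted shapes
    "今天是星期几？": "今天是星期三。",
}

def answer(symbols: str) -> str:
    """Return the output symbols that the rulebook pairs with the input symbols."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # stock fallback, equally uninterpreted

print(answer("你好吗？"))  # a sensible-looking reply is produced without any understanding
```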

The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese, then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.

The larger structure of the argument can be stated as a derivation from three premises.

  1. Implemented programs are by definition purely formal or syntactical. (An implemented program, as carried out by the man in the Chinese room, for example, is defined purely in terms of formal or syntactical symbol manipulations. The notion "same implemented program" specifies an equivalence class defined purely in terms of syntactical manipulations, independent of the physics of their implementation.)
  2. Minds have mental or semantic contents. (For example, in order to think or understand a language you have to have more than just the syntax, you have to associate some meaning, some thought content, with the words or signs.)
  3. Syntax is not by itself sufficient for, nor constitutive of, semantics. (The purely formal, syntactically defined symbol manipulations don't by themselves guarantee the presence of any thought content going along with them.)

Conclusion: Implemented programs are not constitutive of minds. Strong AI is false.
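
The derivation can be regimented a little more formally. The following is a hedged sketch in the Lean proof language, under one reading of "sufficient for" as implication over arbitrary systems; the predicate names and the framing of premise 3 as an existential witness (the Chinese room itself) are illustrative assumptions, not Searle's own notation.

```lean
-- Hedged reconstruction of the derivation; all names are illustrative.
variable {System : Type}
variable (rightSyntax runsProgram hasSemantics isMind : System → Prop)

theorem programs_not_sufficient_for_minds
    -- Premise 1: the program is fixed purely by its syntax, so any system
    -- carrying out the right symbol manipulations counts as implementing it.
    (p1 : ∀ s, rightSyntax s → runsProgram s)
    -- Premise 2: minds have semantic (mental) content.
    (p2 : ∀ s, isMind s → hasSemantics s)
    -- Premise 3: syntax does not suffice for semantics; the Chinese room is
    -- the witness: it has the right syntax but no semantic content.
    (p3 : ∃ s, rightSyntax s ∧ ¬ hasSemantics s) :
    -- Conclusion: implementing the program is not sufficient for a mind.
    ¬ ∀ s, runsProgram s → isMind s :=
  fun strongAI =>
    match p3 with
    | ⟨room, hSyntax, hNoSemantics⟩ =>
      hNoSemantics (p2 room (strongAI room (p1 room hSyntax)))
```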

Why does the man in the Chinese room not understand Chinese even though he can pass the Turing test for understanding Chinese? The answer is that he has only the formal syntax of the program and not the actual mental content or semantic content that is associated with the words of a language when a speaker understands that language. You can see this by contrasting the man in the Chinese room with the same man answering questions put to him in his native English. In both cases he passes the Turing test, but from his point of view there is a big difference. He understands the English and not the Chinese. In the Chinese case he is acting as a digital computer. In the English case he is acting as a normal competent speaker of English. This shows that the Turing test fails to distinguish real mental capacities from simulations of those capacities. Simulation is not duplication, but the Turing test cannot detect the difference.

There have been a number of attempts to answer this argument, all of them, in the view of this author, unsuccessful. Perhaps the most common is the systems reply: "While the man in the Chinese room does not understand Chinese, he is not the whole system. He is but the central processing unit, a simple cog in the large mechanism that includes room, books, etc. It is the whole room, the whole system, that understands Chinese, not the man."

The answer to the systems reply is that the man has no way to get from the SYNTAX to the SEMANTICS, but neither does the whole room. The whole room also has no way of attaching any thought content or mental content to the formal symbols. You can see this by imagining that the man internalizes the whole room. He memorizes the rulebook and the data base, he does all the calculations in his head, and he works outdoors. All the same, neither the man nor any subsystem in him has any way of attaching any meaning to the formal symbols.

The Chinese room has been widely misunderstood as attempting to show a lot of things it does not show.

  1. The Chinese room does not show that "machines can't think." On the contrary, the brain is a machine and brains can think.
  2. The Chinese room does not show that "computers can't think." On the contrary, something can be a computer and can think. If a computer is any machine capable of carrying out a computation, then all normal human beings are computers and they think. The Chinese room shows that COMPUTATION, as defined by Alan TURING and others as formal symbol manipulation, is not by itself constitutive of thinking.
  3. The Chinese room does not show that only brains can think. We know that thinking is caused by neurobiological processes in the brain, but there is no logical obstacle to building a machine that could duplicate the causal powers of the brain to produce thought processes. The point, however, is that any such machine would have to be able to duplicate the specific causal powers of the brain to produce the biological process of thinking. The mere shuffling of formal symbols is not sufficient to guarantee these causal powers, as the Chinese room shows.

See also COMPUTATIONAL THEORY OF MIND; FUNCTIONALISM; INTENTIONALITY; MENTAL REPRESENTATION

-- John R. Searle

Further Readings

Searle, J. R. (1980). Minds, brains and programs. Behavioral and Brain Sciences, vol. 3 (together with 27 peer commentaries and author's reply).