The Chinese room argument (CRA) is a set of related arguments and a thought experiment first published by philosopher John Searle in 1980[1] and now widely regarded as a strong theoretical challenge to the original goal of the research field of Artificial Intelligence (AI): to create a human-like general intelligence in a machine.
Overview of ideas and constituent arguments
Computation. In essence, computation manipulates symbols without reference to their meanings. This type of manipulation is called syntactic, or formal: the process reacts only to the shapes of the symbols.
Computer (stored-program electronic digital computer). Because computation is syntactic by nature, computers internally process symbols by reacting only to their shapes. A program identifies a shape and then performs whichever manipulation is the prescribed reaction to that shape.
Manipulation. “Manipulate” here refers to four simple hardware operations on symbols: Identify, Create, Destroy, and Move, plus compounds, or sequences, of these. (At the machine-code level, most computers provide many variants, or sub-varieties, of these four simple operations.)
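As a minimal sketch, offered only as an illustration (the tape, the symbol shapes, and the function names below are assumptions of this example, not anything from Searle's paper), the four operations can be rendered in Python. Note that nothing in the code refers to what any symbol means:

```python
# Illustrative sketch of the four primitive symbol operations,
# acting on a tape of symbols identified only by shape.

tape = ["A", "B", "A"]   # symbols, distinguished only by shape
head = 0                 # current position on the tape

def identify():
    """Identify: report the shape of the symbol under the head."""
    return tape[head]

def create(shape):
    """Create: write a symbol of the given shape at the head."""
    tape[head] = shape

def destroy():
    """Destroy: erase the symbol under the head (write a blank)."""
    tape[head] = " "

def move(step):
    """Move: shift the head left (-1) or right (+1)."""
    global head
    head += step

# A compound manipulation built from the four primitives:
# replace every "A" on the tape with "B".
while head < len(tape):
    if identify() == "A":   # reaction to shape alone
        destroy()
        create("B")
    move(+1)

print(tape)  # -> ['B', 'B', 'B']
```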
Example program. A typical computer program will contain instructions of the form: if the input symbol is “A”, then the output symbol is “B”.
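A hedged illustration (the rule table and the function name are invented for this example): such an instruction is a pure shape-to-shape lookup, and it could be followed without knowing what any of the symbols stands for:

```python
# A shape-to-shape lookup table. The mapping pairs shapes with shapes;
# no meaning of "A" or "B" is represented anywhere.
RULES = {"A": "B", "C": "D"}

def respond(input_symbol: str) -> str:
    # React only to the shape of the input symbol.
    return RULES.get(input_symbol, "?")  # "?" for unrecognized shapes

print(respond("A"))  # -> "B"
```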
Syntactic compared to semantic. Computers perform purely syntactic symbol manipulation because computation is purely syntactic. Syntactic manipulation is manipulation without reference to the meanings (the semantics) of the symbols.
Inputs. All a computer gets from outside is symbols; the computer is, by definition, a symbol-processing device. The meanings do not come with the symbols, so the machine never gets the meanings.
Turing test. Computers are forever prisoners in a universe of syntax. They might pass the Turing test of machine intelligence: the human programmer might be smart enough to write a convincing program. But the machines will never understand what the input and output shapes refer to, or denote.
Sensory symbols. For symbols received from sensors, computers never understand anything about the outside world, the environment. In this case, the meaning of a symbol is the external item that the sensor detects, and that causes, via the internals of the sensor, the emission of sensory symbols. The computer receives only the symbols, never their external causes.
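An illustrative sketch under invented assumptions (the sensor, the token names, and the fan rule are all hypothetical): the sensor's internals react to an external cause and emit a symbol, and the downstream program receives only the symbol, never the cause itself:

```python
def sensor() -> str:
    # Inside the sensor: some physical interaction with the environment
    # (faked here) causes a symbol to be emitted.
    detected_heat = True  # stand-in for a real physical detection
    return "T_HIGH" if detected_heat else "T_LOW"

def program(symbol: str) -> str:
    # The program sees only the shape "T_HIGH"; whatever in the world
    # produced that shape is not available here.
    return "FAN_ON" if symbol == "T_HIGH" else "FAN_OFF"

print(program(sensor()))  # -> "FAN_ON"
```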
Linguistic symbols. For linguistic symbols, computers will never understand what the symbols refer to. A Chinese ideogram might refer to green tea. The computer can identify the shape of this Chinese character (because the program contains an example of it, or, in the Chinese room, a description of it), but what the shape refers to is forever hidden from the machine, which has only the shape.
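A hedged sketch of that identification step (the character and the test are chosen for illustration): the program stores an example of the shape and compares encoded bytes, a purely syntactic test in which the referent appears nowhere:

```python
# Identifying a Chinese character purely by shape. The referent of the
# character (tea) appears nowhere in the computation.

TEMPLATE = "茶"  # an example of the shape, stored in the program

def matches(symbol: str) -> bool:
    # Comparison of byte sequences: a purely syntactic test.
    return symbol.encode("utf-8") == TEMPLATE.encode("utf-8")

print(matches("茶"))  # -> True  (shapes match)
print(matches("木"))  # -> False (shapes differ)
```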
Minds/brains. Minds/brains necessarily embody semantic content: their states are about something. Minds have intentionality. The shape of a symbol, in itself, is intrinsically not about anything.
Conclusion
Conclusion. Since computers are fundamentally only syntactic, and since there is no way to wring semantics out of syntax, computers will never understand anything. They will never think. Computers will never have minds. Computers will never have human-like intelligence.
Computationalism. Turing’s computational theory of intelligence, the thesis that the mind is an executing computation (now, with minor additions, called “computationalism”), is false. AI will therefore never achieve its original goal of a computer with a human-like general intelligence.
References
1. John Searle (1980), “Minds, Brains, and Programs”, Behavioral and Brain Sciences 3 (3): 417–457.