Philosopher John Searle’s 1980 argument, known as the Chinese room [1] or the Chinese room argument (CRA), attacks the Turing test and computationalism, the main theoretical basis of the research field of Artificial Intelligence (AI).
Possible rebuttal or partial rebuttal: Why is the Chinese room not a computer?
To me, the Chinese room argument, including its thought experiment, is critical to resolving current theoretical problems with computationalism. What I want to argue first is that the room grossly misrepresents the digital computer, for two reasons:
- The room processes symbols (tokenized shapes that have meanings), but computers do not process symbols. Symbols are merely encrustations on the exposed surfaces of computer attachments, put there to facilitate the human use of the machine. For example, to its human user a keyboard looks like a way to send symbols into the machine (a false idea). Intrinsically, to the computer itself, a keyboard is a rectangle of digital skin containing a rather coarse array of touch sensors (see the first sketch after this list).
- Adopting, for the moment, the myth that computers process symbols (because the myth makes it easier to write about computers): the only data inside the Chinese room are the input-output symbols (Chinese ideograms) plus the program. Even though Searle talks about “data bases” in the room, the man has no way to relate symbols to one another. Computers relate symbols constantly, using, for example, pointers (see the second sketch after this list). The room’s ontology is grossly deficient: it needs relational entities, for example pieces of string, as well as symbols. A piece of string is not a symbol, and all pieces of string have the same intrinsic qualities.
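To make the keyboard point concrete, here is a minimal C sketch. It is entirely hypothetical (the 4x4 layout and the `keymap` table are my own illustration, not any real driver): what the machine receives from the hardware is a row/column sensor coordinate, and the “symbol” appears only when a lookup table, added purely for human convenience, is consulted.

```c
#include <stdio.h>

/* Hypothetical 4x4 keypad: what the machine receives from the
   hardware is a row/column sensor coordinate, not a symbol. */
#define ROWS 4
#define COLS 4

/* The symbol is an encrustation: a lookup table added purely for
   the human's benefit, not intrinsic to the sensor event itself. */
static const char keymap[ROWS][COLS] = {
    { '7', '8', '9', '/' },
    { '4', '5', '6', '*' },
    { '1', '2', '3', '-' },
    { '0', '.', '=', '+' },
};

int main(void) {
    int row = 1, col = 2; /* a raw sensor event: a touch at (1, 2) */

    /* Intrinsically, the machine holds only two small integers... */
    printf("sensor event: row=%d col=%d\n", row, col);

    /* ...the "symbol" exists only after the human-oriented mapping. */
    printf("human-facing symbol: %c\n", keymap[row][col]);
    return 0;
}
```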
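And to make the pointer point concrete, a second minimal C sketch (again my own illustration; the `symbol` struct is hypothetical): two symbols are related by a bare pointer, an entity that, like a piece of string, is not itself a symbol and has the same intrinsic qualities as every other pointer.

```c
#include <stdio.h>

/* A "symbol": a tokenized shape. The related_to field is the
   "piece of string": a bare pointer that carries no symbolic
   content of its own. */
struct symbol {
    const char *shape;         /* the ideogram as stored text */
    struct symbol *related_to; /* a relational entity, not a symbol */
};

int main(void) {
    /* Two symbols with, as yet, no connection between them. */
    struct symbol horse  = { "馬", NULL };
    struct symbol animal = { "動物", NULL };

    /* Relate them with a pointer. Every pointer has the same
       intrinsic qualities (it is just an address), exactly like
       the identical pieces of string in the analogy. */
    horse.related_to = &animal;

    printf("%s is related to %s\n", horse.shape, horse.related_to->shape);
    return 0;
}
```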
Then I want to argue that the CRA thought experiment bizarrely contradicts the human case. Searle argues that the man in the room knows no Chinese and so cannot understand the meanings of the shapes that drop through the slot in the door. Setting aside the fact that the room conflates external and internal objects (the human equivalent would be Chinese ideograms written on cards and pushed into a human’s brain through a hole in the head, with no perception involved), if I am going to understand Chinese ideograms, I will first need to learn Chinese. Why doesn’t the Chinese room first learn Chinese?
Searle would undoubtedly say that learning Chinese is beside the point: the Chinese room argument shows that no matter what enters the computer as input data (whether Chinese ideograms themselves or the output of perceptual apparatus), the computer could never learn anything. But this response is founded on the grossly deficient ontology of Searle’s room, an ontology that omits the relational object type (instantiated, for example, as pieces of string). It might be true that, given Searle’s defective and insufficient ontology of symbol-man-program-basket, the room will never learn anything. But it does not follow that this is also true for an ontology that includes relational objects.
The room’s ontology is defective; computers have more object types. Searle is prejudiced by the myopia of linguistics and computationalism and fails to understand the true nature of the computer. The Chinese room does not accurately represent the fundamentals of today’s computers. The CRA is founded on a fundamentally defective conception of the machine, and its conclusion inherits this failure.
- [1] John R. Searle (1980), “Minds, Brains, and Programs”, Behavioral and Brain Sciences, vol. 3, pp. 417–457.