John Searle’s Chinese room passes the Turing test. People outside the room think there’s a human inside who understands Chinese. But, Searle explains, the room actually contains, in analogical form, all the essential elements of an electronic digital computer programmed (according to Strong AI) to understand written Chinese. Yet the monolingual English-speaking man in the room (the computer’s CPU) understands no Chinese. Cards inscribed with Chinese symbols fall into the room through a slot in the door. These are sensible Chinese questions. The rule book (the program) deals only with their shapes, not their meanings. The book instructs the man to find certain Chinese characters among the spares in the room, then to push them out through the slot. Unknown to the man, these are sensible Chinese answers. Neither the man nor the room understands the meanings of the shapes, since all they have is the shapes and a book of rules for manipulating instances of the shapes. From here Searle goes on to argue that computers will never understand language or the world.
What seems to me a fundamental mistake is that Searle bases his argument on comparing a computer receiving Chinese symbols with a human receiving Chinese symbols. Then, from the fact that the computer doesn’t understand the meanings of the symbols, Searle argues that computers could never understand anything.
The Chinese room needs to learn Chinese
Well, humans can’t understand the meanings of the symbols either. Humans first have to learn Chinese. Why doesn’t the room try to learn Chinese? Without this, Searle’s argument is pointless. Learning Chinese entails building memory structure, and there is no structure in the Chinese room because there is nothing in the room to build it out of. The room’s ontology needs structural elements added to it, so that it contains atoms of structure as well as symbols (the content of structure). Then the program can instruct the man to build memory structure. Digital computers can easily build memory structure, and often do. With structural elements added, the Chinese room can at least try to learn Chinese.

And by the way, the Chinese Room Argument (CRA) is unsound, because Searle’s premiss “… a digital computer is a syntactical machine. It manipulates symbols and does nothing else” (John Searle, “What Your Computer Can’t Know”, The New York Review of Books, October 9, 2014) is false. It can also be well argued that some structural elements are semantic.
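To make “building memory structure” concrete, here is a minimal sketch of my own devising (an illustration, not Searle’s setup and not a claim about how language is actually learned): a program whose rule book does more than match shapes. It grows a memory of nodes and links from the symbol sequences it receives. The nodes and links are the structural elements; the symbols are the content stored in them.

```python
class Node:
    """A structural element: holds one symbol plus links to other nodes."""
    def __init__(self, symbol):
        self.symbol = symbol
        self.links = {}  # neighbouring symbol -> co-occurrence count

class Memory:
    """Memory structure grown incrementally from observed symbol sequences."""
    def __init__(self):
        self.nodes = {}  # symbol -> Node

    def observe(self, sequence):
        # Create a node for each new symbol and strengthen the link
        # between each adjacent pair of symbols in the sequence.
        for a, b in zip(sequence, sequence[1:]):
            for s in (a, b):
                if s not in self.nodes:
                    self.nodes[s] = Node(s)
            self.nodes[a].links[b] = self.nodes[a].links.get(b, 0) + 1

    def predict_next(self, symbol):
        # Answer using the built structure, not just the symbol's shape:
        # return the most strongly linked follower, if any.
        node = self.nodes.get(symbol)
        if node and node.links:
            return max(node.links, key=node.links.get)
        return None

mem = Memory()
mem.observe("你好吗")   # hypothetical input sequences
mem.observe("你好")
print(mem.predict_next("你"))  # prints 好, the strongest link from 你
```

The point of the sketch is only that a digital computer is not confined to shape-matching: each `observe` call changes the machine’s internal structure, and later behaviour depends on that accumulated structure rather than on the current input alone.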