The Chinese room is supposed to be a computer trying to be a brain (that understands Chinese). Searle argues that because of the fundamental nature of the computer, as presented by his Chinese room thought experiment, no computer could think.
There are two core conclusions in this argument, and one can get to the second from the first as follows:
- no computation could think,
- computers only compute,
- therefore no computer could think.
The error of present interest prima facie affects, or might affect, only the second conclusion. I want to discuss whether the Chinese room does accurately present (the relevant aspects of) the electronic digital computer. My claim is:
The Chinese room fails to reflect the fundamentals of today’s computers.
And further, that this failure seriously impacts the Chinese room argument, though perhaps not terminally. First, I’d like to describe the Chinese room and its operation, then today’s computers, then try to explain why the first does not equal the essence of the second.
The Chinese room
A man in a room receives, through a slot in the door, cards inscribed with Chinese ideograms. He knows no Chinese. He doesn’t even know that the marks on the cards are linguistic symbols. He has a rule book written in English, a language he understands. The rules tell him to manipulate the cards based on the shapes inscribed on them (the rule book might contain examples of the shapes, in which case the rules also contain Chinese ideograms, or it might describe the shapes, in which case the rules are wholly in English). The room also contains baskets of spare symbols, and the man can get cards inscribed with the mandated shapes from these baskets. The thought experiment assumes that he will always find the cards the rules require.
The rules also instruct the man to push cards inscribed with certain shapes out through the slot. Unknown to the man, the cards entering the room are sensible Chinese questions, and the cards he pushes out through the slot are sensible Chinese answers. To an outsider, the room appears to be intelligent, to understand Chinese, and to pass the Turing test with ease. But the man knows no Chinese and understands nothing about the shapes inscribed on the cards, and there is nothing else in the room that conceivably could. In fact, inside the room (inside the computer) there is no understanding of the questions or answers. This is for the fundamental and simple reason that symbols (shapes, mere marks) do not in themselves carry or indicate their meanings. All computers process are symbols, tokenised shapes; therefore computers will never think. This dire conclusion (for AI) covers any type of symbol that computers process, including the sort received from sensory apparatus.
The man is the computer’s CPU, the slot in the door is the input/output port, the cards are the tokenised symbols the computer processes, and the rule book is the program. In his descriptions of the Chinese room after 1980, Searle also says that the spare symbols are kept in baskets and that these baskets are databases. Databases can be ignored at present.
Today’s electronic computers
Today’s computers have input/output ports, programs, CPUs and databases. Data traveling through the wires inside the computer (wires typically called buses) has the form of binary clocked voltages. Data at rest inside a computer or a storage attachment usually has the form of binary magnetic orientations of small magnetic domains on an iron oxide surface, or binary semiconductor switch states.
There are transitions between the two binary voltage levels of such data in motion. For example, when a clocked voltage changes from the low value to the high value, the voltage does not change instantaneously. There is a slight delay, during which the voltage in the wire moves progressively from the low value to the high value. But the hardware is designed to ignore intermediate voltages. In other words, the machine is designed so that only a high voltage or a low voltage has causal effect; intermediate voltages have none. A similar idea applies to the high and low voltages themselves. They are not exactly constant and can vary slightly, so the hardware is designed to treat any voltage between, say, 0.3 and 0.6 volts as the low binary value, and any voltage between, say, 3.5 and 4.0 volts as the higher of the two binary values. The binary values are given names, typically “zero” and “one”, or “0” and “1”.
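To picture this design decision, here is a minimal sketch in Python. The voltage bands are the illustrative figures used above, not those of any real logic family; the point is only that two bands of voltage have causal effect and everything else has none.

```python
# Illustrative sketch only: map a sampled wire voltage to a binary value.
# The thresholds below are the example figures from the text, not a real standard.

def read_binary(voltage: float):
    """Return the binary value the hardware would react to, or None if the
    voltage falls in a region the machine is designed to ignore."""
    if 0.3 <= voltage <= 0.6:    # low band -> the value named "0"
        return 0
    if 3.5 <= voltage <= 4.0:    # high band -> the value named "1"
        return 1
    return None                  # mid-transition or out of range: no causal effect

print(read_binary(0.45))  # 0
print(read_binary(3.8))   # 1
print(read_binary(2.0))   # None (intermediate voltage, ignored by design)
```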
Sequences of tokenised binary values are received by the computer from attachments such as keyboards, and are sent to attachments such as screens and printers. These sequences are called data. CPUs manipulate contiguous groups of binary values, usually in multiples of 8 at a time. These groups can be stored inside the CPU in small semiconductor memory structures called registers, and are manipulated according to hardware operations called machine code operations. Machine code operations are caused by other groups of binary values held in a special place, the program memory; the machine, by its design, reacts to these “machine codes”, and its reactions manipulate the data, for example by moving it from one register to another.
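The following toy sketch, again in Python and not modelled on any real instruction set, illustrates the idea: hypothetical “machine codes” held in program memory cause the machine, purely by its design, to shuffle groups of binary values between registers.

```python
# Toy sketch of a register machine. The opcodes are made-up stand-ins for
# binary-encoded machine codes; the registers hold 8-bit groups of binary values.

registers = {"R0": 0b01000001, "R1": 0b00000000}

program = [
    ("MOVE", "R0", "R1"),   # a machine code: copy the contents of R0 into R1
    ("HALT",),              # a machine code: stop reacting
]

pc = 0  # program counter: which machine code the machine reacts to next
while True:
    op = program[pc]
    if op[0] == "MOVE":
        _, src, dst = op
        registers[dst] = registers[src]   # the designed reaction manipulates the data
        pc += 1
    elif op[0] == "HALT":
        break

print(registers)  # {'R0': 65, 'R1': 65}
```

Nothing in this machine interprets the bit patterns it moves around; it simply reacts, by design, to one group of binary values by manipulating another.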
Chinese room problem
To be clear, to the CRA, symbols are tokenised interpretable shapes. This is what the Chinese room is virtually all about – Chinese ideograms, tokenised shapes that have interpretations, or meanings. The crucial point is the interpretation. This is not embodied inside or carried by the token (if it were, we wouldn’t have to learn foreign languages – or our native tongue).
Searle is clear about this: the shape itself does not indicate its meaning. Hence, a system that receives only instances of the shapes will never understand what the shapes mean – because the only things it ever receives do not include meanings. And computers receive only the shapes. Ergo, computers will never think (access to meanings being necessary for thought).
So if computers process only symbols, and Searle holds that they do process only symbols, then AI’s quest for the thinking computer is futile and doomed to abject and total failure.
Do computers process symbols?
But do computers process symbols? If the answer is yes, then according to the second of Searle’s CRA conclusions, computers will never think. But if the answer is no, Searle’s second conclusion might still apply. This is because whatever arrives at the computer to be processed might still not carry meanings with it. If meanings are not inside those things that arrive to be processed (do not come with them by being internal) and are not piggybacking on the things received (do not come with them by being externally attached), then whatever the nature of the things processed, symbolic or not, the computer will never think. That’s the CRA extended to non-symbols.
This is what I think: first, computers don’t process symbols. The clocked voltages, magnetic domains and semiconductor switch states are not symbols. And second, these things do not have meanings.
So it is not that the things to be processed have meanings which simply fail to arrive along with them. The things to be processed have no interpretations at all. They have never gotten any. The values of the properties of the things to be processed have never had meanings assigned to them (or acquired them in any other way). The things computers process are, literally, meaningless. I’ll try to explain this.
The names of a computer’s internal binary values
“Symbol” means a shape that has an interpretation, or meaning. It is the shape that has the meaning, and a shape is a value of a certain property, namely the property of shape. Hence CAT has a certain shape, and the respective property is the property of shape. In fact, shapes composed of letters of an alphabet can be different shapes yet be the same letters. The symbol in this case is somewhat abstract and is really a set of shapes such as CAT, cat, and so on through various typefaces. This complexity is rightly ignored when talking about the Chinese room argument.
“Binary token” means a token realized in a given substrate and processed by a system that will react to (causally respond to) only one or the other of two possible values of a certain property of a unit of that substrate. So the idea of a binary token is relative to the causality of a given system or type of system.
The binary tokens that computers process have names: “zero”, “one”, “0”, “1”. These names are themselves symbols and have interpretations. But what these symbols name or refer to is another matter. Just because a name is composed of letters whose combined shapes have an interpretation, it doesn’t follow that the shapes of what the name designates also have a meaning in the sense that symbols in a book (or in the Chinese room) do, or that values of some other property of what the name designates have an interpretation.
Also, the names “0” and “1” as used to refer to what computers process are somewhat ambiguous, since they refer to values of different properties of at least three types of substrates: clocked voltage levels, semiconductor switch states and magnetic domains. A token of a given binary value (say the one called “0”) can be realized in voltage level, switch state or magnetic orientation. These different substrates have different respective properties.
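One crude way to display this ambiguity is to list, for the single name “0”, the property and value it picks out in each substrate. The sketch below is merely illustrative; the particular property values are assumptions for the example.

```python
# Illustrative only: the one name "0" refers to values of different properties
# in different substrates.

realizations_of_zero = {
    "clocked voltage level":      {"property": "voltage",              "value": "low band (roughly 0.3-0.6 V in the example above)"},
    "semiconductor switch state": {"property": "conduction state",     "value": "one of the two switch states"},
    "magnetic domain":            {"property": "magnetic orientation", "value": "one of two polarities"},
}

for substrate, realization in realizations_of_zero.items():
    print(f'"0" as {substrate}: {realization["property"]} = {realization["value"]}')
```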
It seems strange and probably wrong to say, for instance, that clocked voltages have shapes. A trace on an oscilloscope screen of a square wave logic pulse has a shape, but the trace is not a clocked voltage; it is a series of activated pixels, or the causal result of a fluctuating electron beam hitting a phosphorescent screen in a raster pattern. Magnetic domains, however, do have some sort of shape, but this shape is independent of the polarization of the iron oxide.
The relation has-a-meaning
We can discuss meaning from the perspective of a 2-term relationship. The terms are the things related. On this way of thinking, a certain particular symbol such as what follows: CAT, is a token that bears a certain value of the property of shape. This token comprises certain atomic tokens of the shapes C, A and T.
The relation has-a-meaning is a relationship between the shape of the particular tokenised symbol and a meaning. A shape is a value of the property of shape, so the relation has-a-meaning is a relationship between the value of a property and a meaning. This relation can be expressed x-means-y, where x is a property value and y is a meaning. (Much more can be said about this, but for the moment, the idea of a simple 2-term relation is probably adequate.)
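As a rough illustration, the relation can be pictured as nothing more than a lookup from a shape (a property value) to a meaning. The entries below are made up for the example; a shape absent from the relation simply has no interpretation.

```python
# Illustrative sketch of the 2-term relation x-means-y as a plain mapping.
# The entries are examples, not a claim about any particular lexicon.

means = {
    "CAT": "the domestic feline",   # a shape that has been given an interpretation
    "猫":  "the domestic feline",   # a Chinese ideogram with the same meaning
}

def meaning_of(shape):
    # has-a-meaning holds only if some pair (shape, meaning) is in the relation
    return means.get(shape, "no interpretation assigned")

print(meaning_of("CAT"))         # the domestic feline
print(meaning_of("0b01000001"))  # no interpretation assigned: a bare binary token
```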
Do the things computers process have an interpretation?
Do the things computers process have an interpretation, or meaning? First, what does it mean for a symbol to have a meaning? For an answer to this question we can take some guidance from Searle’s Chinese room. What does it mean for a symbol in the Chinese room to have a meaning? Well, the data symbols in the room are Chinese ideograms. How does such a symbol get a meaning?
The shape of the symbol is perceived by a human, a form of learning takes place, and some brain structure results: the interpretation. When the shape is later perceived, the perception process activates this brain structure, and this is called understanding the meaning of the symbol. Thus on the one hand there is the shape, and on the other hand there is a brain structure, the interpretation of the shape. In fact, on one hand there is the neural “representation” of the shape, and on the other hand there is another brain structure which we say is the interpretation of the shape, the two brain structures being connected through a process we call learning.
Well, if that’s what knowing the meaning of a symbol amounts to, then there seem to be two main questions about the things computers process:
- Do the tokenised binary values processed by computers have meanings to a human?
- Do the tokenised binary values processed by a computer have meanings to the machine?
To the human: the short answer is extremely short: no. Human sensory apparatus can’t detect the things computers process. No human sense detects clocked voltage levels, orientations of magnetic domains or semiconductor switch states. Humans can detect symbols and, once they have detected them, understand or learn the meanings of their shapes. But they can’t detect what computers process, so they can’t learn any meanings for the values of their properties.
To the machine: Prima facie this seems a very strange question. The human-equivalent question is: do humans know the meanings of the neural pulses propagating around inside their brains? Why should understanding the meanings of neural pulses have anything to do with understanding the meanings of words in a book? Humans don’t interpret the pulses pulsing around their brains in order to know Chinese. Why should a computer have to interpret what’s pulsing along its data wires in order to understand Chinese?
Humans understand the meanings of external items. The Chinese ideograms in the Chinese room are tokenised shapes that are items external to a human brain. Why would this process of understanding a meaning need to apply to items internal to the human – or computer – brain? To suppose I need to understand what my neural pulses mean in order to understand Chinese seems like a category mistake.
Conclusion
Does the Chinese room reflect the essential character of the computer? No. The Chinese room processes symbols (being, as Searle explains, tokens whose shapes have meanings) but the things electronic computers process are literally meaningless.
This is not a problem but rather a clarification. Humans don’t understand Chinese by interpreting their neural pulses (which idea in any case suggests the dreaded homunculus). The idea that the Chinese room needs to understand Chinese by interpreting the things it processes is non-human-like. Since computers can’t interpret what they process (since these things have no meanings), whether a computer could think is still an open question.
associative-ai.com