More from philosophy.stackexchange.com. John Forkosh commented on my question, “Does it make sense to define a computer as a symbol-manipulating device?”
My response to his comment was:
Thanks, John. When you say:
1. SYMBOLS. “…for concreteness, let’s please do away with this unnecessarily vague “voltage level” terminology, which you’ve used here and in preceding comments above. See, e.g., https://en.wikipedia.org/wiki/Bit#Physical_representation for the correspondence between bits and voltage levels. We’re talking about Searle’s “symbols” (sequences of bits), regardless of their physical representation, which simply happens to be voltage levels in electronic digital computers.”
I disagree about bits. To talk of a sequence of bits is to use a computational abstraction. I see the physical representation itself as the key to understanding the semantics of the machine (in Searle’s sense of semantics – i.e., intentionality). You could take this further and say that computation presupposes an extrinsic semantics (and that the concept of computation needs to be abandoned before trying to understand intrinsic semantics).
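To make this concrete, here is a toy sketch (the threshold voltages are invented for illustration, not any real logic family’s specification). The point it illustrates is the one above: the “bit” only comes into existence once a reading convention is imposed on a continuous physical quantity; the voltage itself carries no bit.

```python
# Illustrative sketch only: a "bit" is a reading convention imposed on a
# physical quantity, not something the voltage has by itself.
# The threshold values below are hypothetical.

V_LOW_MAX = 0.8    # volts: anything at or below this we *choose* to read as 0
V_HIGH_MIN = 2.0   # volts: anything at or above this we *choose* to read as 1

def read_bit(voltage: float) -> int | None:
    """Apply the convention: the abstraction 'bit' exists only in this mapping."""
    if voltage <= V_LOW_MAX:
        return 0
    if voltage >= V_HIGH_MIN:
        return 1
    return None  # undefined region: the physics is there, the bit is not

# The same physical trace, under two descriptions:
trace = [0.2, 3.1, 3.3, 0.4]          # what the hardware "has": voltages
bits = [read_bit(v) for v in trace]   # what we read into it: [0, 1, 1, 0]
print(bits)
```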
It seems important to take what John Searle does as the CPU in the Chinese room, and then ask: what is the exact equivalent in an actual electronic digital computer? Not in an abstract machine, not in a Turing machine, but in the actual piece of hardware on a desk.
In the Chinese room, cards inscribed with Chinese ideograms drop through the slot in the door. Unknown to Searle, these are sensible Chinese questions (he knows no Chinese). He perceives the shapes inscribed on the cards and manipulates the cards on the basis of those shapes (presumably the rule book – the program – contains examples of the shapes, though Searle also says the rules describe the shapes). Searle reacts to the shape. But unknown to Searle, a meaning has been assigned to the shape by people outside the room.
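A minimal sketch can make the purely syntactic character of the rule book vivid (the shapes and rules here are invented for illustration). Everything in it matches shapes and emits shapes; nothing in it represents, stores, or consults a meaning.

```python
# Minimal sketch of Searle-in-the-room as pure shape manipulation.
# The "shapes" are opaque tokens; the rule book maps sequences of input
# shapes to sequences of output shapes. (Shapes and rules are invented.)

RULE_BOOK = {
    ("shape_A", "shape_B"): ("shape_C",),   # "if you see A then B, pass out C"
    ("shape_D",): ("shape_E", "shape_A"),
}

def manipulate(cards_through_slot: tuple[str, ...]) -> tuple[str, ...]:
    """Match the incoming shapes against the rule book; return output shapes."""
    return RULE_BOOK.get(cards_through_slot, ())

print(manipulate(("shape_A", "shape_B")))   # ('shape_C',)
```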
The way external people assign a meaning to a shape is to first perceive the shape and then go through a mental process of assigning a meaning to it (learning the meaning; there is more than one way to do this). This learning can be regarded as creating instances of a 2-term relation: one term is the shape, the other the meaning.
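The 2-term relation could be pictured as a set of (shape, meaning) pairs, something like the following sketch (the entries are invented for illustration). The crucial point is that this relation exists only outside the room: the rule book above never consults it.

```python
# Sketch of the external observers' 2-term relation: instances of
# (shape, meaning). Entries are invented for illustration.
# This table lives entirely outside the room.

SHAPE_MEANING_RELATION = {
    ("shape_A", "the Chinese word for 'rice'"),
    ("shape_B", "the Chinese word for 'where'"),
    ("shape_C", "the Chinese word for 'kitchen'"),
}

def meanings_of(shape: str) -> list[str]:
    """What a shape means -- answerable only by someone holding the relation."""
    return [meaning for s, meaning in SHAPE_MEANING_RELATION if s == shape]

print(meanings_of("shape_A"))
```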
So, what’s the exact equivalent in the electronic digital computer, if the Chinese room accurately reflects the essentials of the computer? The CPU receives clocked voltage levels. Unknown to the CPU, the clocked voltage levels have been assigned meanings by people outside the computer.
Well, of course this is ridiculous. No such meanings have been assigned, nor could they ever be. External people can perceive shapes and assign meanings to them (thus creating instances of the 2-term relation), but humans are biologically incapable of perceiving clocked voltage levels, and so cannot assign meanings to them. They cannot create instances of the 2-term relation whose first term would be the clocked voltage level.
Hence, the Chinese room does not – semantically speaking – accurately reflect what happens with computers. Semantics is the whole point of the CRA. Searle has failed to properly understand computer processing from the semantic perspective.
This, above, is just a starting point for a detailed comparative examination of the Chinese room versus electronic computers. (I think various other things are wrong with the Chinese room, too.)
2. MEANING. You say “As for Searle and his Chinese Room conclusions, you’d need to compare and contrast “meaning” with respect to computers, versus “meaning” with respect to consciousness. But only the former is well-enough-defined for any rigorous comparison.”
The history of AI has also been a history of re-defining mental terms to make it seem as though computers have mental properties (when they don’t, at least not when running the proffered programs). Minsky was one of the masters of the fine art of academic re-definition in order to get students and funding. In his incredibly influential early book, Semantic Information Processing (he and his graduate students were contributors), the programs he presented have zero semantic content. He even (with wonderful spin) indicates this: “…one cannot help being astonished at how far they [the programs in the book] did get with their feeble semantic endowment.” They actually had zero semantic endowment.
I agree that a severe problem is that the mind is not understood: the higher-level functions of the brain are not understood. Maybe re-defining mental terms using Computer Science terminology (so that the re-defined concepts are capable of being realized in a computer) seems the only option. But to rebut the CRA (which is the goal of really carefully examining the Chinese room), the attempted rebuttal needs to use the concepts Searle uses in his arguments and in his descriptions of the room. Appealing to a definition of meaning that Searle does not use is inappropriate. At the very least, what is needed is a convincing translation of his sense of meaning into a Computer Science one (which would be a reduction of meaning in Searle’s sense to meaning in the Computer Science sense).
3. DEFINITION OF COMPUTER. You say, “Whether or not his argument’s conclusive is maybe debatable, but his definition of “computer” is entirely adequate.”
Searle defines the computer as a universal Turing machine. However, a Turing machine processes things that have extrinsic meanings (0, 1, and the various shapes reacted to by the universal machine as described in Turing’s 1936 paper), but electronic computers don’t. The things electronic computers process don’t have any semantics. That’s the key point. Turing machines are said to be purely computational entities, and computation presupposes an extrinsic semantics. Electronic computers, processing as they do things that lack an extrinsic semantics, are hence not computational. If this (really radical) view is adopted, then it probably has relevance to the validity or soundness of the Chinese room argument.
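For concreteness, a Turing machine’s rules are keyed entirely to the tokens on the tape; any meaning those tokens carry is assigned from outside. A minimal sketch (the machine and its interpretation are my own illustrative choices, not Searle’s or Turing’s example) makes this plain:

```python
# Minimal Turing-machine sketch: a unary increment machine.
# The transition table is keyed purely to the tokens '1' and '_' (blank);
# any meaning of those tokens (e.g. "this tape encodes the number 3")
# is assigned by us, from outside the machine.

# (state, token) -> (token_to_write, head_move, next_state)
TABLE = {
    ("scan", "1"): ("1", +1, "scan"),    # skip over the 1s
    ("scan", "_"): ("1", 0, "halt"),     # append one more 1, then halt
}

def run(tape: list[str]) -> list[str]:
    state, head = "scan", 0
    while state != "halt":
        write, move, state = TABLE[(state, tape[head])]
        tape[head] = write
        head += move
        if head == len(tape):
            tape.append("_")   # extend the tape with blanks as needed
    return tape

print(run(list("111_")))   # ['1', '1', '1', '1'] -- "3 + 1 = 4", but only to us
```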
4. SYMBOL MANIPULATION. You say: “Any further argument would have to discuss the ultimate capabilities of “symbol manipulation” — just how far can that take you? And that’s indeed somewhat of an open question.”
I claim that electronic computers don’t process symbols in Searle’s sense of “symbol”. If this is right, then the idea of symbol manipulation (in Searle’s sense of symbol manipulation, which is the Turing machine sense) is inadequate for fully understanding what computers do and could do.
From the start of the Computer Age, electronic computers have been understood using the concepts of computation (hence the name of the device). But what if these concepts are not adequate when it comes to trying to understand how an electronic computer (so named) could think? What if thinking is fundamentally non-computational? And what if, when the electronic computer is understood with different concepts, it becomes clear how a computer could perform the needed non-computational operations of intelligence?
In asking the question “Does it make sense to define a computer as a symbol-manipulating device?”, I was asking whether Searle was trying to force down our throats the (false) idea that electronic computers process objects that have an extrinsic semantics. Such objects would be symbols in Searle’s sense of “symbol”. That computers manipulate symbols and only symbols (in Searle’s sense of “symbol”) is a premiss of various versions of the CRA. By defining computers as symbol-manipulating devices, he seems to be trying to prevent any discussion of the question: well, do computers process symbols (in Searle’s sense of “symbol”)? If they don’t, then a CRA premiss is false and the argument is unsound.