The Turing machine
The topic is AI’s concept of the symbol. The approach is to start with what the term “symbol” was first used to denote and proceed from there. This leads directly to AI founder and renowned British mathematician Alan Turing.
In his 1936 paper, “On Computable Numbers, with an Application to the Entscheidungsproblem” (available online), Turing describes a machine now known as the Turing machine. The machine is regarded as performing seven simple operations, called Scan, Print, Erase, Left, Right, Halt and Change State. It can be configured in two fundamentally different ways. When configured in the first way, it is usually called just a Turing machine but might be more clearly identified as a dedicated Turing machine. When configured in the second way, it is called “universal”. The difference between dedicated and universal can be considered at a later time: it’s not strictly relevant to Turing’s concept of the symbol.
The design of the machine is said to be abstract because one part, the tape, has infinite length. Apart from that, a simple dedicated version of the machine could easily be built (one with a simple configuration, a basic sequencing, of the simple operations). In practice, the requirement of infinite length can be avoided by adding further sections to a fixed-length tape as required.
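To make the simplicity of those operations concrete, here is a minimal sketch of a dedicated machine in Python, with a tape that grows on demand in place of the abstract infinite tape. The operation names, the table layout, and the sample machine (one that inverts a binary string) are illustrative choices, not Turing’s own notation.

```python
# A minimal sketch of a dedicated Turing machine (illustrative; not Turing's
# own notation). The transition table is the machine's fixed configuration;
# the tape grows on demand, standing in for the abstract infinite tape.

def run(tape, table, state="start", pos=0, max_steps=1000):
    tape = list(tape)
    for _ in range(max_steps):
        if pos == len(tape):              # add a further section as required
            tape.append(" ")
        symbol = tape[pos]                # Scan
        action, next_state = table[(state, symbol)]
        if action == "halt":              # Halt
            break
        elif action == "erase":           # Erase
            tape[pos] = " "
        elif action == "left":            # Left
            pos -= 1
        elif action == "right":           # Right
            pos += 1
        else:                             # Print the given symbol
            tape[pos] = action
        state = next_state                # Change State
    return "".join(tape).rstrip()

# A dedicated machine that inverts a binary string, halting at the first blank.
table = {
    ("start", "0"): ("1", "move"),
    ("start", "1"): ("0", "move"),
    ("move", "0"): ("right", "start"),
    ("move", "1"): ("right", "start"),
    ("start", " "): ("halt", "start"),
}

print(run("0110", table))  # prints 1001
```

Note that the machine dispatches purely on the shape of the scanned symbol; nothing in the table involves what “0” or “1” means.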
Turing calls his design a “computing machine”, and it internally manipulates things he calls “symbols”. He begins his explanation:
“Computing is normally done by [a human] writing certain symbols on paper…”
Thus in the quest to investigate AI’s concept of the symbol, we start with the things humans inscribe on paper when manually performing a computation.
These things are linguistic symbols such as are found in books, written on tax returns, and so on. The subset typically used in computing includes such symbols as 1, 2, A, B, +, =, %, ≥, and ∑.
Now, writing certain symbols on paper means inscribing them, typically with a pen. In terms of substance, the symbols are ink of a certain shape, size, and thickness. In other words, they are units of substance that bear certain properties.
Symbols have meanings
But not all shapes are symbols. Symbols are shapes that have meanings. The shapes denote, refer to, stand for things, and they do this by virtue of conventional assignment. Some person or community has assigned a meaning to the shape. This is a basic idea of written language.
The meaning isn’t necessarily relevant to the mechanics of computing. A mechanical or electrical adding machine doesn’t need to know the meanings of the numerals and of “+”, “-”, etc. It just needs to be designed by some human who does know. But if a machine itself is to have human-like intelligence, then the machine itself does need to know. So while meaning isn’t necessary to the mechanics of computing, it’s crucial to AI.
Considering a human computing with pen and paper, what exactly exists? There is the particular substance – the ink on the paper. Then there is the shape of the ink. It is the shape that has the meaning. So two things exist: the substance of the symbol and the meaning of its shape.
Part of Turing’s concept of symbol is the idea of a shape having a meaning. How are we to understand this having of a meaning? It isn’t like having a size or shape. A size or shape is inherent to the substance that bears it. A certain shape is a value of a property, the property of shape. Get the substance and you automatically also get its properties, such as shape.
The relationship of has-a-meaning
But to say a symbol has a meaning is a different sort of “has”. The two elements, the shape and its meaning, are quite separate. They don’t travel together. You can have the symbol, but this doesn’t amount to also having its meaning. If it did, we wouldn’t have needed the Rosetta Stone. We wouldn’t need to learn languages. Mere possession of the written words would enable understanding them. But we do need to learn languages. We already have the shapes. Then we need to acquire their meanings, and this takes some time.
A way to understand the idea of a shape having a meaning is that while to have a shape is to possess a property, having a meaning is a 2-term relationship. This relation can be called has-a-meaning. One term is a shape. The other and separate item is a meaning.
Having the shape alone is not enough to get to its meaning. You also need the connector, you need that in which the relation subsists, that by virtue of which the shape and the meaning are related (rather than not related). There are really three items here: the two terms related plus an element that relates them together.
If you get the shape plus the connector, that’s fine, then you can get from the shape to the meaning (by following the connector). But if all you get is the shape, then you’re out of luck. If all a machine gets is the shape, then the machine is out of luck. It will never understand the symbols it receives.
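The three-item picture can be made concrete with a toy sketch. Here the connector is modeled as a lookup table; the shapes and “meanings” are invented purely for illustration, and real conventional assignment is of course nothing like a Python dictionary.

```python
# Toy sketch of the three items: a shape, a meaning, and the connector
# that relates them (all names and "meanings" here are invented).

connector = {
    "+": "the addition operation",
    "7": "the number seven",
}

def interpret(shape, connector=None):
    """A machine that gets only the shape is out of luck; one that
    also gets the connector can follow it to the meaning."""
    if connector is None:
        return None
    return connector.get(shape)

print(interpret("+"))             # None: shape alone, no meaning reached
print(interpret("+", connector))  # prints: the addition operation
```

The point of the sketch is only that the meaning is reachable exclusively through the third element; delete the table and the shape remains a bare shape.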
Before going on, though, it’s relevant to note that the idea of has-a-meaning as a 2-term relation between a shape and a meaning has problems. It will (hopefully) be improved upon shortly. For the moment it’s perfectly adequate to regard the human semantic, or meaningful, use of linguistic symbols as the ability to get from shapes (such as those in text books) to their meanings.
State of mind
Turing continues:
“The behaviour of the [human] computer at any moment is determined by the symbols which he is observing, and his ‘state of mind’ at that moment.”
Here, “state of mind” is usually regarded as referring to how the human computer could react to the shapes of the observed symbols, and is typically thought to comprise a set of conditionals of the form: if the observed symbol shape is such-and-such then do so-and-so.
Such a “state of mind” would not necessarily involve accessing the meanings of the shapes, but would rather involve simply executing causal sequences of the form: if the observed shape is “X” then do so-and-so. In the human case, however, meanings would presumably be accessed when the human observes a symbol. We would expect that observing a symbol would involve activating the meaning of its shape, if known.
The relation has-a-meaning (again)
Now there’s a potentially more accurate way to understand the relation has-a-meaning. Rather than construing it as (some sort of weird) connection between (somehow) a shape and a meaning, what about the following idea. The core relation is between the inner representation of a shape (a neural structure) and a meaning (also a neural structure). On this idea, the connection is no longer mysterious but consists of neural fibers joining two neural structures.
This seems an interesting idea. The symbol the human computer manipulates (creates, deletes, identifies, moves) is external to the brain and inscribed on a piece of paper. Both the symbol and the inner neural structure that represents the symbol’s shape have shapes. BUT, and this is what seems really interesting, only the shape of the external symbol has a meaning. No meaning has been assigned to the wildly filamentous inner structure, which in any case is probably unique to the individual. So the shapes in text books have meanings, but their neural “representations” don’t. And since a symbol is a shape with a meaning, the neural representations are not symbols.
I think most people would agree that the shapes of neural structures have not been assigned meanings in the sense that shapes of symbols in books have. Given that human brains comprise neural structures, this seems to be saying that there are no symbols stored inside human brains.
Now returning to Turing’s idea of state of mind. It seems that while a human’s state of mind (including inner neural representations of observed symbol shapes) determines the behavior of the human performing a computation with pen and paper, there are no symbols stored in that organic cellular state of mind. It’s entirely non-symbolic.
External symbols
Of course it almost goes without saying (but I especially need to say it) that such symbols as the human computer writes on paper are external to the human brain and observed via sense perception. What enters the brain is the proceeds, as we say in patent claim lingo, of the sensory apparatus, not the ink on the paper.
Summary
To review, humans when computing with pen and paper manipulate (create, delete, identify, move) objects called linguistic symbols. The shapes of these have been assigned meanings by a person or community. The term “symbol” can be used to refer either to particular such objects, called tokens, or abstractly to their shapes.
Linguistic symbols are explained by saying that their shapes are related to meanings by conventional assignment. What happens during human computing with pen and paper is that a symbol token is observed, this sensory process activates an internal representation of the shape, and this inner representation structure is neurally connected to a meaning, also a neural structure.
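The two-step picture just summarized (external token to inner shape representation, representation to meaning) can be sketched as two composed lookups. Everything here is a stand-in: real neural structures are not dictionary entries, and the names are invented.

```python
# Sketch of the summary's pipeline (all structures are stand-ins):
# external token --perception--> inner shape representation --neural link--> meaning

perceive = {"token_7": "shape_rep_7"}           # sensory activation
neural_link = {"shape_rep_7": "meaning_seven"}  # the has-a-meaning relation

def observe(token):
    rep = perceive[token]        # observing activates the inner representation
    return neural_link.get(rep)  # follow the neural connection, if any

print(observe("token_7"))  # prints: meaning_seven
```

On this sketch, note that the key of `neural_link` is the inner representation, not the external token: the has-a-meaning connection holds between two inner structures.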
Meaning and denotation, or reference
On this scheme of things, the reference, or denotation, of a shape is distinguished from its meaning. The meaning is internal; the denotation is typically external. Consider fictional reference, for example. How could the term “unicorn” have a meaning when there are no such things as unicorns? This question is said to confuse denotation with meaning. The meaning of the shape “unicorn” is an assemblage of neural structures (the ideas of a bloated pony, a tapering barley-twist horn, etc.). So “unicorn” does have a meaning. But it has no reference. (Assuming there are in fact no such things as unicorns existing independently of people’s ideas of them.)
Much more could be said about this endlessly fascinating subject. But the idea is to get some notion of the concept of the symbol as Turing used it in 1936 when, introducing his Turing machine design, he said:
“Computing is normally done by [a human] writing certain symbols on paper…”
The Turing machine
Turing then continues:
“We may now construct a machine to do the work of this [human] computer…”
associative-ai.com