I’ve just published my page on John R. Searle’s Chinese room argument (CRA). A gazillion enthusiasts have offered (alleged) rebuttals of the CRA.
Yet despite so much commentary, partly or fully for or against, the jury on the CRA is still out. The different sides (mainly AI and those outside AI) have retreated to entrenched positions. A generally agreed resolution seems distant.
So, for what it’s worth (I think it’s worth something, but then I would), here is what I think:
- Searle is right when he says that symbols (tokenized shapes that carry meanings, e.g. Chinese ideograms) are semantically vacant: they contain within themselves no indication of what they mean or refer to.
- The fact that symbols are semantically vacant is irrelevant to the quest for artificial intelligence because computers do not process symbols.
- However, the CRA is valuable in that it shows that the semantics of a thinking computer would have to derive not from any putative referential quality of what computers actually process, but from something else inside the machine.
- According to Searle, there is nothing else. But this is wrong. As well as what they process (tokenized digital values), computers can hold in memory relationships between stored instances of the things they process. Such relationships can be realized as, for example, pointers, which are not input-output digital values but items that relate internal memory locations to one another.
- Tokenized digital values plus stored relationships between them can equal a semantics.
- This is because input streams from digital sensors contain not just the tokens emitted by the sensor (tokens which are themselves tokenized digital values) but also instances of a relationship between those tokens: the relationship of temporal contiguity. The tokens plus the relationships can, via a couple of very simple algorithms, build a semantic structure comprising both stored tokens and stored instances of the temporal relationship (a sketch of the idea follows this list). Hence the stored token is a component of the internal semantics but is not by itself sufficient.
- The CRA assumes that the token is, by definition, the only thing available inside the machine, so that if there is a semantics inside the machine the token alone must supply it. Since Searle holds that the token has no semantics, he concludes there can be no semantics inside the machine. But the premise is false: the token is not the only thing available.
- The CRA fails because it overlooks the importance of (indeed, does not even allow the existence of) relationships between, in its terminology, symbols.
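To make the point concrete, here is a minimal sketch of the kind of structure I have in mind. It is written in Python, with object references standing in for pointers, and all the names in it are hypothetical rather than taken from any published system. Tokens arriving from a simulated sensor stream are stored, and each instance of the temporal-contiguity relationship between successive tokens is stored as a reference between the stored nodes. The semantics-bearing structure is the combination of stored tokens and stored relationships, not the tokens alone.

```python
from collections import defaultdict

class TokenNode:
    """A stored instance of a token received from the sensor stream."""
    def __init__(self, token):
        self.token = token
        # References (the software analogue of pointers) to other stored
        # nodes that occurred in temporal contiguity with this one, with
        # a count of how often each pairing was observed.
        self.next_nodes = defaultdict(int)

class AssociativeStore:
    """Stores tokens plus instances of the temporal-contiguity relationship."""
    def __init__(self):
        self.nodes = {}       # token value -> stored TokenNode
        self.previous = None  # most recently stored node

    def receive(self, token):
        # Store the token itself (a tokenized digital value).
        node = self.nodes.setdefault(token, TokenNode(token))
        # Store the relationship: this token arrived right after the previous one.
        if self.previous is not None:
            self.previous.next_nodes[node] += 1
        self.previous = node

    def associates_of(self, token):
        """Tokens linked to `token` by stored temporal-contiguity relations."""
        node = self.nodes.get(token)
        if node is None:
            return {}
        return {n.token: count for n, count in node.next_nodes.items()}

# A toy sensor stream: the machine receives only tokens, but their order of
# arrival carries the temporal-contiguity relationship between them.
store = AssociativeStore()
for t in ["dark", "cloud", "rain", "dark", "cloud", "rain", "sun"]:
    store.receive(t)

print(store.associates_of("cloud"))  # {'rain': 2}
print(store.associates_of("rain"))   # {'dark': 1, 'sun': 1}
```

The sketch claims nothing about how a real system would do this; its only point is that the machine's memory holds more than tokens. The `next_nodes` references are stored instances of a relationship, and it is the combination of token and relationship that could ground an internal semantics.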