I’ve just added some paragraphs to the start of the page on Weird Theory. Changed paragraphs are below.
The (very) weird theory says that the relationship of temporal contiguity – one object following another in time as they pass a point, cross a line, or pass through a surface – addresses two important problems of AI:
- Perception. How a system could learn about the world by processing the intrinsically meaningless objects emitted by sensors (the severe problem exposed by the Chinese room argument), and
- Generalization. How such a system could have human-like general knowledge (the base problem behind the philosophical objections to the computational theory of mind, including the frame problem (McCarthy and Hayes, 1969), the problem of common-sense knowledge (Hubert Dreyfus, 1965), and the problem of combinatorial explosion (James Lighthill, 1973)).
It’s going to be fairly hard to explain this. But the basic ideas to be explained are:
(a) Building a semantic structure. An inner semantic structure can’t be built from (let’s call them) “symbols” alone, but it can be built from sensory symbols plus permanent records of the temporal relation of contiguity between the symbols in the sensory stream (the permanent records being called connections, or for computers, typically pointers), and
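To make the idea in (a) concrete, here is a minimal sketch – the class and symbol names are my own illustrative choices, not part of the theory – in which the sensory symbols are plain strings and each instance of temporal contiguity between adjacent symbols in the stream is kept as a permanent record (here a dictionary entry standing in for a connection or pointer, with a count of recorded instances):

```python
# Illustrative sketch only: symbols are strings, and "connections" is a
# dictionary acting as a pointer table. Each entry connections[a][b] is a
# permanent record of instances of symbol b following symbol a in time.

from collections import defaultdict

class ContiguityStructure:
    def __init__(self):
        # connections[a][b] = number of recorded instances of b following a
        self.connections = defaultdict(lambda: defaultdict(int))

    def observe(self, stream):
        """Record the contiguity relation between each adjacent pair."""
        for a, b in zip(stream, stream[1:]):
            self.connections[a][b] += 1

    def successors(self, symbol):
        """Return the symbols recorded as temporally contiguous with `symbol`."""
        return dict(self.connections[symbol])

s = ContiguityStructure()
s.observe(["cloud", "rain", "wet-ground", "cloud", "rain"])
print(s.successors("cloud"))  # {'rain': 2}
print(s.successors("rain"))   # {'wet-ground': 1}
```

The point of the sketch is only that the structure is built from exactly two ingredients: the symbols themselves, and the recorded relation of contiguity between them – nothing about the symbols’ “meaning” is programmed in.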
(b) Embodying generalization. Programming a computer starts at the detail level and then seeks to achieve generality through a quantity of conditionals. This quickly leads to the three problems mentioned above. The relation of temporal contiguity, however, starts at the most general level and then, as the quantity of recorded instances of contiguity increases, becomes progressively more detailed.
Hence a system that embodies the principle of temporal contiguity starts at the general and then develops detail, whereas the method of using human knowledge to define the causation of the system (by programming it with conditionals) starts at the detail level and, through quantity, seeks generality – but, because of the quantity of conditionals needed, never truly achieves it.
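The contrast in (b) can be shown in a toy form (again, the function names and example symbols are my own, chosen only to illustrate the direction of development in each method):

```python
# Toy contrast, illustrative only.
# Detail-first: every case the system handles must be anticipated by a
# programmer-written conditional; unanticipated cases simply fail.
# General-first: a single rule ("record what followed what") covers every
# symbol from the start, and detail accumulates with recorded instances.

from collections import Counter

def programmed_next(symbol):
    if symbol == "cloud":
        return "rain"
    if symbol == "rain":
        return "wet-ground"
    return None  # every unanticipated case needs yet another conditional

def learned_next(counts, symbol):
    followers = counts.get(symbol)
    if not followers:
        return None  # still at the fully general stage: no instances yet
    # Detail: the most frequently recorded contiguity instance wins.
    return followers.most_common(1)[0][0]

counts = {}
stream = ["cloud", "rain", "wet-ground", "cloud", "rain", "cloud", "drizzle"]
for a, b in zip(stream, stream[1:]):
    counts.setdefault(a, Counter())[b] += 1

print(programmed_next("drizzle"))     # None: no conditional was written for it
print(learned_next(counts, "cloud"))  # 'rain': 2 recorded instances beat 1
```

Nothing in `learned_next` mentions clouds or rain; all its detail comes from the quantity of recorded contiguity instances, which is the sense in which the method starts general and becomes detailed.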
(c) Content of a sensory stream. Assuming computers will think, everything they intrinsically learn about the world is going to come to them from their attached sensors. Specifically, it’s going to be embodied in the stream of objects the sensors emit into the internal world and send to the computer, i.e., sensory streams. So what do sensory streams contain? Searle and the Chinese room argument say: just symbols. But in fact they contain three types of “thing”:
1. the objects emitted by the sensor (which Searle calls symbols),
2. the relationship of temporal contiguity between these emitted objects, and
3. repetition of instances of the relationship.
The issue is to explain how an internal semantics could be built from 1, 2 and 3.