I’ve added the following to the start of the page on Weird Theory. The addition sets out the two main claims of the theory that I want to try to adequately explain.
The (very) weird theory says that the relation of temporal contiguity (one object following another in time as they pass a point or a line, or pass through a surface) explains the two main problems of AI:
- How a system could learn about the world by processing the intrinsically meaningless objects emitted by sensors (the severe problem exposed by the Chinese room argument (Searle, 1980)), and
- How such a system could have human-like general knowledge (the base problem behind the philosophical objections to the computational theory of mind, including the frame problem (McCarthy and Hayes, 1969), the problem of common-sense knowledge (Dreyfus, 1965), and the problem of combinatorial explosion (Lighthill, 1973)).
It’s going to be fairly hard to explain this. But the basic idea is that:
(a) an inner semantic structure can’t be built from (let’s call them) “symbols” alone, but it can be built from sensory symbols plus permanent records of the temporal relation of contiguity between the symbols in the sensory stream (the permanent records being called connections, or for computers, typically pointers; a rough code sketch of this appears after (b)), and
(b) programming a computer starts at the detail level and then seeks to achieve generality through sheer quantity of conditionals, which quickly leads to the three problems mentioned above. The relation of temporal contiguity, however, starts at the most general level and, as the quantity of recorded instances of contiguity grows, becomes progressively more detailed. Hence a system that embodies the principle of temporal contiguity starts general and then develops detail, whereas the method of using human knowledge to define the causation of the system (by programming it with conditionals) starts at the detail level and, through quantity, seeks generality but, because of the quantity of conditionals needed, never truly achieves it. The second sketch below tries to show this contrast.
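
To make (a) slightly more concrete, here is a minimal sketch of what I mean by permanent records of contiguity. The names (ContiguityMemory, observe, successors) and the Python framing are my own, invented purely for illustration; the only point is that the inner structure is built from the sensory symbols plus counted links between temporally adjacent symbols, never from the symbols alone.

```python
from collections import defaultdict

class ContiguityMemory:
    """Sensory symbols plus permanent records of which symbol followed which."""

    def __init__(self):
        # symbol -> {symbol that followed it -> number of recorded contiguities}
        self.connections = defaultdict(lambda: defaultdict(int))

    def observe(self, stream):
        """Record one contiguity link for every temporally adjacent pair in the stream."""
        for earlier, later in zip(stream, stream[1:]):
            self.connections[earlier][later] += 1

    def successors(self, symbol):
        """Symbols that have followed `symbol`, most frequently recorded first."""
        links = self.connections[symbol]
        return sorted(links, key=links.get, reverse=True)

memory = ContiguityMemory()
memory.observe(["cloud", "rain", "wet-ground"])
memory.observe(["cloud", "rain", "wet-ground"])
memory.observe(["cloud", "rain", "umbrella"])
print(memory.successors("rain"))   # ['wet-ground', 'umbrella']
```

The symbols themselves carry no meaning here; whatever structure the system has comes entirely from the recorded links (the "connections" or pointers) and their accumulation over time.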
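And for (b), a companion sketch of the contrast, again purely illustrative and with made-up names: the programmed route needs one conditional per anticipated case and only approaches generality by piling up more of them, while the contiguity route starts from a single fully general rule (record whatever follows what) and becomes more detailed only as recorded instances accumulate.

```python
from collections import defaultdict

def programmed_prediction(symbol):
    # Detail-first: one conditional per case the programmer anticipated;
    # generality is sought only by adding ever more conditionals like these.
    if symbol == "rain":
        return "wet-ground"
    if symbol == "cloud":
        return "rain"
    if symbol == "lightning":
        return "thunder"
    return None  # anything unanticipated falls through the gaps

def record_contiguities(memory, stream):
    # General-first: one rule ("record whatever follows what"), applied to every symbol alike.
    for earlier, later in zip(stream, stream[1:]):
        memory[earlier][later] += 1

def learned_prediction(memory, symbol):
    followers = memory.get(symbol)
    return max(followers, key=followers.get) if followers else None

memory = defaultdict(lambda: defaultdict(int))
for episode in (["cloud", "rain", "wet-ground"],
                ["lightning", "thunder", "rain"],
                ["smoke", "fire", "ash"]):   # a case no conditional was ever written for
    record_contiguities(memory, episode)

print(programmed_prediction("fire"))         # None: falls outside the programmed detail
print(learned_prediction(memory, "fire"))    # ash: covered by the same general rule
```

The toy example obviously proves nothing by itself; it is only meant to display the direction of travel in each case, from detail toward (never-quite-reached) generality in the first function, and from full generality toward accumulating detail in the second.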