Classic Symbolic AI uses conditional logic – IF…THEN… statements and their equivalents – to define the behaviour of the machine. The programmer writes the conditional statements. These can be regarded as having the form: IF INPUT = “A” THEN OUTPUT = “B”. For instance, IF INPUT$ = “Hi there, how are you?” THEN OUTPUT$ = “Fine thanks, and yourself?”. The programmer knows how the world works, so knows that B is an appropriate intelligent response to input A. The knowledge is in the programmer, not in the machine.
The conditional computer program line is a simple juxtaposition of possible input with possible output. When an A comes along in the input and a B is then emitted as output, nothing happens between A coming in and B going out except simple causation. There is no mediation by knowledge or comprehension or thought. B is simply emitted as an immediate causal consequence of executing the program line IF INPUT = “A” THEN OUTPUT = “B”. The machine itself knows nothing; it is not intelligent.
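To make the point concrete, here is a minimal sketch of that conditional pattern in Python. The rule table and its responses are invented for illustration, not taken from any real system:

# A hand-written rule table: the programmer's knowledge of the world,
# frozen into input-output pairs. The machine holds no knowledge itself.
RULES = {
    "Hi there, how are you?": "Fine thanks, and yourself?",
    "What time is it?": "Sorry, I have no watch.",
}

def respond(user_input: str) -> str:
    # Nothing happens between A coming in and B going out except a
    # lookup: no comprehension, no thought, just IF A THEN B.
    return RULES.get(user_input, "I do not understand.")

print(respond("Hi there, how are you?"))  # Fine thanks, and yourself?

Everything the program “knows” sits in the RULES table, which the programmer wrote; the respond function merely relays it.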
Intelligence without representation
This problem of human-created conditionals is not just a matter of symbols – things that “represent” other things. Rodney Brooks, builder of the famous robotic insects, explains that there are no symbols – no representations – in his robots. That is, there is some degree of genuine intelligence, but without representation.
The robot insect has legs and feelers. It walks over smooth ground and rough ground. When the feelers detect rough ground they send a signal (not a symbol) to the insect’s little central control unit, and in response the unit sends a signal to the legs telling them to lift higher. When the feelers detect smooth ground, the signal to the legs makes them lift a lesser distance.
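Here is a rough structural sketch of that wiring, rendered as a Python control loop. The real robots are wired circuits, not programs, and the signal values and lift heights below are invented for illustration:

ROUGH, SMOOTH = "rough", "smooth"
LIFT_HIGH_MM, LIFT_LOW_MM = 30, 10  # invented heights, millimetres

def control_unit(feeler_signal: str) -> int:
    # The human builder wired this association between feeler signal
    # and leg-lift signal; the unit merely relays one to the other.
    return LIFT_HIGH_MM if feeler_signal == ROUGH else LIFT_LOW_MM

for ground in [SMOOTH, SMOOTH, ROUGH, ROUGH, SMOOTH]:
    print(f"ground={ground}: lift legs {control_unit(ground)} mm")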
What is wrong with this picture?
The intelligence is not in the insect. Why do the legs lift higher over rough ground? Because the human electronics expert, the insect’s designer and builder, knows what signal the feelers emit when they detect rough ground. The human builder knows that legs need to lift higher over rough ground. The human knows what signal needs to be sent to the legs to get them to lift higher. And the human simply wires the control unit so that it associates the feeler signal with the appropriate lift signal. The human creates the right input-output conditional. The insect lifts its legs higher over rough ground because of human knowledge: the human knew about the world. The insect knows nothing. It’s as dumb as a slab of week-old road kill. The intelligence is in the human.
Conditional logic is an output
So the conditional form of programming (or wiring little insect control units) is a way for an observer to use their knowledge to define the causality of the machine.
Conditional programming is an output of the human brain. Well, fine. But why should anyone assume that the human brain itself, internally, works on a gazillion (or any) conditionals? Why think that the internals of human intelligence operate on a program full of conditionals?
Certainly, for the organic creature that survives in the wild, the input to the brain plays a causal role in determining the output. There is stimulus and response, but why should that work by running input against rules that merely juxtapose possible input with possible output? I’ve suggested before that the compare, or matching, step implicit in the conditional operation seems incompatible with the idea of abstraction and of general intelligence: an exact match fails the moment the input varies, however slightly, from what the programmer anticipated.
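That brittleness is easy to demonstrate. Here is the lookup sketch from above again, probed with inputs (invented) that any human would treat as equivalent:

RULES = {"Hi there, how are you?": "Fine thanks, and yourself?"}

def respond(user_input: str) -> str:
    return RULES.get(user_input, "I do not understand.")

# An exact character-for-character match succeeds; anything that
# differs even slightly fails, however similar its meaning.
print(respond("Hi there, how are you?"))     # Fine thanks, and yourself?
print(respond("hi there, how are you?"))     # I do not understand.
print(respond("Hello, how are you doing?"))  # I do not understand.

Nothing in the matching step generalises: the rule covers exactly one string, not the abstract greeting it stands for.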
Maybe the idea of conditional processing is a poor concept to use when trying to understand input-output processing within a human brain. Maybe other concepts are better.
An associative concept
The conditional program line is an association – a possible input is paired with (associated with) a possible output. The associative idea that I want to argue for on this Web site is definitely not this sort of human-created computational association. The association that this site is intended to promote is not a human-created pairing. It is not an output of the human mind. Quite the reverse: it concerns the input to perception. It is not an association between shapes created by the musculature of a cognizant observer – written or spoken symbols – but an association in nature that is detected by sensory apparatus.
The association I want to talk about exists in nature whether or not cognizant beings exist. It’s this natural association that I want to argue is fundamental to perception – that it’s the coal face of perception, and that it’s a relationship that needs to be well understood in order to understand how a computer might perceive.