AI now controls heavy vehicles on city roads, so it seems timely to critically evaluate AI’s claim to have created human-like (or better) intelligence, including deductive reasoning, whether in a narrow or a general sense.
It’s cumbersome to talk about what computers process without adopting the myth that computers process symbols. Since semantics isn’t at issue here, I’ll adopt that myth below.
What does AI mean by “deductive reasoning”? Basically, modus ponens applied to logical implication, also called material implication, symbolized:
P → Q
Meaning P implies Q, or, expressed in conditional form: if P then Q. The stock example is the familiar syllogism:
- All men are mortal
- Socrates is a man
- Therefore Socrates is mortal
For computers, a typical approach goes: deductive logic is not concerned with meaning but is purely syntactic. There is a set of rules, for instance modus ponens, and as long as these are followed, if the premisses are true then the conclusion is, by logical necessity, also true. What the premisses are about is irrelevant. Thus the above syllogism can be reduced to the form:
- All X are Y
- Z is an X
- Therefore Z is Y
Put whatever you like for the placeholders X, Y and Z.
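To make this concrete, here is a minimal sketch in Python (my own illustration, not anything from the AI literature) of such purely shape-based rule-following. Statements are represented as simple tuples, and the program licenses the conclusion by comparing symbol shapes alone:

```python
# A purely syntactic rule-applier: it compares symbol shapes only,
# and knows nothing about men, mortality or Socrates.
# Premiss shapes: ("all", X, Y) stands for "All X are Y";
#                 ("is", Z, X)  stands for "Z is an X".

def apply_rule(p1, p2):
    if p1[0] == "all" and p2[0] == "is" and p2[2] == p1[1]:
        return ("is", p2[1], p1[2])  # the shape "Z is Y"
    return None                      # shapes don't fit; no conclusion

print(apply_rule(("all", "man", "mortal"), ("is", "Socrates", "man")))
# -> ('is', 'Socrates', 'mortal')
print(apply_rule(("all", "qux", "blarg"), ("is", "fizz", "qux")))
# -> ('is', 'fizz', 'blarg')  (nonsense placeholders work just as well)
```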
This deductive reasoning is symbolic and syntactic, and that is just what computers are brilliant at doing – symbol manipulation according to rules. The computer doesn’t have to understand what the symbols being manipulated mean. It just has to be able to identify their shapes and then follow the rules about symbols of those shapes.
(Here, “follow” means to operate automatically as designed. There’s no suggestion that the machine has to understand what the program code means.)
This conditional, symbolic approach to programming a computer is called Symbolic AI, or Good Old-Fashioned AI (GOFAI), and was the first major methodology of AI after the field’s debut in the mid-1950s. When Turing, in his seminal 1950 paper “Computing Machinery and Intelligence”, said “…the problem is mainly one of programming” he meant programming as per Symbolic AI.
The above syllogism can also be expressed in conditional form:
- If Z is an X then Z is Y
- Z is an X
- Therefore Z is Y
If Socrates is a man then Socrates is mortal. Socrates is a man therefore Socrates is mortal. This conditional is a rule. ‘Socrates is a man’ is the input, and following the rule, the output is ‘Socrates is mortal’.
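Written as program code, the rule might look like this (a toy sketch; the string encoding of the statements is my own choice):

```python
def rule(inp):
    # IF 'Socrates is a man' THEN 'Socrates is mortal'
    if inp == "Socrates is a man":   # compare the input's shape
        return "Socrates is mortal"  # emit the prescribed output

print(rule("Socrates is a man"))     # -> Socrates is mortal
```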
Input-output model of programming
The conditional, the main form of deductive reasoning in AI, is the core of the programmed input-output conception of the computer. For robotics, including self-driving vehicles, sensor input comes in, human-created rules are applied, then effector output (as prescribed by the rules) goes out. This sensor-effector conception of the programmable digital machine is widely embraced, and not just within AI. Take even AI’s strong opponent John Searle:
“On this view [being the view Searle calls Strong AI], any physical system whatever that has the right program with the right inputs and outputs would have a mind in exactly the same sense that you and I have minds.” (Minds, Brains and Science, p. 28)
In this “Symbolic AI” programming, ‘deductive reasoning’, as noted, typically means conditionals. The classical form of the conditional statement in programming is the IF…THEN… statement, also IF…THEN…ELSE…. (Some computer languages, for example some assembly languages, do not use IF…THEN… but other forms of the conditional, for instance cmp (compare) followed by je (jump if equal) or jne (jump if not equal).)
Deep Blue, the IBM computer that beat Garry Kasparov at chess in 1997, ran a huge number of conditionals to “brute force” the possible next moves, that is, to run through every possibility for a set number of future moves (though there was, apparently, some human tweaking by a chess grandmaster, too).
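Some back-of-envelope arithmetic shows why such brute-force search has to stop at a set depth. Assuming the commonly quoted average of about 35 legal moves per chess position (an illustrative figure, not a Deep Blue specification):

```python
# With about b legal moves per position, looking d half-moves (plies)
# ahead means examining on the order of b**d positions.
b = 35
for d in (2, 4, 8, 12):
    print(f"{d:2} plies ahead: ~{b**d:.1e} positions")
# 12 plies ahead is already ~3.4e+18 positions, which is why even a
# brute-force engine must prune and cut off at a set number of moves.
```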
Conditionals in program code typically have the generic, or pseudo-code form:
IF THE INPUT = “A” THEN THE OUTPUT = “B”.
The symbol between the first set of quotation marks is an example of a possible input symbol (which, when the conditional executes, may never actually turn up in the input). The symbol between the second set of quotation marks has the shape of an actual output symbol (if there is no output symbol of that shape then the code is in error).
The above program line is a rule. Whatever comes along in the input, the first half of the above line of code executes. But when an “A” comes along, the second half also executes and a “B” is emitted as output. (This need for the first half of all such conditionals to execute, no matter what the current input symbol is, is one of the big problems of Symbolic AI, and is the basis of the frame problem and the related problem of combinatorial explosion.)
All conditionals imply a compare operation. The shape of the current input symbol is compared to what is between the first set of quotation marks. If there is a match, then the rest of the line of code executes (and a “B” is emitted as output). If there is no match, then the rest of the line does NOT execute, and program execution drops down to the next conditional.
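Putting the pieces together, the whole arrangement can be sketched as a rule table plus a matching loop. This is a toy illustration of the control structure, not any particular production system:

```python
# An ordered table of (input-shape, output) pairs, scanned top to bottom.
RULES = [
    ("A", "B"),
    ("C", "D"),
    ("E", "F"),
]

def respond(input_symbol):
    for trigger, output in RULES:    # every rule's compare executes...
        if input_symbol == trigger:  # ...until a shape matches
            return output            # then that rule's output is emitted
    return None                      # no conditional fired

print(respond("A"))  # -> B
print(respond("X"))  # -> None
```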
So when the AI Autopilot software of a Tesla car failed to identify the white side of a large truck as a large, close object (rather than bright sky), and the car drove at speed under the truck, decapitating its driver, Joshua Brown, the software was engaged in some sort of deductive IF…THEN… reasoning. Artificial neural nets (ANNs) might have been involved; the logic, however, was IF…THEN…. The software failed to match the sensory input in a conditional that would have avoided the accident. What AI calls ‘deductive reasoning’ killed Joshua Brown.
When an Uber Volvo XC90 sport utility vehicle drove at speed into pedestrian Elaine Herzberg, killing her, the Uber autonomous-driving software was likewise engaged in deductive reasoning. The conditional whose output would have avoided the accident did not produce the needed output. Deductive reasoning killed Elaine Herzberg.
Human deductive reasoning
What we would normally call human deductive reasoning seems quite different from what AI means by the same words. One might say an example of human deductive reasoning is: if you drive into someone at speed you will probably seriously injure or kill them. Various premisses are missing from this (that human bodies are fragile, that the force of an impact rises with speed, and so on), so the argument is an enthymeme, but it embodies what most people would probably call deductive reasoning.
Computer ‘deductive reasoning’
I’d like to suggest that ‘deductive reasoning’ in a computer has two key features.
(1) Causal. It comprises conditionals – IF…THEN…-type operations – which simply associate a sample of possible input with a sample of possible output: an ordered pair. What happens causally when the conditional executes is very simple. If a match occurs on the input side, the output is immediately produced. There is no mediation by any other process (for instance consciousness, thought or understanding). It’s an automatic, simple and direct causal step from input to output.
(2) Epistemic. In creating the conditional line of code, a human decides which possible input is paired with which possible output. Why, in the conditional statement, is sample output “B” paired with sample input “A”? Because the human knew things about the world. The human decided that action B is an appropriate response to situation A. It’s the knowledge inside the human that determines which sample of possible input is paired with which sample of possible output. There is no knowledge inside the machine. The knowledge is inside the human, the observer. The knowledge, like the understanding, is extrinsic to the computer.
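In code the point is stark. A table of conditionals is just a set of ordered pairs, and the pairing itself is where the author’s knowledge went. The driving situations below are made up purely for illustration:

```python
# Each entry is an ordered pair (situation, action). Why is
# "pedestrian ahead" paired with "brake hard" rather than "accelerate"?
# Nothing in the machine knows. The pairing records what the human
# programmer knew about pedestrians, momentum and injury.
RESPONSES = {
    "pedestrian ahead":   "brake hard",
    "large object ahead": "brake hard",
    "green light":        "proceed",
}

def act(situation):
    return RESPONSES.get(situation)  # pure lookup; no understanding involved

print(act("pedestrian ahead"))  # -> brake hard
```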
The problem with a human deciding the causation of the machine – a human writing a program of conditionals based on what the human knows about the domain the machine will be dealing with (a board game, say, or the sensible world) – is that with complex domains such as the real world, this produces the classical, severe and as yet unsolved theoretical problems of AI: the frame problem, the common-sense knowledge problem and the problem of combinatorial explosion, all of which are linked to generalization.
These theoretical problems of AI probably won’t all be solved by increased computer power. For example, Searle’s Chinese room argument holds that, for fundamental semantic reasons, no symbol-processing device, no matter how powerful, will ever think. A computer with infinite power might solve the combinatorial explosion problem, but according to Searle it could never answer the Chinese room argument.
In a subsequent article, “What does AI mean by ‘Knowledge’?”, I’d like to suggest a way of using a digital computer that avoids these problems of AI, one that also suggests a solution to Searle’s Chinese room argument and Stevan Harnad’s symbol grounding problem.
Intelligence not a mass of conditionals
According to this idea, which I want to present later, human intelligence is essentially NOT a mass of conditionals. Intelligence is not realized as a brain full of neural embodiments of IF…THEN…s. Intelligence is fundamentally not a huge program.
One problem with conditionals is the matching step. An actual input is compared to samples of possible inputs contained inside the rules (samples which might be held in a database), and when a match occurs the output side of the conditional executes. If no match happens, then the next conditional inside the loop executes. If no match again, then the conditional after that runs, and so on. With the feral robot (e.g. a self-driving car) – one that must survive in the wild – there are so many possibilities that combinatorial explosion quickly kicks in. It’s an old paper, but a good one: see Daniel C. Dennett’s “Cognitive Wheels: The Frame Problem of AI”, in Zenon W. Pylyshyn’s The Robot’s Dilemma.
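A quick way to see the explosion: if a situation is described by n independent yes/no features, a rule book that matches whole situations needs, in the worst case, a conditional for every combination. Again, an illustrative toy calculation, not a model of any actual robot:

```python
# With n binary features per situation, exact matching may need up to
# 2**n rules, one per distinct situation.
for n in (10, 20, 30, 40):
    print(f"{n} features -> up to {2**n:,} situations to match")
# 40 yes/no features already give over a trillion distinct situations.
```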
Alternatives to matching
Executing conditionals is not the only way to respond to a situation. On reflection, it seems that, fundamentally, generalization cannot be a matching process; matching, in an important sense, seems to be the opposite of generalizing. So this is the question I want to try to answer: what sort of computer processing can produce adequate responses to situations without executing conditionals?
A human might explain an action as resulting from a decision arising from a conditional – might explain it, that is, as an item of what can be called human deductive reasoning. But this explanation might be epiphenomenal: maybe no such inner conditional processing takes place. Inner processing could be of an entirely different sort, not conditional at all. I will argue that, happily, computers can perform a sort of non-conditional processing that seems to avoid combinatorial explosion, the other classical problems of AI, and the Chinese room argument.