The common AI view says that if a machine passes the Turing test then it can reasonably be said to have a human-like intelligence. But it might pay to be more specific.
The Turing test
The Turing test (TT), described by Turing in 1950, tests language comprehension. An interrogator puts questions to hidden contestants, knowing that one of them is human. If the answers from both are good and sensible, the interrogator concludes that both contestants understand the questions and, by inductive inference, the language in which the questions were asked. The argument then runs:
- If a system understands a language, then the system is intelligent
- The system does understand the language
- Therefore, the system is intelligent
The TT fails to establish the truth of the key premiss
However, the Turing test fails to establish the truth of the second premiss: that the system does understand the language.
Firstly, there is an inductive inference from the answers received by the interrogator to competence in the language as a whole. The longer the questions and answers continue, the more robust the inference may become, but it is still an inductive inference and hence only probabilistic.
Secondly, a programmer can program the computer contestant with conditionals that dictate the machine's competent answers, in which case the comprehension resides in the programmer, not in the machine. The programmer uses their own knowledge of the meaning of words to define what the machine's responses will be.
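The point can be sketched with a toy "contestant" whose every answer the programmer has scripted in advance. (This is a hypothetical illustration; the questions, answers, and function names are invented for the example, not drawn from any actual Turing-test program.)

```python
# A sketch of a "canned conditionals" contestant: every competent-looking
# answer is dictated in advance by the programmer's own understanding of
# the questions, not by any comprehension in the machine.

CANNED_ANSWERS = {
    "what is the capital of france?": "Paris, of course.",
    "do you like poetry?": "I prefer a good sonnet to free verse.",
    "what is 7 plus 5?": "Twelve.",
}

def contestant(question: str) -> str:
    """Return a pre-scripted answer; fall back to an evasive reply."""
    key = question.strip().lower()
    # The 'comprehension' lives in the programmer who wrote this table.
    return CANNED_ANSWERS.get(
        key, "That's an interesting question; could you rephrase it?"
    )
```

However sensible `contestant("What is 7 plus 5?")` sounds to an interrogator, the machine running it understands nothing: it is only matching strings against a table the programmer filled in.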
Turing’s counter to these issues
Turing seeks to overcome these problems by imposing a minimum time limit: if the questions and adequate answers continue for a certain period of time, then the machine can reasonably be said to have human-like intelligence. But this move is also probabilistic.
In principle a programmer could program a machine to give good answers for a long period of time, even a very long one. A minimum time limit therefore provides no theoretical reason to accept the Turing test as a conclusive test of intelligence.
Suppose there were intelligent machines that could write programs a gazillion times faster than a human. The programmer (now a machine) could write a very large program that could run inside the machine being tested, and the machine being tested could give good linguistic responses for quite some time, but the machine being tested is a mere automaton whose every response is dictated by the knowledge of the programmer.
The minimum-time parameter is at best a papering over of what seem to be the fundamental problems: (a) a computer only does what its programmer programs it to do, and (b) sensible responses to questions are only an inductive indication of general language comprehension.
Algorithms and structures
It would be much better to know what algorithms and structures are needed for language comprehension, and then it would be possible to confirm that the machine being tested is running those algorithms on those structures. But the Turing test avoids this definitive approach.
Third problem of the Turing test
A third problem with the TT is that it could not possibly work on the equipment existing in 1950, because (a) teletype machines do not process symbols (symbols are merely printed on keys and on the paper or screen output), and (b) electronic digital computers do not process symbols; nor did the computers of Turing's day have visual perceptual apparatus, so they could not see the interrogator's questions.
For Turing's 1950 machines, then, the TT does not test linguistic competence. As for today's machines, none exist that can be pointed at questions and then competently respond to them. If such a computer were developed, passing the Turing test would be indicative only of general linguistic competence, and the intelligence indicated would accordingly be only of the general linguistic type.
Chinese room argument
Fourthly, if Searle's Chinese room argument is right, none of the above indicates intelligence, inductive inference or not, because computers fundamentally cannot understand anything and never will.
Value of the Turing test
So what is, and has been, the value of the Turing test? It is difficult to see that it has contributed positively to the project of developing a thinking machine. It served instead as a palliative and a justification during decades of research in which programmers used their own knowledge and intelligence to dictate the behavior of the machine, a counter-productive and negative legacy.