In 1969, John McCarthy, mathematician, early AI leader, and principal organizer of the 1956 Dartmouth College summer workshop on “Artificial Intelligence”, together with computer scientist Patrick Hayes, said,
“A computer program capable of acting intelligently in the world must have a general representation of the world in terms of which its inputs are interpreted. Designing such a program requires commitments about what knowledge is and how it is obtained. Thus, some of the major traditional problems of philosophy arise in artificial intelligence”. (“Some Philosophical Problems from the Standpoint of Artificial Intelligence”, available online.)
In 1992, the famed American philosopher John Searle said,
“…we have an impressive set of electronic devices that we use every day. Since we have such advanced mathematics and such good electronics, we assume that somehow somebody must have done the basic philosophical work of connecting the mathematics to the electronics. But as far as I can tell, that is not the case”. (The Rediscovery of the Mind, The MIT Press.)
In 2012, the renowned Oxford physicist David Deutsch, discussing AI, said,
“I cannot think of any other significant field of knowledge where the prevailing wisdom, not only in society at large but among experts, is so beset with entrenched, overlapping, fundamental errors … The lack of progress in AGI is due to a severe logjam in misconceptions … The whole problem of developing AGI is a matter of philosophy.” (The Guardian, 3 October 2012.)
So it’s not as though AI leaders and leading philosophers haven’t recognized the need for cross-pollination between philosophy and AI. But it hasn’t happened. We humans are now being creamed by AI systems controlling cars on public roads, systems that make basic errors no attentive human would make. It seems appropriate to try to help out; there’s probably even a duty to.
Deutsch mentions “a severe logjam in misconceptions”. What are these misconceptions? I’d like to do a series of posts analyzing AI’s key concepts, hopefully explaining clearly the deep errors that, in my view, the concepts contain (at least I’ll argue the errors are fundamental).
The concepts will be: symbol, computation, sensory stream, information, Turing machine, knowledge, perception, and a few others. I thought it useful to do a post per concept. There’ll be a bit of an emphasis on the Chinese room argument, which, though wrong in my view, clarifies quite a few important issues. The next post will probably be on the symbol, since it makes sense to start with a basic idea and go from there.
associative-ai.com