Artificial General Intelligence
I’ve just uploaded a draft paper, an (alleged) rebuttal (sort of) of the Chinese room argument. It’s not the usual sort of reply: it grants that the CRA is correct, but only as far as Turing’s concept of the computer goes (a device that internally manipulates symbols). But there are other concepts.
The CRA fails to address these alternative understandings, and at least one of them could allow the creation of an inner semantics. Hence the conclusion of the CRA, properly interpreted, is not that computers will never think. Rather, it is that the computer as Turing himself conceived it will never think. At least one other accurate conception of the electronic device could allow the acquisition of an internal semantics. Thus (if the argument holds) AI can keep its machine, but only if it rejects Turing.
The answer to all of AI’s legion woes, then, must be: don’t throw out the baby (the machine) with the bathwater (Turing). But definitely, with respect, biff the swill.
associative-ai.com