Read the draft paper Learning Chinese, also at Academia.com. It took a while to work out how to say what I want to say in a more or less complete way, but Learning Chinese feels like progress.
Themes of the paper:
- AI uses concepts developed from the human use of the device called a “computer”. The name of the device comes from the human use. The idea that the machine computes on what it processes comes from the human use. The fiction that the machine processes symbols comes from the human use. The Chinese room argument (CRA), including its thought experiment, is based on these concepts derived from the human use of the device.
- However, the only place symbols occur inside a computer is printed on components or on the motherboard, and of course they are printed there so that humans can see them. Outside of the computer proper, symbols occur merely as encrustations on the exposed surfaces of attachments: printed on the key caps of a keyboard, fused into or sprayed onto paper by a printer, or as activated pixels in a display. All this is for human use. That’s why the shapes are encrusted on external surfaces – so humans can see them.
- Computers don’t process symbols, and what they do process has no meaning of any sort, neither intrinsic nor extrinsic. The Chinese room thought experiment assumes that what computers process has an extrinsic meaning but no intrinsic meaning. This is wrong.
- One approach to a better understanding of the computer is to abandon the concepts of symbols and computation. Instead, think of computers as processing units of substance, where each unit can have values of properties and units are related to one another. The processing the machine performs can then be seen as reaction to the substance per se, to the values of properties of the substance, and to the relations between the units of the substance.
- The paper pursues this approach and concludes that the CRA premise that computers are purely syntactic devices is false, and further that computers can create, modify and delete relational elements that, along with the units of substance so related, can build inner semantic structures. Another conclusion of the paper is that one type of such structure is based on a principle that seems to offer inherent generalization.
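The units-plus-relations view above can be sketched in code. This is my own illustration, not the paper’s implementation: the class names (Unit, Relation, Machine) and the particular reaction chosen are assumptions made for the example. The point is only that the machine below reacts to units, to values of their properties, and to relations between units, with no symbol processing anywhere.

```python
class Unit:
    """A unit of substance carrying values of properties."""
    def __init__(self, **properties):
        self.properties = properties  # e.g. {"level": "high"}

class Relation:
    """A relational element linking two units (created by the machine itself)."""
    def __init__(self, source, target, kind):
        self.source, self.target, self.kind = source, target, kind

class Machine:
    """Processes units of substance by reacting to them and their relations."""
    def __init__(self):
        self.units, self.relations = [], []

    def add_unit(self, **properties):
        u = Unit(**properties)
        self.units.append(u)
        return u

    def relate(self, source, target, kind):
        # The machine can create relational elements between units;
        # it could equally modify or delete them.
        r = Relation(source, target, kind)
        self.relations.append(r)
        return r

    def react(self, unit):
        # Processing as reaction: follow relations outward from a unit
        # and gather the property values of the related units.
        return [r.target.properties for r in self.relations if r.source is unit]

m = Machine()
a = m.add_unit(level="high")
b = m.add_unit(level="low")
m.relate(a, b, kind="next")
print(m.react(a))  # → [{'level': 'low'}]
```

The structure of related units here is the kind of thing the paper calls an inner semantic structure; nothing in the machine’s operation depends on any shape being readable by a human.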
associative-ai.com