The debate over AI law
AI law. The debate is certainly raging (thanks to the AI industry): what legal framework do we need to deal with artificial intelligence, which, as we all know (thanks to the industry’s continual assurances), is just around the corner? Lawyers, judges and legislators need to get busy now and craft common law and legislation for the advent of AI systems, which – as we all know – is just around the corner. Now, we are told, is the prudent time to create AI law.
The real reason
The real reason the industry is pushing for AI law now? It wants advantageous legislation in place before the pile of corpses surpasses the summit of Mt Everest.
Why? Because AI is not AI, and the AI industry knows it.
Systems called “AI systems” don’t have any human-like intelligence, and the AI industry is fully aware of this. The causality of these systems is determined by human designers, programmers and “trainers”, just as the causality of a vehicle’s airbags is determined by humans. And when the airbags explode and injure passengers, the company is liable, not the airbags. Current AI is no different.
Humans decide how AI machines will react. Humans decide how chunks of lidar data and sonar and radar and video data are categorized and processed. Humans – and only humans – are responsible for the behaviour of “AI” systems. The machines know nothing.
But hey, if legislators can be conned into believing the machines are cognizant entities, then the manufacturer could hardly be held liable for what a machine does, and neither could the programmer. I’m not liable for what some other human freely chooses to do. Why should I be liable for what a cognizant machine does?
That’s the con – trick legislators, and get legislation rushed through before too many innards hit the fan. After all, what other option does the AI industry have? It has to act in the interests of its shareholders.