What is full self-driving functionality?
A relevant question arises from The Latest Lies of Elon Musk Issue 1, Vol. 1: What does “full self-driving features” mean?
Musk says: “With V9 [of Tesla Autopilot software], we will begin to enable full self-driving features.” But there is no evidence that the Tesla sensor set is good enough to support generally safe, unrestricted driving on public roads.
The only definition that seems to make good sense is:
Full self-driving functionality is the functionality a responsible skilled human driver has. The fully autonomous car will react the way a skilled responsible human driver would react.
You often see the claim that AI software will make autonomous cars much safer than human drivers generally. This is quite wrong. A computerized system would only rarely be safer than a responsible skilled human driver: accidents are caused by humans who lack the skills, are under the influence, are over-tired, and so on.
Also, everything about road safety has been designed for human perception. Putting vehicles on those roads that are controlled by a sensor system completely different from human perception is dangerous in itself.
Humans make a lot of assumptions (valid assumptions) about vehicle behaviour, based on years of experience of the behaviour of vehicles controlled by humans, and these assumptions define expectations. But AI-controlled cars behave differently from human-controlled ones. Put a different set of behaviours in the driving seat, and there are going to be accidents.
So what is full self-driving functionality? It’s the functionality that responsible skilled humans have. (And that Teslas – and other ‘AV’s – don’t have – and don’t seem to have any chance of getting in the near-to-medium-term future.)
How will we know when a car has full self-driving functionality?
How do you tell when AI software and the available hardware will produce the behaviour of a responsible skilled human? Testing alone isn’t enough: it will never adequately cover unusual situations, called ‘edge cases’, and these are exactly the cases where human-like perception and skill are most important, especially generalization.
The principles of human perception need to be identified and understood, and it needs to be quite clear that these are adequately realized in the self-driving software and hardware. In other words, empirical evidence alone isn’t enough for confidence; we also need to be sure the theory is right.