Three days ago, federal investigators at the NTSB released their preliminary report into the death of pedestrian Elaine Herzberg, who was hit by an Uber self-driving car operating in autonomous mode. Wired explains:
“The car’s radar and lidar sensors detected Herzberg about six seconds before the crash—first identifying her as an unknown object, then as a vehicle, and then as a bicycle, each time adjusting its expectations for her path of travel.”
She was pushing a bicycle across the road at night, at right angles to the car’s direction of travel. Radar and lidar are supposed to “see” more than 100 metres ahead, allowing the vehicle to avoid obstacles. The car did not slow down.
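As a quick sanity check (the speed below is an assumption, in line with the roughly 40 mph widely reported for the car, not a figure from the text above), a 100-metre sensing range is a generous time budget at urban speeds:

```python
# Rough time budget offered by a 100 m sensing range.
# Assumption: vehicle speed ~18 m/s (about 40 mph); not stated above.
sensor_range_m = 100.0
speed_mps = 18.0
print(f"time from first detection to arrival: {sensor_range_m / speed_mps:.1f} s")
# -> about 5.6 s, consistent with the roughly six seconds of detection
#    described in the Wired account
```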
It was night time. Had a human been in attentive control, they would have seen a slow-moving object progressing towards the path of the car. They would have known the object was not itself a car (no tail lights or headlights), or that if it was a car with its lights turned off, that meant special danger, and they would have slowed down and concentrated on the object ahead. The Uber car’s headlights would then have clearly illuminated Herzberg and her bike in time for the car to stop or change lanes to avoid a collision. And maybe they would have tooted the horn to alert Herzberg to the danger.
Why didn’t the Uber vehicle, under the control of AI software, do any of these things? Part of the answer is given by the software identifying the object “as a vehicle, and then as a bicycle, each time adjusting its expectations for her path of travel”. Why was the object identified as a vehicle? A human would simply withhold judgement, slow down, probably change lanes, and wait until the object was properly illuminated.
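To make the contrast concrete, here is a minimal, entirely hypothetical sketch (this is not Uber’s code; the class and thresholds are invented for illustration) of how a perception stack that restarts its path prediction on every reclassification differs from the conservative, judgement-withholding policy a human applies:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    # Hypothetical record for one detected object; invented for illustration.
    label: str
    path: list = field(default_factory=list)  # recent observed positions

    def observe(self, position, new_label):
        if new_label != self.label:
            # The failure mode suggested by the Wired account: each
            # relabelling (unknown -> vehicle -> bicycle) restarts the
            # path prediction, so the object never accumulates a history
            # of steadily crossing the road.
            self.label = new_label
            self.path.clear()
        self.path.append(position)

def human_like_policy(obj: TrackedObject) -> str:
    # Withhold judgement: any poorly identified or barely tracked object
    # ahead is reason enough to slow down and prepare to change lanes.
    if obj.label == "unknown" or len(obj.path) < 3:
        return "slow down, cover the brake, prepare to change lanes"
    return "proceed with caution"
```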
The NTSB crash report notes that the Uber car had its emergency braking system intentionally disabled, which Uber said was to ensure a more comfortable ride. This was widely reported as meaning that false positives in the Uber AI system had previously produced emergency braking, swerving and other harsh manoeuvres, leading to an uncomfortable ride.
The report also says that the car’s AI system determined, 1.3 seconds before the fatal impact, that emergency braking was needed, and would have applied it had emergency braking been enabled. One point three seconds, when the bicycle had been visible in the headlights for several seconds?
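Some back-of-envelope physics suggests how much even that late warning was worth. The numbers below are assumptions (a speed of about 18 m/s, roughly the 40 mph widely reported, and a typical hard-braking deceleration of 7 m/s²); only the 1.3 seconds comes from the report:

```python
import math

v0 = 18.0        # m/s, assumed initial speed (~40 mph)
a = 7.0          # m/s^2, assumed emergency deceleration on dry asphalt
t_warning = 1.3  # s, warning time before impact (from the report)

d_to_impact = v0 * t_warning  # distance still to cover: ~23.4 m
d_stop = v0**2 / (2 * a)      # braking distance to a full stop: ~23.1 m
v_at_impact = math.sqrt(max(0.0, v0**2 - 2 * a * d_to_impact))

print(f"distance to impact point: {d_to_impact:.1f} m")
print(f"braking distance to full stop: {d_stop:.1f} m")
print(f"speed at impact point if braking: {v_at_impact:.1f} m/s")
# Under these idealised assumptions the car stops just short of the
# impact point; even with thinner real-world margins, 1.3 s of hard
# braking would have removed most of the impact speed.
```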
And the car did not even slow down. Slowing down when an object is detected ahead, identified or not, is not emergency braking; it is just ordinary driving.
All the above could be excused as human failure to adequately program the AI system. But these facts point to a much wider and more fundamental problem for self-driving cars and for AI.
How come all self-driving cars need some or all of radar, sonar and lidar, as well as cameras? Humans just have eyes. The answer: AI doesn’t have the foggiest clue how human vision works.
Yet AI calls its camera systems “vision” systems (as does Computer Science). By so thoroughly embracing this false terminology, AI demeans itself and weakens respect for itself, and for Computer Science. What AI calls “vision” isn’t vision at all. It’s nothing like vision. It’s software written in the obsessive, deranged belief that, given enough processing power, computation is the answer to all problems.
This highly defective impetus can be traced back to Turing, for whom the computer’s program is a description of the machine (of the system being simulated), and the program fully predicts whatever behaviour the computer might exhibit. (This causes all sorts of problems, such as: how could learning from experience be possible?) So, according to this error-ridden view, all the conditional causality of the computer’s behaviour lies in the program, in the computation.
According to Turing, the universal Turing machine can simulate any system that can be accurately described, and the program (the description) defines the behaviour of the universal Turing machine. So everything comes down to the program, or software: the computation. What a major error.
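For the record, in the standard notation of computability theory the claim being paraphrased is that U(⟨M⟩, x) = M(x) for every Turing machine M and every input x: the universal machine U, given the description ⟨M⟩ as its program, reproduces M’s behaviour on any input, so that behaviour is fixed entirely by the description.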