Elon Musk just tweeted about the upcoming release of “Autopilot” software that controls Tesla cars:
“With V9, we will begin to enable full self-driving features.”
The Verge reports:
This would appear to put Musk on track to achieve the promise he made two years ago to offer “full self-driving” capabilities to Tesla owners by 2019.
“We will begin to enable full self-driving features.” Yet where is the evidence that Tesla’s hardware is sufficient for full self-driving? Tesla’s Autopilot system, like other self-driving systems, is a hodgepodge of work-arounds cobbled together with no coherent theory of perception, and its hardware bears little resemblance to the sensory apparatus of humans or other higher animals.
Is it possible that evolution got it right? That it’s a fool’s errand to rely on sonar, radar, lidar, GPS, inertial navigation, and very detailed but immediately out-of-date “maps”? Perhaps only one sensor is needed: human-like vision.
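To make the contrast concrete, here is a minimal sketch (in Python, with every class, function, and parameter name invented for illustration; it is nobody’s actual stack, least of all Tesla’s) of the structural difference between the two approaches: a fused multi-sensor pipeline that needs one detector per modality plus a pre-built map, versus a single pipeline that must infer everything from the image stream.

```python
# Illustrative only: all names here are hypothetical stand-ins, not a real AV API.
from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """Whatever representation a planner would consume."""
    obstacles: list = field(default_factory=list)
    lanes: list = field(default_factory=list)


def fused_perception(camera, radar, lidar, sonar, gps, imu, hd_map) -> WorldModel:
    """Multi-sensor stack: per-modality detectors plus map-based localization."""
    detections = []
    for sensor, detect in [(radar, radar_detector), (lidar, lidar_detector),
                           (sonar, sonar_detector), (camera, camera_detector)]:
        detections.extend(detect(sensor))
    pose = localize(gps, imu, hd_map)         # only as good as the map is current
    return WorldModel(obstacles=detections, lanes=hd_map.get("lanes", []))


def vision_only_perception(camera) -> WorldModel:
    """Single-sensor stack: everything inferred from vision alone."""
    scene = understand_scene(camera)          # the unsolved "human-like vision" part
    return WorldModel(obstacles=scene["objects"], lanes=scene["lanes"])


# Stubs so the sketch runs; a real system would replace each of these.
def radar_detector(x): return []
def lidar_detector(x): return []
def sonar_detector(x): return []
def camera_detector(x): return []
def localize(gps, imu, hd_map): return (0.0, 0.0, 0.0)
def understand_scene(camera): return {"objects": [], "lanes": []}


if __name__ == "__main__":
    # Both produce the same kind of WorldModel; the difference is what they need.
    print(fused_perception("cam", "radar", "lidar", "sonar", "gps", "imu", {"lanes": []}))
    print(vision_only_perception("cam"))
```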
It seems strange that, with all the computing power and memory now available, there is no strong effort to understand human vision in the terminology of Computer Science and to build artificial systems that work in the same sort of way.
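As a hint of what that terminology might look like at the very lowest level, here is a toy feedforward sketch in plain NumPy, loosely analogous to the early ventral stream: oriented edge detection (roughly, simple cells), then pooling for shift tolerance (roughly, complex cells). It is an illustration of the vocabulary, not a model of human vision; every name in it is made up for the example.

```python
# Toy sketch of a biologically inspired visual hierarchy; illustrative names only.
import numpy as np


def oriented_edge_filters():
    """Four 3x3 derivative filters: a crude stand-in for oriented simple cells."""
    h = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    return [h, h.T, (h + h.T) / 2, (h - h.T) / 2]


def filter2d(img, k):
    """Valid-mode 2-D cross-correlation (what most vision code calls 'convolution')."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out


def pool(feature_map, size=2):
    """Max-pooling: tolerate small shifts, as complex cells are thought to do."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    fm = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return fm.max(axis=(1, 3))


def visual_hierarchy(image):
    """One feedforward pass: edges -> shift tolerance -> a coarse scene descriptor."""
    edge_maps = [np.abs(filter2d(image, k)) for k in oriented_edge_filters()]
    pooled = [pool(m) for m in edge_maps]
    # A real system would keep stacking layers; this stops at a crude summary.
    return np.array([p.mean() for p in pooled])  # orientation-energy signature


if __name__ == "__main__":
    frame = np.random.rand(64, 64)     # stand-in for one camera frame
    print(visual_hierarchy(frame))     # four numbers: energy per orientation
```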