Consider Wired.com’s 8 April 2018 article “The Never-Ending Self-Driving Car Project”, also titled “When Will Self-Driving Cars Be ‘Ready’?”. Wired wants sponsorship from the self-driving vehicle industry, so to some extent it needs to lick the boots of that industry. With that in mind, it’s not completely surprising that this latest article on self-driving cars is a masterpiece of wily spin. Yet at the same time it reveals important facts. Paragraph 3 explains:
…people want to know when autonomous vehicles will get here, when they will be ready. Here’s the unsatisfying but correct answer: never.
This seems dire, but then Wired explains:
“The technology is constantly being updated,” says Nidhi Kalra, a roboticist who co-directs the Rand Corporation’s Center for Decision Making Under Uncertainty. “Sometimes we will talk about it as if, ‘We have this self-driving car, we have this product.’ But with software updates, there’s a new vehicle every week.”
So is there a problem? Not really…
This is what differentiates the autonomous vehicle from even the most advanced cars rolling off the production lines in places like Detroit: so. much. software.
The software needs to be frequently updated, like most software does…
And just like your iPhone, your Snap app, or your Tesla, these cars have code that will get updated, and updated a lot.
And this is nothing to worry about. In fact, it shows the self-driving software providers are responsible:
“Any product is going to be improved over time,” says Mike Wagner, co-founder and CEO of Edge Case Research, which helps robotics companies build more robust software. “That’s life-cycle maintenance in any system.”
So what sorts of updates are we talking about, here, for self-driving cars?
If, say, Waymo wants to expand a (theoretical) taxi service from this neighborhood in Atlanta to that neighborhood in Atlanta, it will need to update its software.
Okay, but I thought the car was controlled by AI – artificial intelligence. Why is a software update needed if a self-driving taxi is going to drive in a different neighborhood?
Well, maybe it needs a map of the new neighborhood. But wouldn’t it already have all the maps it needs, in fact maps of the whole region or even country (like I do, in my car’s glove box)?
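To make the point concrete: a map is data, not software. In a sanely built system, covering a new neighborhood would be a data load, not a code change. Here is a minimal sketch of that distinction – every name and file in it is hypothetical, invented purely for illustration:

import json
from pathlib import Path

class MapStore:
    """Holds road-network data keyed by neighborhood name (hypothetical)."""

    def __init__(self):
        self._maps = {}

    def load_neighborhood(self, map_file):
        # Reading a new map file changes what the system knows,
        # not how the system works. No code is modified.
        data = json.loads(map_file.read_text())
        self._maps[data["neighborhood"]] = data

    def roads_in(self, neighborhood):
        return self._maps[neighborhood]["roads"]

# Simulate the vendor shipping a new neighborhood as plain data.
Path("midtown_atlanta.json").write_text(json.dumps({
    "neighborhood": "Midtown Atlanta",
    "roads": ["Peachtree St NE", "10th St NW"],
}))

store = MapStore()
store.load_neighborhood(Path("midtown_atlanta.json"))
print(store.roads_in("Midtown Atlanta"))  # ['Peachtree St NE', '10th St NW']

Nothing in MapStore had to be rewritten to cover Midtown Atlanta. If covering a new neighborhood instead requires new software, the “intelligence” is evidently baked into the code, neighborhood by neighborhood.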
Anyway, reading on…
If, say, General Motors wants to start offering autonomous vehicle riders the chance to make mid-trip pit stops at Starbucks, that’s a software update, too.
Whoaah! Wha? That needs a software update too? The system is supposed to be intelligent. In the (actually intelligent) human-driver case, this is what happens: to get the cab to stop at Starbucks, the passenger simply asks the driver to stop at Starbucks. The equivalent in the AI-controlled cab is that the passenger types in “Please stop at Starbucks on <name goes here> Street”, or voice recognition might be good enough. But no software update is needed for this. If the AI-cab is intelligent – even minimally intelligent – its existing software would easily handle a request about where to stop.
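Again, to make it concrete, here is a toy sketch – every name in it hypothetical – of why a mid-trip stop should be ordinary input to software that already exists, not a change to that software:

from dataclasses import dataclass, field

@dataclass
class Trip:
    origin: str
    destination: str
    stops: list = field(default_factory=list)

    def add_stop(self, place):
        # Handling a mid-trip request is ordinary input handling,
        # not a change to the program itself.
        self.stops.append(place)

    def route(self):
        return [self.origin, *self.stops, self.destination]

trip = Trip(origin="Home", destination="Airport")
trip.add_stop("Starbucks on Main Street")  # the passenger's request, typed or spoken
print(trip.route())  # ['Home', 'Starbucks on Main Street', 'Airport']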
But anyway, the AI-cab apparently does need a software update, and lots of them. What is the equivalent human case? Not learning from a book or being told something – that is not a software update. A human could look at a street map and find how to get to the right street, or could ask someone. That’s a standard type of human learning, and no human software update is needed for it. The human already has all the software needed to do that, and in fact to do all learning: the human system is capable of general learning. So humans don’t need software updates at all.
But suppose humans did need them. What would a human software update amount to? Well, what happens in a computer software update? A programmer – an external cognizant entity, an outside observer with his or her own knowledge – decides to make causal changes to the computer’s software based on what the observer knows.
What is the human equivalent of this? A brain surgeon opens up the skull and re-wires the human brain. That changes its causality. So the human equivalent of the self-driving software update is brain surgery. But humans don’t need brain surgery to learn things. Why then does the AI self-driving system need the computer equivalent of brain surgery?
Or to put the question another way: is AI self-driving software really intelligent, or is the claim of intelligence simply disingenuous fiction spouted by latter-day snake-oil vendors who have found a huge new stream of income?
And if the claim of intelligence is humbug – and it is – AI self-driving systems are really dangerous.
Anyway, reading on…
If, say, any autonomous vehicle built five years ago wants to work today, it needs an upgrade—there will be new car models to recognize, new traffic patterns to negotiate, maybe new, climate-changed weather patterns to contend with.
So the AI system needs the equivalent of human brain surgery whenever anything new comes along. I suppose this need creates a wonderful income stream for brain-surgeon-equivalents, i.e., programmers. I wonder whether programmers really want a machine to learn by itself. Intrinsic learning would imply a much-reduced need for programmers. Hmm.
But reading on…
More than half a million lines of code will power the various systems and algorithms that could one day help self-driving cars go anywhere. That includes localization systems, overlaid with high-definition maps to help the vehicles understand where they are. And perception systems, which help vehicles determine exactly what’s going on around them (Is that really a person? Should I expect her to walk in front of the vehicle?) And planning systems, which synthesize all that info and actually chart the vehicle’s journey from this intersection to that one. Oh, and the software that actually makes the thing move without a foot to push a gas pedal or a hand to guide a steering wheel.
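For what it’s worth, the stack Wired lists – localization, perception, planning, actuation – boils down to a loop like the following. This is a bare-bones sketch; every name in it is mine, an illustration of the described architecture and not any vendor’s actual code:

def localize(sensor_frame, hd_map):
    """Estimate the vehicle's pose against a high-definition map."""

def perceive(sensor_frame):
    """Detect and classify nearby objects (people, cars, ...)."""

def plan(pose, objects, destination):
    """Chart a trajectory from the current pose toward the destination."""

def actuate(trajectory):
    """Turn the planned trajectory into steering and throttle commands."""

def drive_loop(sensor_frames, hd_map, destination):
    # The systems Wired lists, run in sequence on every sensor frame.
    for frame in sensor_frames:
        pose = localize(frame, hd_map)
        objects = perceive(frame)
        trajectory = plan(pose, objects, destination)
        actuate(trajectory)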
Okay, so existing software in self-driving cars helps the vehicles “understand where they are”. Since this Wired article is for the general public, Wired must be using the word “understand” in the common sense, not in some special technical sense that has nothing even remotely to do with human understanding. So Wired is saying that AI self-driving cars understand – in the human sense of “understand” – where they are. Well, this is totally false. Wired knows it’s false. So does all of AI research. The self-driving car doesn’t know or understand anything.
But reading on…
“The environment isn’t static,” says Forrest Iandola, the CEO of the startup DeepScale, which builds perception systems. “Even if you, in theory, have a perfect system for today in a certain location, that becomes stale.”
So the AI self-driving system can’t learn a jot. It can’t learn anything. The programmer has to do brain surgery via software updates for everything that’s new. The AI system doesn’t have even the most elementary feature of human intelligence – the ability to learn, to intrinsically learn, to learn by itself. The self-driving system is not even remotely intelligent.
And what is this about perception? “DeepScale … builds perception systems”. In an article intended for general readership, the term “perception” can only mean human-like perception. So DeepScale builds computer systems with human-like perception? No. To say so is utterly false. What DeepScale makes has nothing even remotely like human perception. But readers will think it does, and they will be sucked into the vortex of fiction that AI has been fabricating and stirring for the past 60 years.
But reading on…
Vehicles will also constantly encounter new situations on the roads, and contend with obstacles engineers might never have dreamed of. “As soon as you turn any sensor to face the outside environment, the number of different things it could see is on the order of the number of permutations of atoms you could see in the universe,” says Iandola. A bunch of tigers escaped the zoo? Time to train self-driving on tiger images—and update.
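Note what “train … and update” means in practice: an engineer, offline, adds labeled tiger images to the training set, refits the model, and pushes new parameters out to the fleet. Here is a toy sketch of that workflow – all of it hypothetical, for illustration – and note who does the learning in it: the engineer and the training pipeline, never the deployed car:

class ToyClassifier:
    """A stand-in for a perception model: it recognizes only labels
    it was trained on. Entirely hypothetical, for illustration."""

    def __init__(self):
        self.known_labels = set()

    def fit(self, labeled_examples):
        # "Training" here is just remembering which labels exist.
        self.known_labels = {label for label, _ in labeled_examples}

    def recognizes(self, label):
        return label in self.known_labels

training_set = [("pedestrian", b"..."), ("car", b"...")]
model = ToyClassifier()
model.fit(training_set)
print(model.recognizes("tiger"))   # False: the escaped tiger is invisible

training_set.append(("tiger", b"..."))  # an engineer adds labeled tiger images
model.fit(training_set)                 # the model is refit offline
print(model.recognizes("tiger"))   # True, but only after an update is pushed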
So now Wired agrees that the AI systems it is reporting on are not intelligent and require brain surgery by some external human, some cognizant observer with his or her own external knowledge of the world, to give the system any chance of adequately dealing with anything new.
But then Wired reports Iandola as saying self-driving car sensors “see”. They don’t see. And they don’t perceive, and the cars don’t understand. And the Wired techno-gurus and all of AI research and all the adequately informed spinmeisters know it.
Finally, Wired issues a word of semi-warning about what are in fact congenitally brain-dead AI self-driving software systems:
But some of these fixes will have to happen much more quickly—if there’s a bug sending cars careening into barriers, or opening a backdoor to hackers, for example. Wagner’s company, Edge Case Research, is working on ways to speed up that process, to get important robotics safety updates proven out and patched quickly.
And what’s more:
Experts say prepping for … the constantly updated autonomous car, starts now. Smart engineers should be building in many software entry points, so they can validate separate parts of the system. Can they sort what’s going on in the sensor fusion system from the localization system? Can they quickly diagnose what’s wrong?
Of course not, and the testing costs money, due care costs money, and costs reduce profit, so shareholders are not happy. And after all, shareholders being happy is the only thing that really matters. That’s why little will be done about the deceptions and the relentless hype and spin, and people will continue to die.