It's been a bit of a hiatus since the last The Latest Lies of Elon Musk; that was issue 8, last year. There are just so many lies that one gets overwhelmed. What counts as a lie here, though, includes anything Musk knows is ludicrously optimistic and/or technically impossible at present.
The latest lie of Elon Musk, or one of the latest anyway, features the Autopilot subsystem Smart Summon:
…Tesla is now more than eight months overdue on technology to enable cars to drive in parking lots—a feature that has a projected top speed of 5mph (8 km/h).
So Smart Summon is coming sometime soon? Here's a prediction: a half-competent or better Smart Summon will never come. We'll get excuses, then some half-baked pseudo-autonomous kludge will be foisted on Tesla drivers and the public at large, a spaghetti of work-arounds that will never be able to navigate busy parking lots without liquidizing infants and ring-barking trees.
One theory about Smart Summon doing the rounds actually seems to make a little bit of sense. Mr Musk is using the parking lot environment, along with Smart Summon's snail-like speed, to test and iron out the legions of bugs in Autopilot in order to bring it up to full freeway self-driving.
This idea has problems. First, a busy supermarket parking lot is a really complex microcosm of chaos: non-standard, faded or contradictory road markings and signage; reserved spaces; unmarked or unusually marked pedestrian crossings; road surfaces of bitumen, concrete, paving stones or brick, and sometimes even wood, dirt, grass, sea shells, marble chips or road metal. And that's just for starters. Compare that to the freeway.
Second, also unlike the freeway, a parking lot is as much a pedestrian environment as a vehicular one. And pedestrian families abound there. Children run out from behind SUVs, shoppers push laden trolleys right at you, daring you in your brand new Tesla to run them over, drivers are about to open doors, vehicles are about to back out. Trees, bushes, cones and barriers of numerous types might be anywhere. Current Smart Summon has trouble distinguishing shadows from differently colored road surfaces.
A busy parking lot obviously needs close attention to seemingly small things: a puff of smoke from an exhaust pipe, a reversing light coming on, a car door closing, a car door opening, the noise of a trolley, the noise of a horn, etc., etc., etc. It needs real care, it needs dealing with uncertainty, and most of all it needs anticipation of possibilities usually absent from freeway or city driving. Certainly, some of these myriad issues can arise in city driving, but the busy parking lot concentrates them all at once.
Yet even so, surely the parking lot plus its families have value for field-testing the features needed to make Autopilot autonomous. Just think legally. Much better to flatten infants at 4 miles an hour than at 60. And perhaps much better to flatten them on private property than on a public road.
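The arithmetic behind that speed comparison is worth making explicit. Kinetic energy grows with the square of speed, so the gap between 4 mph and 60 mph is far larger than the fifteen-fold speed difference suggests. A minimal sketch (the speeds are taken from the comparison above; the mass cancels out of the ratio):

```python
# Kinetic energy: KE = 1/2 * m * v^2, so for the same car (same mass m)
# the energy delivered in a collision scales with v squared.
low_mph, high_mph = 4, 60

# The mass term cancels when comparing the same vehicle at two speeds.
ratio = (high_mph / low_mph) ** 2

print(f"A collision at {high_mph} mph carries {ratio:.0f}x "
      f"the kinetic energy of one at {low_mph} mph")
```

So a parking-lot collision at Smart Summon speeds involves roughly 1/225th of the energy of a 60 mph freeway impact, which is presumably part of the legal calculus.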
So you'd probably have to say there's possibly some merit in testing Autopilot features in parking lots vis-à-vis public roads. But there's a key problem with the "deep learning" approach used to train the artificial neural nets at the heart of current self-driving AI.
A net is usually trained to recognize a certain specific type of item: curb, road surface, tree, pedestrian, car, ambulance. There are so many different types of thing in the parking lot that solving the self-driving problem with deep learning seems intractable, and the difficulty of fusing all those recognizers grows combinatorially. More fundamentally, this seems to be the problem: human learning and decision-making obviously aren't deep learning processes. For a start, there's no such thing in humans as backpropagation. And ANNs are brittle: small, irrelevant changes to an input can flip their output entirely.
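To make the mechanism under discussion concrete, here is a toy sketch of backpropagation: a tiny two-hidden-unit network trained by gradient descent on XOR, the classic function a single linear unit cannot learn. This is an illustration of the training mechanics deep learning relies on, not anything resembling a production self-driving stack; the architecture and learning rate are arbitrary choices for the demo.

```python
import math
import random

random.seed(0)

# XOR dataset: inputs and target outputs.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Weights for a 2-2-1 network, initialised small and random.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j])
         for j in range(2)]
    y = sigmoid(sum(w * hj for w, hj in zip(w_o, h)) + b_o)
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

LR = 2.0
initial = mean_loss()
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        # Backpropagation: the chain rule applied layer by layer,
        # pushing the output error back through the weights.
        d_y = 2 * (y - t) * y * (1 - y)
        d_h = [d_y * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_o[j] -= LR * d_y * h[j]
            for i in range(2):
                w_h[j][i] -= LR * d_h[j] * x[i]
            b_h[j] -= LR * d_h[j]
        b_o -= LR * d_y
final = mean_loss()
print(f"loss: {initial:.3f} -> {final:.3f}")
```

Note how narrow the result is: the net ends up fitting exactly the four input patterns it was shown. Nothing in the procedure resembles how a child learns what a pedestrian is, and there is no biological analogue of the error signal being propagated backwards through the weights.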
In any case, with full self-driving there really wouldn't be any need for a separate summon feature. The main system would simply do whatever is needed for whatever driving purpose. So this is the prediction about Smart Summon: nothing like a fully autonomous version will ever be released.
The bigger issue: Autopilot is wildly unsafe. And it's not a matter of inadequate hardware; it's a matter of theory. The AI research project doesn't know how human-like perception works, and human-like perception is what's needed.
Why, for example, do Teslas have 8 cameras, 12 ultrasonic sensors and a radar? What's wrong with two cameras, like we mere humans have? In a sense Musk is right: lidar is beside the point. But so are 8 cameras, ultrasonics and radar. I've got a fossil trilobite half a billion years old. It's got dozens of cameras. The trilobite's environment was really simple. We have two cameras. If Autopilot had anything like human perception, surely two cameras would be all it needed, too.
Well, we (we AI software engineers, or someone, somewhere, probably in China) will just have to work out, and then apply, the core principles of human perception. Until then, Autopilot, Smart Summon included, is little better than a study in criminal negligence.