The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that any one of them failing could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death.

The sources cited by The Information say Uber has determined that the problem was in this decision layer: the system was tuned to ignore objects it should have attended to. Herzberg appears to have been detected, but classified as a false positive.
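The reported failure mode can be illustrated with a toy sketch. Everything here is invented for illustration (the `Detection` class, the confidence scores, the thresholds); Uber's actual pipeline is not public. The point is only that a false-positive filter tuned too aggressively can discard a real obstacle:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier's best guess for the object
    confidence: float  # score in [0, 1] from the perception stack

def objects_to_track(detections, threshold):
    """Keep only detections scoring above the false-positive threshold."""
    return [d for d in detections if d.confidence >= threshold]

# One hypothetical frame: a wind-blown bag and a real pedestrian
# that the classifier happens to score poorly.
frame = [
    Detection("plastic bag", 0.20),
    Detection("pedestrian", 0.55),
]

# A moderate threshold keeps the pedestrian...
print([d.label for d in objects_to_track(frame, 0.40)])  # ['pedestrian']
# ...but an aggressive one discards it as a false positive.
print([d.label for d in objects_to_track(frame, 0.60)])  # []
```

The trade-off is real in any detection system: lower the threshold and the car brakes for shadows and bags; raise it and it risks ignoring something that matters.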

Autonomous vehicles have superhuman senses: lidar that stretches out hundreds of feet in pitch darkness, object recognition that tracks dozens of cars and pedestrians at once, radar and other systems to watch the road around it unblinkingly. But all these senses are subordinate, like our own, to a “brain” — a central processing unit that takes the information from the cameras and other sensors and combines it into a meaningful picture of the world around it, then makes decisions based on that picture in real time.

This is by far the hardest part of the car to create, as Uber has shown. It doesn’t matter how good your eyes are if your brain doesn’t know what it’s looking at or how to respond properly.