Autonomous Vehicle Malfunctions May Not Be So Complicated After All

NTSB’s Final Report on Pedestrian Fatality Involving an Uber AV Highlights Obvious Programming Missteps

On a dark street in Tempe, Arizona, just before 10 p.m. on March 18, 2018, an Uber vehicle being tested in autonomous mode hit and killed a pedestrian.  This was the first pedestrian fatality involving an autonomous vehicle, and it triggered a media firestorm that caused Uber to suspend its autonomous vehicle program for nine months as it worked with the NTSB to understand the causes of the crash.  With the NTSB's adoption of its final report on the crash on November 19, that work is now complete.

The NTSB’s final report paints a vivid picture of programming and human missteps that belies the argument commonly advanced in legal scholarship about AV liability — that crashes involving AVs will be impossible for the judges, juries, and doctrines that make up our current system of tort law to “understand.”  Indeed, the errors that led to the crash were all too simple.

Each of the autonomous vehicle fatalities that have occurred so far has involved several overlapping missteps, and the Uber crash was no different. It did not take long for investigators to discover that the pedestrian, Elaine Herzberg, was high on methamphetamine and marijuana when she jaywalked across an unlit portion of the road.  She did not react in any way to the approaching car. (NTSB, Human Performance Group Chairman’s Factual Report, HWY18MH010, Nov. 5, 2019, at 16.)

The Uber’s safety operator, meanwhile, was plainly not paying attention to the road and did not see Herzberg until a fraction of a second before the crash; police investigators determined that she had been watching The Voice on her phone.[1] Much of the NTSB’s final report focuses on this flagrant lapse and the underlying corporate culture that helped make it possible.[2]

But the performance of the algorithm itself initially presented something of a puzzle.  The car’s various sensors detected Herzberg five and a half seconds before the crash (plenty of time to slow down), but the algorithm struggled to decide what exactly it was seeing as it sped toward the unknown object, and did not appreciate the need to slow down or change course until a second before impact, by which time it was too late.

Many scholars and commentators have argued that applying traditional principles of tort law to accidents involving autonomous vehicles is undesirable or impractical, in part because algorithms will be impossible for regular people like judges and jurors to understand.[3]  Algorithms, after all, are not programmed as a series of if-then instructions.  Instead they are, broadly speaking, told to pursue goals within a set of constraints, allowing them to “learn” how to accomplish tasks by trial and error.  They thus display “emergent behavior,” acting in ways their human programmers neither instructed nor, in some cases, even anticipated.  The Uber’s inscrutable indecision as it hurtled towards Herzberg might at first have been taken to support the argument that tracing algorithmic flaws back to some sort of human fault or negligence is a fool’s errand.

The NTSB’s final report on the crash seriously undermines this argument, at least as it applies to this particular incident.  Instead of an erratic algorithm whose behavior can’t be explained after the fact, the NTSB highlighted specific, easily understandable and, in retrospect, obviously negligent programming decisions made by ordinary humans.

Most glaringly, the report notes that the algorithm was not programmed to understand that people sometimes jaywalk.  The system was programmed to classify objects it detected in the road and to then make assumptions about those objects’ future position based on both its record of past positions and inferences about how objects of a given type usually behave.  Pedestrians detected in crosswalks were, sensibly enough, assigned the goal of crossing the street.  But pedestrians in the road anywhere other than a crosswalk were not assigned a goal, meaning that the system could only guess where they might be headed based on its own observations of their past positions.  As the NTSB put it, “the system design did not include a consideration for jaywalking pedestrians.”[4]
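To make that flaw concrete, here is a minimal sketch in Python of what goal-based prediction along these lines might look like.  The class and function names are hypothetical illustrations, not drawn from Uber's actual code: only a pedestrian detected in a crosswalk receives a crossing goal, so a jaywalker is left to be tracked like any other unexplained moving object.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    Position = Tuple[float, float]

    @dataclass
    class DetectedObject:
        classification: str          # e.g. "pedestrian", "bicycle", "vehicle", "other"
        in_crosswalk: bool
        history: List[Position] = field(default_factory=list)

    def assign_goal(obj: DetectedObject) -> Optional[str]:
        """Hypothetical goal assignment mirroring the flaw the NTSB describes:
        only pedestrians detected in a crosswalk receive a crossing goal."""
        if obj.classification == "pedestrian" and obj.in_crosswalk:
            return "cross_street"
        # A jaywalking pedestrian falls through to here and gets no goal,
        # so the planner can only extrapolate from past observed positions.
        return None

    def predict_next_position(obj: DetectedObject) -> Optional[Position]:
        """Without a goal, prediction rests entirely on the object's own
        observed track; with fewer than two observations there is nothing."""
        if len(obj.history) < 2:
            return None
        (x1, y1), (x2, y2) = obj.history[-2], obj.history[-1]
        return (2 * x2 - x1, 2 * y2 - y1)   # simple linear extrapolation

    # A pedestrian crossing mid-block is treated as a generic moving object:
    jaywalker = DetectedObject("pedestrian", in_crosswalk=False,
                               history=[(0.0, 0.0), (0.5, 0.1)])
    print(assign_goal(jaywalker))            # -> None
    print(predict_next_position(jaywalker))  # -> (1.0, 0.2)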

Even without a goal, the system could predict an object’s future path based on past observations of its position.  Unfortunately, this process too contained glaring flaws.  If the system changed its classification of an object, its data about the object’s prior position did not carry over.  In other words, each time the system changed its mind about whether Herzberg was a bicycle, an object, or a vehicle, it essentially “forgot” all it had learned about where she had been and where she was going, and started observing her position and direction of travel with a clean slate.[5]
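Again as an illustration only (the tracker below is a hypothetical sketch, not Uber's implementation), the reclassification flaw amounts to something like this: every time the label changes, the positional history the system would otherwise extrapolate from is thrown away.

    from typing import List, Optional, Tuple

    Position = Tuple[float, float]

    class Track:
        """Hypothetical tracker mirroring the flaw the NTSB describes:
        a change in classification wipes the positional history."""

        def __init__(self) -> None:
            self.classification: Optional[str] = None
            self.history: List[Position] = []

        def update(self, classification: str, position: Position) -> None:
            if classification != self.classification:
                # Reclassified: start over with a clean slate, losing
                # everything learned about where the object was heading.
                self.history = []
                self.classification = classification
            self.history.append(position)

        def can_extrapolate(self) -> bool:
            return len(self.history) >= 2

    track = Track()
    track.update("vehicle", (0.0, 0.0))
    track.update("bicycle", (0.5, 0.1))   # classification flips, history resets
    track.update("other",   (1.0, 0.2))   # flips again, resets again
    print(track.can_extrapolate())        # -> False: still only one observation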

Ironically, NTSB investigators found that the relatively mundane driver assistance systems with which the Volvo had come factory-equipped would have prevented the accident.[6]  These systems included forward collision warning, which alerts the driver of the need to brake to avoid a collision, and automatic emergency braking, which stops the car if the driver does not respond to the warning.  Such features are fairly commonplace on new cars these days.
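The logic of those factory-installed systems is, by contrast, simple enough to sketch in a few lines.  The function and thresholds below are invented for illustration and are not taken from Volvo's specifications; the point is only the warn-then-brake sequence the NTSB describes.

    def collision_avoidance_step(time_to_collision: float,
                                 driver_braking: bool,
                                 warning_threshold: float = 2.5,
                                 braking_threshold: float = 1.5) -> str:
        """Illustrative warn-then-brake sequence: warn first, then brake
        automatically if the driver has not responded in time."""
        if time_to_collision <= braking_threshold and not driver_braking:
            return "automatic emergency braking"
        if time_to_collision <= warning_threshold:
            return "forward collision warning"
        return "no action"

    print(collision_avoidance_step(2.0, driver_braking=False))  # -> warning
    print(collision_avoidance_step(1.0, driver_braking=False))  # -> braking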

In an article published in November in the Journal of Tort Law, I argued that fears about the tort system’s inability to make sense of collisions involving autonomous vehicles were overblown, and that existing doctrines are capable of resolving such disputes.[7]  The new details released by the NTSB support that view, as they highlight several relatively simple and (in hindsight) glaring programming errors that caused the AV to barrel into Herzberg at 39 mph despite having detected her more than five seconds earlier.

Whether these facts represent negligence or a defective design, they suggest that the autonomous vehicles that may one day shuttle us around will, like most things created by humans, sometimes malfunction.  Whether these malfunctions will someday involve emergent behavior too esoteric for us mortals to understand is an open question.  So far, the answer appears to be no.

[1] Local prosecutors have not yet ruled out the possibility of criminal charges being brought against the vehicle’s operator.

[2] Uber removed backup operators from its cars and paid insufficient attention, the NTSB noted, to “automation complacency,” a widely known phenomenon that results from humans’ poor performance on tasks requiring “passive vigilance.” For example, investigators determined that the safety operator had made the exact same trip 73 times without incident.

[3] See, e.g., Kenneth S. Abraham & Robert L. Rabin, Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era, 105 Va. L. Rev. 127, 144 (2019); Bryan H. Choi, Crashworthy Code, 94 Wash. L. Rev. 39, 44 (2019).

[4] Ensar Becic, NTSB, Vehicle Automation Report, HWY18MH010, Nov. 5, 2019, at 12.  This has since been changed, so that Uber’s automation system now understands that “jaywalking is . . . a possible pedestrian goal.”  NTSB Report at 30.

[5] Id. at 8 (“[I]f the perception system changes the classification of a detected object, the tracking history of that object is no longer considered when generating new trajectories.”); id. at 13.  This too has since been fixed.  Id. at 30.

[6] David Pereira, NTSB, Vehicle Factors Group Chairman’s Factual Report, HWY18MH010, Nov. 5, 2019, at 9-10.  These features had been turned off by Uber so as not to interfere with its testing.

[7] Alexander B. Lemann, Autonomous Vehicles, Technological Progress, and the Scope Problem in Products Liability, 12 J. Tort L. 157 (2019).
