This week the New York Times published a fascinating look at the latest iteration of Tesla’s automated driving technology, which the company calls “Full Self Driving.” Reporters and videographers spent a day riding with Tesla owner Chuck Cook, an airline pilot who lives in Jacksonville, Florida and has been granted early access to the new technology as a beta tester. What they found was, to my eye anyway, disturbing.
Mr. Cook’s Tesla navigated a broad range of city streets, selecting a route to a destination, recognizing and reacting to other cars, seeing and understanding traffic lights, and even making unprotected left-hand turns—a routine situation that autonomous vehicles struggle to handle. But the car also behaved erratically at times, requiring Mr. Cook to take over and correct its course. In one instance it veered off the street and into a motel parking lot, almost hitting a parked car. In another, it tried to make a left turn onto a quiet street but then, fooled by shade and branches from an overhanging tree, aborted the turn and ended up heading into oncoming traffic on a divided street. These incidents occurred in a single day of testing.
It is worth considering the experience of the Times reporters in the broader context of autonomous vehicle development, something the article largely fails to do. Continue reading “A Glimpse into Tesla’s New ‘Full Self Driving’ Technology”
Flooding is the most common and most costly natural disaster in the United States, and the toll it takes is only expected to grow over the coming years. Rising sea levels, more powerful hurricanes, and more intense rainfall—all worsening thanks to climate change—will displace people from their homes and put increasing strain on the systems we use to address these risks. One of the most important such systems is the National Flood Insurance Program (“NFIP”), which has been in debt to the U.S. Treasury since 2005 and is perpetually derided as “broken.” It seems obvious that a big part of the solution to the problems ailing the NFIP (and to our problem of flood risk more generally) is to move people away from flood-prone areas, and yet the policy reforms intended to address these issues have proven extremely difficult for Congress to enact. In a new paper recently published in the Colorado Law Review, I offer some theories as to why.
A key obstacle to seemingly enlightened policy reform, I argue, is our country’s deep-seated hostility to paternalistic interventions. Drawing on the philosophical literature on paternalism, I note the key feature that makes such laws objectionable to many people: they seek to override individuals’ judgments about what is best for them. Even when such decisions appear to be flawed (like the choice to live in a flood-prone area, for example), they often depend on value judgments, and it is therefore hard to say that a different choice would be objectively rational. It is impossible, for instance, to weigh the emotional value of a home or neighborhood against the expected future costs of flooding in a way that produces an objectively optimal course of action, in the same way there is no objectively correct way to eat, given the emotional and cultural significance of food. Continue reading “As Our Climate Changes, What Can Be Done about Flood Risk?”
Arizona Appellate Court Revives Plaintiff’s Claim that Vehicle that Struck Her was Defective By Virtue of Not Including Autonomous Safety Feature
In recent years, highly autonomous vehicles have acquired a reputation as a technology that is perpetually just a few years away. Meanwhile, their enormous promise continues to tantalize. AVs have the potential to transform American life in a variety of ways, reducing costs both large and small. From virtually eliminating the roughly 40,000 deaths and hundreds of thousands of injuries we suffer in car accidents every year to making it possible to commute to work while sleeping, the potential benefits of AVs are hard to overstate.
Against this backdrop, many torts scholars have expressed concern that imposing liability on AV manufacturers threatens to slow or even deter AV development. When AVs take the wheel, will the companies that make them also take on liability for whatever crashes they can’t avoid? AV development also raises the possibility—much less commonly noticed—of new liability for manufacturers of conventional vehicles. If AVs are significantly safer, will courts and juries come to see conventional vehicles as defective? According to a recent Arizona appellate court opinion, the answer is… maybe so.
Continue reading “As We Approach our Autonomous Future, Will Products Liability Law Hold Us Back or Shove Us Forward?”
NTSB’s Final Report on Pedestrian Fatality Involving an Uber AV Highlights Obvious Programming Missteps
On a dark street in Tempe, Arizona just before 10 p.m. on March 18, 2018, an Uber vehicle being tested in autonomous mode hit and killed a pedestrian. This was the first pedestrian fatality involving an autonomous vehicle, and it triggered a media firestorm that caused Uber to suspend its autonomous vehicle program for nine months as it worked with the NTSB to understand the causes of the crash. With the adoption by the NTSB of its final report on the crash on November 19, that work is now complete.
The NTSB’s final report paints a vivid picture of programming and human missteps that belies the argument commonly advanced in legal scholarship about AV liability — that crashes involving AVs will be impossible for the judges, juries, and doctrines that make up our current system of tort law to “understand.” Indeed, the errors that led to the crash were all too simple. Continue reading “Autonomous Vehicle Malfunctions May Not Be So Complicated After All”