Self-Driving Cars: Liability and Ethical Issues


You’re driving on a mountain road, alone in your car. On one side is a sheer rock wall; on the other, a steep drop-off. Two pedestrians step out in front of you. You cannot stop in time. The only way to avoid them is to swerve and plunge over the drop-off. What do you do?

This unfortunate scenario is highly unlikely, and not something we worry about in day-to-day life. It is, however, exactly the type of scenario that designers of self-driving cars do need to worry about. Should the car take one life (you, the driver) to save two lives (the pedestrians)? What, in general, are the parameters for the car’s behavior when it foresees an unavoidable accident? How much damage or risk to the self-driving car or its occupants should be incurred to minimize damage or risk to others? When an accident occurs, will the car’s designers become liable for the decisions they made or the contingencies they failed to anticipate? The safety of self-driving cars, meaning their ability to avoid accidents, is an emerging concern, and it will be complicated by issues such as these.

Safety of Self-Driving Cars

On May 7, 2016, a semi-truck turned in front of a Tesla Model S that was under Autopilot control. The Autopilot apparently interpreted the truck’s trailer as an overhead sign and did not apply the brakes. The car hit the truck and the Tesla driver did not survive.

On June 30, 2016, Tesla responded in their blog:

“We learned yesterday evening that NHTSA is opening a preliminary evaluation into the performance of Autopilot during a recent fatal crash that occurred in a Model S. This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles.”

Tesla’s response is that, while this was an unfortunate accident, the big picture is that Autopilot is safer than human drivers and will save lives. Perhaps this is true. However, Tesla compared their Autopilot fatality rate with the overall US fatality rate, so I do not consider the comparison definitive. Fatality rates vary greatly as you drill into the data. It would be interesting to compare the Autopilot fatality rate with the rate for the older, wealthier drivers who are more likely to buy a Tesla; we know, for instance, that teenage drivers are almost four times as likely to get into an accident as drivers aged 30 to 69. Further, because self-driving cars are relatively new, there are limitations in the data. Tesla’s accident rate is based on a single accident, so another fatality tomorrow would double it.
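To see why a rate based on a single event is so fragile, here is a minimal Python sketch. The mileage and fatality figures are simply the ones quoted above; the Poisson confidence interval is my own illustrative addition, not part of Tesla’s statement.

```python
# Illustrative only: the mileage and fatality figures are those quoted above.
autopilot_miles = 130e6        # miles driven with Autopilot engaged (approx.)
autopilot_fatalities = 1       # the single known fatality at the time
us_miles_per_fatality = 94e6   # overall US figure cited by Tesla

# Point estimates, expressed as fatalities per 100 million miles
autopilot_rate = autopilot_fatalities / autopilot_miles * 1e8
us_rate = 1 / us_miles_per_fatality * 1e8
print(f"Autopilot:  {autopilot_rate:.2f} fatalities per 100M miles")
print(f"US overall: {us_rate:.2f} fatalities per 100M miles")

# An exact 95% Poisson interval for an observed count of 1 runs from roughly
# 0.025 to 5.57 events, so the true Autopilot rate could plausibly be far
# lower or far higher than the US average.
low_count, high_count = 0.025, 5.57
print(f"95% interval: {low_count / autopilot_miles * 1e8:.3f} to "
      f"{high_count / autopilot_miles * 1e8:.2f} per 100M miles")
```

With only one observed event, the plausible range spans well below and well above the US average, which is why the comparison should not be treated as definitive.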

However, even if self-driving cars are not currently safer than cars driven by humans, at some point they will be. The issue we face now is that we are in a transition period in which a developing technology is in the hands of drivers who may be unfamiliar with its limitations and may place too much trust in it. (I have previously written about the risks of self-driving cars and some of the human factors issues that arise as self-driving technologies are rolled out.)

Liability

Accidents are going to continue to occur, and they will increasingly involve self-driving cars as the technology proliferates. Automakers will face liability if the vehicle was, to whatever extent, in control.

Some automakers are already assuming this liability. Volvo president Håkan Samuelsson recently announced that the company will accept full liability for accidents that occur whenever “one of their vehicles is in autonomous mode” (Volvo, 2016).

Volvo’s Pilot Assist is currently a Level 2 autonomous driving system, meaning that at least two primary control functions are automated and work together to share control with the driver. It is not self-driving. Volvo’s website currently warns:

“Pilot Assist is an aid which cannot handle all traffic, weather and road conditions. The driver must always be observant with regard to the prevailing traffic conditions and intervene when Pilot Assist is not maintaining a suitable speed or suitable distance. Pilot Assist must only be used if there are clear lane lines painted on the road surface on each side of the lane. All other use involves increased risk of contact with surrounding obstacles that are not detected by the function. The driver always bears responsibility for how the car is controlled as well as for maintaining the correct distance and speed, even when Pilot Assist is being used.”

So, it apparently remains to be seen what Volvo’s assumption of liability actually means. It may just be an acknowledgement of reality—Volvo’s realization that they are going to be held liable regardless.

Liability for autonomous vehicles does not represent new legal territory. According to the University of Washington’s Technology Law and Policy Clinic, “Product liability theories are highly developed, given the advance of technology in and out of cars for well over a century, and are capable of covering autonomous vehicles” (Harris, 2015). Automotive safety technology is evolving, and today’s self-driving technologies are an evolutionary step from existing features such as anti-lock brakes, cruise control, blind spot monitoring, and lane keeping assistance. Liability law has kept in step with these improvements and should remain in step as the technology continues to evolve. Within this framework the automaker may assume more liability, and the customer will still be liable for driving errors, which now include misuse of autonomous driving systems.

One of the changes I am seeing is an increasing need for human factors expertise in the courtroom to deal with the issues that will arise regarding the likelihood and effects of driver distraction and the usability of automotive technology. These issues will loom large in assessing the liability associated with the human and the machine (or rather, the company that developed the machine) in real-world situations.

Ethics

Any science fiction fan knows Isaac Asimov’s Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Self-driving cars, which in some respects can be considered similar to robots as they become increasingly autonomous, may have to deal with situations where there is no way to avoid injuring a human being and must then decide how to minimize injury. This complicates application of the First Law when a human can share the robot’s fate. This article began with just such an example: do you drive your car off a cliff to avoid hitting pedestrians?

A recent study, reported on Big Think (Ratner, 2016), reveals an interesting but understandable paradox:

People would prefer to minimize casualties, and would hypothetically rather have the car make the choice to swerve and harm one driver to avoid hitting 10 pedestrians. But the same people would not want to buy and drive such a vehicle. They wouldn’t want their car to not have their safety as the prime directive…. 76% of respondents thought it more moral for a self-driving car to sacrifice one passenger over 10 pedestrians. But if they were to ride in such a car, the percentage dropped by a third. Most of the people also opposed any kind of government regulation over such vehicles, afraid that the government would essentially be choosing who lives and dies in various situations.

A fascinating TedEd video postulates an interesting and not unlikely scenario. You are following a truck, boxed in by an SUV on one side and a motorcycle on the other. A heavy box falls off the truck in front of you. You cannot stop in time. Do you:

  • Swerve into the motorcycle, minimizing the danger to yourself by hitting the lightest object, but putting the motorcyclist at great risk
  • Swerve into the SUV, increasing the danger to yourself by hitting a heavier object, but minimizing the danger to others (because the passengers in the SUV are better protected than the motorcyclist)
  • Hit the box head-on, maximizing the danger to yourself but putting no one else in danger

Many drivers would probably swerve one way or the other. But the TedEd video makes a key distinction between this action and whatever an autonomous car would do: if the driver is in control of the vehicle, the swerve is a reaction that did not incorporate an in-depth analysis of the situation; it was an instant decision that unfortunately put others in danger. But if a self-driving car swerved, that swerve was the result of a deliberate decision made while the car’s software was developed, presumably based on careful consideration. If the car hits the SUV or the motorcyclist, any injuries caused would be, in a sense, premeditated. The victims did nothing wrong, but the car essentially targeted them to protect others as a result of a decision made by a large, cash-rich corporation. What are the liability issues in this situation? Or, if the automaker sidesteps these issues and does not build in accident-avoidance algorithms, could they be considered liable for failing to build a safe product?
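To make concrete how such a “premeditated” choice could be embedded in software, consider the sketch below. The maneuvers, risk estimates, and weighting constant are all invented for illustration; no automaker has published logic like this, and a real system would be far more complex.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    risk_to_occupants: float   # estimated chance of serious harm to the car's occupants
    risk_to_others: float      # estimated chance of serious harm to other road users

# Invented numbers for the boxed-in scenario described above
options = [
    Maneuver("brake and hit the box",      risk_to_occupants=0.6, risk_to_others=0.0),
    Maneuver("swerve into the SUV",        risk_to_occupants=0.4, risk_to_others=0.2),
    Maneuver("swerve into the motorcycle", risk_to_occupants=0.1, risk_to_others=0.9),
]

# The ethically loaded part is this one constant: how much weight the designer
# gives to harm to others relative to harm to the car's own occupants.
OTHERS_WEIGHT = 1.0  # 1.0 treats everyone equally; < 1.0 favors the occupants

def total_cost(m: Maneuver) -> float:
    return m.risk_to_occupants + OTHERS_WEIGHT * m.risk_to_others

chosen = min(options, key=total_cost)
print(f"Selected maneuver: {chosen.name}")
```

The point is that the weighting constant, and every risk estimate feeding it, is a design decision made long before any crash occurs.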

Other issues can arise. What if the car is programmed to prefer hitting a vehicle it deems at-fault (for example, if a vehicle pulls out in front of you, the car hits it rather than swerving into an “innocent” vehicle)? Is the car, as the TedEd video puts it, dealing out “street justice” – potentially enforcing the death penalty for a traffic infraction?

I don’t have the answers to these questions. Ethicists, software designers, regulators, and legal experts will need to grapple with them. But I do foresee these issues coming into play both as companies develop algorithms for self-driving cars and, later, in the courtroom. Mature self-driving technology will offer immense benefits but, like many technologies, it can sometimes fail, it can be used improperly, and, even if properly and ethically designed, it can still do harm.

References

  • Harris, M. (2015, October 12). Why You Shouldn’t Worry About Liability for Self-Driving Car Accidents. Retrieved from IEEE Spectrum: http://spectrum.ieee.org/cars-that-think/transportation/self-driving/why-you-shouldnt-worry-about-liability-for-selfdriving-car-accidents
  • Insurance Institute for Highway Safety. (2016, February). General Statistics. Retrieved July 15, 2016, from iihs.org: http://www.iihs.org/iihs/topics/t/general-statistics/fatalityfacts/state-by-state-overview
  • Ratner, P. (2016, June 26). Is It Ethical to Program Robots to Kill Us? Retrieved from bigthink.com: http://bigthink.com/paul-ratner/would-you-ride-in-a-car-thats-programmed-to-kill-you
  • Volvo. (2016, August 18). Concept 26: Introducing a New Symbol of Automotive Freedom. Retrieved from volvocars.com: http://www.volvocars.com/intl/about/our-innovation-brands/intellisafe/intellisafe-autopilot/c26

 


Dr. Craig Rosenberg is an entrepreneur, human factors engineer, computer scientist, and expert witness. You can learn more about Craig and his expert witness consulting business at www.ui.expert