Friday, February 09, 2018

The ethics of self-driving cars (revisited again)

Continuing the ongoing debate over the hoary problem of how to program autonomous cars ethically, a group of Italian researchers has proposed a practical solution to a well-known conundrum: most people believe that self-driving cars should take a utilitarian approach (i.e. they should seek to maximize the common good, even if that means sacrificing the lives of the car's passengers), yet a sizeable majority also say that they would not ride in an autonomous car that was set to sacrifice its passengers rather than a group of pedestrians.
Their solution involves an "ethical knob", which the driver/owner of the car would set themselves. This knob would have three possible settings:

  1. Extreme altruism (the car would always choose the lives of pedestrians and others over the lives of the car's occupants).
  2. Utilitarian (the car would make logical decisions based on maximizing the good of everyone involved).
  3. Extreme egotism (the car would protect its passengers at all costs).

The researchers suggest that driver-owners should be deterred from the egotistical option by means of insurance premiums, with those who choose altruistic driving paying a much lower premium than those who opt for egotism.
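As a minimal sketch of how such a scheme might be wired up, the knob could be modeled as a simple three-value setting with a premium that rises toward the egotistical end. All of the names and multipliers here are invented for illustration; the researchers only suggest that egoism should carry a higher premium, not any particular pricing:

```python
from enum import Enum


class EthicalSetting(Enum):
    """Hypothetical values for the proposed 'ethical knob'."""
    ALTRUIST = "altruist"        # always favor pedestrians and others
    UTILITARIAN = "utilitarian"  # maximize the overall good
    EGOIST = "egoist"            # protect the car's occupants at all costs


def annual_premium(setting: EthicalSetting, base: float = 1000.0) -> float:
    """Illustrative insurance premium scaled by the knob setting.

    The multipliers are made up for this sketch; the point is only
    that altruism is rewarded and egoism penalized.
    """
    multiplier = {
        EthicalSetting.ALTRUIST: 0.7,
        EthicalSetting.UTILITARIAN: 1.0,
        EthicalSetting.EGOIST: 1.6,
    }[setting]
    return base * multiplier
```

Under this toy pricing, an egoist driver would pay 1600 against an altruist's 700 on a 1000 base, which captures the deterrent the researchers have in mind.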
This is certainly an interesting idea, and it takes the ethical onus off car manufacturers and places it squarely on the shoulders of the human "driver", which is effectively what happens in the real world at the moment. However, I am not sure how well it would work in practice, and I worry about the ramifications of having a one-time decision take away the flexibility of reacting to a specific situation.
It would certainly bring home to drivers/owners the fact that operating a potentially lethal machine like a car does come with ethical considerations. And it also explicitly addresses the issue of insurance on autonomous cars, something I have rarely seen examined.
