As self-driving cars become ever more sophisticated and reliable (despite the odd hiccup from time to time), the era of the autonomous car cannot now be far off. However, an article I read recently reminded me of some of the more thorny ethical issues involved.
For example, we often forget that we humans make ethical decisions all the time as we drive. These are generally of the less drastic and "easier" type, like "should I let a car in if it will improve the flow of traffic, even if it holds me up personally?" or "should I run over that dog, rather than brake or swerve sharply and risk causing an accident?"
But, in the absence of a human agent, an autonomous vehicle would also have to make such split-second decisions, and by the nature of things that decision-making would need to be preprogrammed into the car's software. Artificial intelligence is developing apace, but a car or a computer is still not capable of making those kinds of decisions unaided. This represents a whole level of added complexity over and above relatively simple mechanical things like self-correcting when a vehicle drifts out of its lane, braking automatically if an object is in the way, or warning of vehicles in a driver's blind spot.
And then, of course, there are the even more fraught moral decisions that even humans have difficulty with, such as whether to swerve to avoid a pedestrian even if it puts the driver's own life in danger, or what to do when the choice is between five unknown pedestrians and five known passengers in a car. A self-driving car would need guidelines for those kinds of decisions too, and manufacturers are already encountering mixed messages from their users: people want cars to minimize total harm, but at the same time they don't want cars that might diminish their own safety. It's not a trivial problem.
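To make that tension concrete, here is a toy sketch in Python (the names, numbers, and weights are entirely hypothetical, invented for this post, and certainly not any manufacturer's actual logic) showing how "minimize total harm" and "protect the passengers" can be expressed as the same rule with different weightings:

```python
# Toy illustration only: hypothetical names and weights invented for
# this post, not any manufacturer's actual decision logic.

from dataclasses import dataclass

@dataclass
class Outcome:
    pedestrians_harmed: int
    passengers_harmed: int

def harm_cost(outcome: Outcome, passenger_weight: float) -> float:
    """Weighted harm: passenger_weight > 1 biases the car toward
    protecting its own occupants."""
    return outcome.pedestrians_harmed + passenger_weight * outcome.passengers_harmed

def choose(manoeuvres: dict[str, Outcome], passenger_weight: float) -> str:
    # Pick whichever manoeuvre has the lowest weighted harm.
    return min(manoeuvres, key=lambda m: harm_cost(manoeuvres[m], passenger_weight))

# The dilemma from the text: stay the course (harming pedestrians)
# or swerve (harming the car's own passenger).
options = {
    "stay": Outcome(pedestrians_harmed=5, passengers_harmed=0),
    "swerve": Outcome(pedestrians_harmed=0, passengers_harmed=1),
}

print(choose(options, passenger_weight=1.0))   # "swerve": minimize total harm
print(choose(options, passenger_weight=10.0))  # "stay": strong passenger bias
```

The uncomfortable point of the sketch is that the two demands differ only in the value of a single parameter, and the whole ethical debate is really about what that value should be and who gets to set it.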
So what kind of ethical and moral choices should be programmed into a car? What should the standards be, and who should be responsible for setting them? Should standards be set nationally or internationally? With this in mind, the US government has recently appointed a committee of transportation advisors with the remit of coming up with standards.
There is also an MIT-initiated website called Moral Machine, which describes itself as "a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars". It presents various ethical transportation scenarios involving self-driving vehicles, passengers, pedestrians, animals, etc, and asks users to choose their preferred outcome from a set of choices. The scenarios include decisions that take into account whether the victims are male or female; whether they are old or young; whether or not they are flouting traffic laws; and whether they are homeless, pregnant, overweight, or criminals. It is already yielding some interesting results, including geographical differences between attitudes in North America and elsewhere in the world. Fascinating stuff!