Is Our Ethical Approach To Autonomous Vehicles Flawed?

The potential for autonomous vehicles to do harm is considerable, but research from North Carolina State University suggests that today's approach leaves many ethical issues inadequately addressed.

The central flaw, the researchers suggest, is that most work today fails to approach the issue from the perspective of someone deliberately trying to do harm with autonomous vehicles.

The researchers cite an example of an autonomous vehicle with no passengers that is about to crash into one with five people in it.  It could avoid this collision, but in doing so would hit a pedestrian.  It’s the kind of dilemma often used in ethical discussions around autonomous transportation.  Such discussions typically focus on whether the vehicle should be selfish (and thus protect itself and its cargo), or utilitarian (and protect the most people).  It’s an approach that the researchers believe presents problems of its own.

“Current approaches to ethics and autonomous vehicles are a dangerous oversimplification – moral judgment is more complex than that,” they say. “For example, what if the five people in the car are terrorists? And what if they are deliberately taking advantage of the AI’s programming to kill the nearby pedestrian or hurt other people? Then you might want the autonomous vehicle to hit the car with five passengers.”

Ignorant of motive

In other words, most of the ethical discussions taking place today fundamentally lack any insight into the motives or intent of the vehicle, or indeed its passengers.  It's a flaw the researchers believe can be overcome via something they refer to as the Agent-Deed-Consequence (ADC) model, which can help AI-based systems make moral judgments.

The model judges morality according to three variables:

  1. Is the agent’s intent good or bad?
  2. Is the deed or action itself good or bad?
  3. Is the outcome or consequence good or bad?

This is an approach that the team believes allows for far more nuance than is ordinarily the case, as sketched in the example below.  For instance, while running red lights is generally considered bad, if doing so allows the vehicle to get out of the way of an ambulance, it's generally seen as acceptable.
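To make the structure of the model concrete, here is a minimal Python sketch of an ADC-style evaluation. The three variables mirror the list above, taken from the article; the class and function names, the scoring rule, and the threshold are illustrative assumptions rather than the researchers' actual formulation.

```python
from dataclasses import dataclass

# Minimal sketch of an ADC-style evaluation. The three variables (agent intent,
# deed, consequence) come from the article; the scoring rule and threshold
# below are illustrative assumptions, not the researchers' model.

@dataclass
class MoralCase:
    description: str
    agent_good: bool        # Is the agent's intent good or bad?
    deed_good: bool         # Is the deed or action itself good or bad?
    consequence_good: bool  # Is the outcome or consequence good or bad?

def adc_judgment(case: MoralCase) -> str:
    """Combine the three ADC variables into a rough verdict.

    Assumption: each 'good' variable counts for one point, and two or
    more points make the action morally acceptable overall.
    """
    score = sum([case.agent_good, case.deed_good, case.consequence_good])
    return "acceptable" if score >= 2 else "unacceptable"

# The red-light example from the article: the deed is bad, but the intent
# (clearing a path for an ambulance) and the consequence are good.
ambulance_case = MoralCase(
    description="Run a red light to let an ambulance pass",
    agent_good=True,
    deed_good=False,
    consequence_good=True,
)

print(ambulance_case.description, "->", adc_judgment(ambulance_case))
# Run a red light to let an ambulance pass -> acceptable
```

The point of the sketch is only that the verdict depends on all three variables rather than on consequences alone; any real system would need far richer inputs and a defensible way of weighing them.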

“The ADC model would allow us to get closer to the flexibility and stability that we see in human moral judgment, but that does not yet exist in AI,” the researchers say. “Human moral judgment is stable because most people would agree that lying is morally bad. But it’s flexible because most people would also agree that people who lied to Nazis in order to protect Jews were doing something morally good.”

It's not the final word on the matter by any means, and the researchers accept that more work is needed, both in the study of human moral judgment and through rigorous driving-simulation studies that test the 'ethics' of autonomous vehicle technology, particularly as the use of such vehicles in cyberattacks is a growing threat.  Nonetheless, it's an interesting addition to the field.
