Is AI ready to make life-and-death decisions?

People’s moral compasses rarely point in the same direction, and the direction can shift depending on the situation we are in, how quickly we need to make a decision and the role we play in it. That is the conclusion of a comprehensive new study by a group of researchers in consumer behaviour and behavioural economics at Aarhus BSS, and it raises the question: How do we transfer human morality to artificial intelligence?

15.11.2019 | MICHAEL SCHRØDER

PHOTO: Franck V/Unsplash

While you are reading this, consider the following questions:

Imagine two types of autonomous vehicles, each approaching a fatal situation that will result in the death of either pedestrians or the passengers in the car. One of the autonomous vehicles is programmed to protect itself, its owner and its passengers at all costs. The other one is programmed to protect as many people as possible regardless of the consequences. Which car would you prefer to be in? Which car would you prefer your neighbour to drive around in?    

Utilitarianism:

A philosophical school and moral doctrine which, as a form of consequentialism, holds that one should always act in the way that produces the greatest good, i.e. consequences that are better than those of any other possible action in the given situation, and that benefits as many people as possible.

Source: Gyldendal’s Great Danish Encyclopedia

Previous research in this field has shown that most people are inclined to believe that autonomous vehicles should be programmed to protect as many people as possible. Within philosophy, this moral approach is called utilitarianism (see fact box).

However, the new study challenges this moral approach through a series of experiments in which participants were assigned different perspectives, as either pedestrians or passengers in the car, as well as different time constraints.

"We cannot simply assume that moral decisions that we make intuitively are the ones that we would ultimately go for"

 

Darius-Aurel Frank, PhD student, Department of Management, Aarhus BSS

Cannot be trusted

According to the study, both time and role may affect our morality when we face such dilemmas. The study was conducted among approximately 3,000 individuals in the US and Denmark, spread across seven dilemma experiments involving autonomous vehicles.

“Our study shows that moral decisions aren’t as simple as we thought. We cannot simply assume that moral decisions that we make intuitively are the ones that we would ultimately go for,” says Darius-Aurel Frank, PhD student at the Department of Management, Aarhus BSS, and lead author of the study, which was published in the open access journal Scientific Reports from Nature Research.

He conducted the study together with Associate Professors Polymeros Chrysochou and Panagiotis Mitkidis from Aarhus BSS and Professor Dan Ariely from Duke University, USA. Together they have formed an interdisciplinary research group working on questions about ethical and trustworthy development of artificial intelligence. Now they are raising the question: How should these variations in human morality be addressed in the design of artificial intelligence?    

Win-win

There is no point in discussing whether artificial intelligence – known as AI – is upon us. AI is already here whether we like it or not, and it will only grow in importance in the decades to come as the fourth industrial revolution unfolds.

The idea behind AI is actually a win-win for all of us. Machines can think faster and more extensively than humans ever will, so why not leave burdensome and difficult decisions to the machines, relax, and enjoy faster and more efficient decision-making?

“Because we don’t know what ethical principles artificial intelligence should be trained to apply. That is a very important part of the equation,” says Darius-Aurel Frank.

The Trolley dilemma

When it comes to decisions involving morality and questions of right and wrong, artificial intelligence fails. It must be trained to make moral decisions, and the only input available is human morality.

The researchers’ point of departure is a well-known ethical dilemma from moral psychology, the so-called Trolley dilemma (see fact box). A number of dilemmas were set up in different combinations, inspired by the open source programme Moral Machine (see fact box). The Moral Machine design was used in the study so that the results could be compared with previous findings.

The Trolley dilemma:

A classic ethical dilemma introduced by the English philosopher Philippa Foot in 1967.

The dilemma is this: You are standing by a tram track and see a runaway trolley heading down the tracks towards five railway workers who do not have time to move out of the way. You notice a switch that you can pull to divert the trolley onto a different track that only has one worker on it.

Would you change the direction of the trolley, and thus save five workers at the expense of one?

In a different version of the Trolley dilemma, you have the opportunity to push a very heavy man off a footbridge and onto the tracks below to stop the trolley and save the five workers. In return, you kill the heavy man. What would you do?

If you do nothing, the five workers will be killed in both cases.

Source: The Conversation

The basic setup of all the dilemmas was the same: An autonomous vehicle is heading down a street that a pedestrian is about to cross. The car cannot stop in time; it either hits the pedestrian(s) and kills them, or changes lanes and hits a concrete block on the other side of the road, killing the passenger(s) in the car.

Sympathy for the pedestrian

Participants in the experiments had to choose whether the autonomous vehicle should continue straight or change lanes. To examine whether role and time would affect the decision, the participants were randomly divided into three role groups: a pedestrian group, a passenger group and a control group that observed the situation as outsiders. They were then split into two response-time groups: one had to respond within five seconds (an intuitive decision), the other within 30 seconds (a deliberate decision).
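To make the setup concrete, here is a minimal sketch in Python of the scenario and the 2 (response time) x 3 (assigned role) between-subjects design described above. All names, values and the assignment logic are illustrative assumptions based on the article’s description, not the researchers’ actual materials or analysis code.

```python
from dataclasses import dataclass
from itertools import product
import random

# Illustrative constants based on the article's description (assumed names).
ROLES = ("pedestrian", "passenger", "observer")    # three role groups
TIME_LIMITS = {"intuitive": 5, "deliberate": 30}   # seconds allowed to respond


@dataclass
class Dilemma:
    """One Moral Machine-style scenario: the car either continues straight
    (killing the pedestrians) or swerves into a concrete block
    (killing the passengers)."""
    n_pedestrians: int = 1
    n_passengers: int = 1

    def outcome(self, decision: str) -> str:
        if decision == "continue":
            return f"{self.n_pedestrians} pedestrian(s) killed"
        return f"{self.n_passengers} passenger(s) killed"


def assign_condition(participant_id: int) -> tuple[str, str]:
    """Assign a participant to one cell of the 2 x 3 design
    (seeded by participant id so this toy assignment is reproducible)."""
    rng = random.Random(participant_id)
    return rng.choice(list(TIME_LIMITS)), rng.choice(ROLES)


# Example: enumerate the six experimental cells of the design.
for time_condition, role in product(TIME_LIMITS, ROLES):
    print(f"{time_condition} ({TIME_LIMITS[time_condition]} s) – {role}")
```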

Moral Machine:

An open source programme developed by researchers at MIT, which allows anyone to try out different dilemmas in connection with autonomous vehicles.

The programme was developed after researchers Jean-François Bonnefon, Azim Shariff and Iyad Rahwan published a survey in a 2015 scientific article on moral dilemmas in connection with autonomous vehicles.

Source: Moral Machine

The result of this first experiment was that in most cases, the pedestrian escaped unharmed from the situation while the passenger was sacrificed. However, there were significant differences in people’s moral decisions, depending on the perspectives of the participants and how long they had to respond.

In all three perspectives, the more time the participants had to answer, the more likely they were to sacrifice the pedestrian. The intuitive decision resulted in 21.5 per cent pedestrian deaths on average, while the deliberate decision killed 36.5 per cent of pedestrians on average.

The difference between the perspectives was smaller, but still significant. The pedestrian group sacrificed the pedestrian in 22.8 per cent of the cases, while the passenger group sacrificed the pedestrian in 32.4 per cent of the cases. The control group sacrificed the pedestrian in 30.7 per cent of cases on average.
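For quick reference, the percentages reported above for the first experiment can be collected into a small summary. The snippet below simply restates the article’s figures; it is not the study’s underlying data or analysis.

```python
# Share of cases in which the pedestrian was sacrificed (first experiment),
# percentages as reported in the article.
by_time = {"intuitive (5 s)": 21.5, "deliberate (30 s)": 36.5}
by_role = {"pedestrian": 22.8, "passenger": 32.4, "control": 30.7}

for condition, pct in {**by_time, **by_role}.items():
    print(f"{condition:>18}: {pct:.1f} per cent of pedestrians sacrificed")
```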

Variations on the same theme

In the following experiments, the researchers changed the composition of the dilemmas while keeping the manipulations constant.

In one case, they added more passengers to the car, which resulted in a significant increase in the number of saved passengers. At the same time, the lone pedestrian was sacrificed in almost twice as many cases as before. The largest fluctuation was found between the intuitive and deliberate decision in the pedestrian group. While only 4.3 per cent of participants in the pedestrian group were intuitively prepared to sacrifice the pedestrian, the number increased all the way up to 60 per cent when the decision was deliberate.

Control or no control

In subsequent experiments, the researchers tested how participants reacted to a number of situational factors: when the pedestrian violated the traffic regulations, when there were more passengers or more pedestrians, and when there were children among either the pedestrians or passengers.

In all cases, the hypotheses were confirmed: People are prone to punish offences, the utilitarian moral code is strong, and children’s lives are held in higher regard than those of adults. However, one result took the researchers by surprise.

“In one of the experiments, we moved the lone passenger in the car from the front seat to the back seat. This resulted in a significantly higher number of saved passengers at the expense of pedestrians in both the pedestrian and the passenger role. It suggests that participants in the experiment still believe that a passenger in the front seat has more control over the situation even when it comes to autonomous vehicles. This is interesting because it seems like people haven’t gotten used to the entire concept of autonomous vehicles quite yet,” says Darius-Aurel Frank.

Further research

The researchers conclude that there is much for politicians, philosophers, researchers and, not least, designers of artificial intelligence to discuss.

An algorithm that follows a utilitarian moral doctrine is apparently not enough; something more must be taken into account. Exactly what to incorporate is still up for discussion, and the researchers therefore recommend further research into how moral decisions are made and what influences them.

“The presently studied dilemmas are too simple to fully cover the contemporary challenges autonomous vehicles are facing. For example, the odds of survival would never be the same as assumed in the Trolley dilemma. More realistic ethical decisions with more realistic outcomes would be an obvious field for future research,” says Darius-Aurel Frank.

Facts:

Read the entire scientific article

PhD student Darius-Aurel Frank, Department of Management, Aarhus BSS