In September 2019, an otherwise anonymous Spanish highway became the road to the future. Twenty kilometers of the Corredor Mediterráneo, which links Barcelona with the French border, served as the testing ground for analyzing just how autonomous vehicles and traditional cars can co-exist. The project, called Infra Mix, was financed by the European Union and counted on the participation of the Highway Innovation Section of Grupo Abertis. It ended in May 2020.

Roads are preparing for the arrival of the self-driving car. So are people, though to a lesser degree: according to a 2016 study by the American Automobile Association, three out of every four drivers said they would be afraid to travel in an autonomous car. Legislation, for its part, is being updated to accommodate a reality that has not yet fully arrived. Cars already automate parking and emergency braking, but the totally autonomous car is still a long way from being a reality on our highways. And yet it is already posing moral dilemmas.

It’s known as the trolley problem, and it poses a famous moral dilemma. Suppose you are the driver of a trolley. From a distance of 100 meters you see five people tied to the tracks. If you do nothing, you are going to run over them. Fortunately, there is a side track before you reach them, but someone is tied to it as well. What do you do? Cause greater harm by doing nothing, or lesser harm by taking action? Faced with this moral dilemma, how would you act? What ethical code should artificial intelligence follow? What, in other words, would be the ethics of the autonomous car?

The trolley problem has mostly been used in the study of criminal law, but in recent years experts in the ethics of artificial intelligence have brought it back. And if we replace the trolley with an autonomous car, the possibilities and moral dilemmas multiply. What if on one side of the road there are two old ladies, and on the other a child? An overweight man versus a pregnant woman? Or an alcoholic beggar and a doctor? Could self-driving cars be ethical?

All these variations are hard enough for a human being to resolve on the spot; it is even more difficult to reflect on them and then program them into a machine’s artificial intelligence system. Determining who to save and who to harm, what ethics or morals to apply… Germany is the only country that has approved an ethical code for self-driving cars, one that determines how they should behave in the face of an unavoidable accident where lives are at stake.

That code declares: “In situations where an accident is unavoidable, any classification by personal characteristics (age, sex, physical or mental constitution) is strictly prohibited.” This norm (the ninth of ten in an otherwise quite logical code) is the only one that differs from the results of a study on the subject published in Nature a few years ago. The study posed dozens of dangerous situations, putting participants in the skin (or rather the chassis) of a self-driving car, and asked them to choose the lesser of two evils in a series of scenarios that put their code of ethics to the test. The results provided a measure of human morality.

The experiment is based on the results of an online game, Moral Machine, played by more than 2 million people in dozens of countries. “We saw that there are three elements that people tend to approve most,” said Edmond Awad, a researcher at the Massachusetts Institute of Technology (MIT) and principal author of the study: people try to save humans ahead of animals, the more lives the better, and children over old people (the first difference from the German norms).

Apart from these three universal moral decisions, the research shows specific preferences based on the personal characteristics of those involved in the accident: people prefer to save a doctor ahead of a beggar, or an athletic person over an obese one (another difference from the German code).
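To appreciate how crude such rules look once they have to be written down explicitly, here is a minimal, purely illustrative sketch in Python of how the study’s three near-universal preferences (humans over animals, more lives over fewer, children over the elderly) could be encoded as a simple priority ranking. The `Group` class and `spare` function are hypothetical names invented for this example; this is not code from the study, from any manufacturer, or from any real vehicle, and the age-based rule is exactly the kind of classification that the German ethical code forbids.

```python
from dataclasses import dataclass


@dataclass
class Group:
    """One of the two groups a hypothetical vehicle could spare."""
    humans: int    # human lives in the group
    animals: int   # animal lives in the group
    children: int  # how many of the humans are children


def spare(a: Group, b: Group) -> Group:
    """Choose which of two groups to spare, applying only the three
    near-universal preferences reported by the study, in order:
      1. humans ahead of animals,
      2. more lives ahead of fewer,
      3. children ahead of older people (forbidden by the German code).
    A later rule only breaks ties left by the earlier ones."""
    def rank(g: Group):
        # Python compares tuples element by element, which gives
        # us the priority ordering for free.
        return (g.humans > 0,   # 1. any human life outweighs animals
                g.humans,       # 2. then the number of human lives
                g.children,     # 3. then the number of children
                g.animals)      #    animals only break final ties
    return a if rank(a) >= rank(b) else b


# Example: three adults on one side, two adults and a child on the other.
left = Group(humans=3, animals=0, children=1)
right = Group(humans=3, animals=0, children=0)
print(spare(left, right))  # -> the group that includes the child
```

Even this toy version makes the difficulty visible: every further preference found in the study (doctor over beggar, athlete over obese person) would mean adding another element to the ranking, and each added element is precisely the kind of moral judgement about personal characteristics that the German code says a machine should never make.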

Thanks to geolocation, the study was able to detect certain regional biases. For example, Asians tended to save old people more than Westerners did. In Europe and the United States, athletes were favored over obese people. In southern countries, women were saved more often than men. And in the countries with the greatest social inequality, there was a preference for sparing a pedestrian who looked like an executive over anonymous pedestrians, especially those who looked like beggars.

The results of these studies say more about us than about road safety, but they stimulate interesting debates about questions that must be addressed. Nevertheless, many experts in technology and artificial intelligence stress one point: what is intriguing from a moral point of view may not be so from a technological standpoint. Autonomous cars are still not on our roads, and by the time they are, the technology may have advanced to the point where there are no longer any moral dilemmas; to the point where it will be possible to save everyone.
