According to Martin Mose Bentzen, the benefits of autonomous vehicles can be tremendous. That said, we must determine their behavioural values. Photo: Bax Lindhardt.

Ethical autonomous vehicles

Robot technology and automation
Before driverless cars are released into everyday circulation, they must be equipped with values and ethics so they can make the right choices in critical situations, says an ethics researcher.

In the space of a few years, autonomous vehicles will be a familiar sight on our streets, and like any other road user, they will have to make lightning-fast, and sometimes fateful, decisions. Driverless vehicles must therefore be programmed not only with traffic regulations and knowledge of traffic behaviour, but also with ethical standards. This is the opinion of Martin Mose Bentzen, Associate Professor at DTU Management Engineering.

Ethics are necessary to instruct the autonomous vehicle on how to act if a situation arises where it must choose between two evils. An example could be a situation where the vehicle cannot brake in time and a collision is inevitable. Should the vehicle, for example, swerve away from a cyclist who is not wearing a helmet and instead hit a cyclist who is? The helmeted cyclist has the better chance of surviving the collision, but such a policy is hardly likely to endear cyclists to helmets.

Martin Mose Bentzen, who conducts research into this type of dilemma and other ethical aspects of autonomous vehicles, has published several articles on the topic in recent years. He is investigating the ethical principles that can be programmed into the autonomous vehicles—and their possible consequences.

"The formalization of the philosophical language will culminate in a formal theory, but it is not a formal science in itself. It is a process with elements of craftsmanship and creativity."
Associate Professor Martin Mose Bentzen

The core of his work is the formalization of ethical theories—i.e. translating them into formal logic, a language that makes it possible to program an autonomous system such as an autonomous vehicle or a robot for use in nursing homes. The ethical theories may originate from classical philosophers such as Immanuel Kant (1724-1804) or John Stuart Mill (1806-1873), or from more recent texts on ethical principles.

“Translating into formal logic is a form of interpretation and clarification. Typically, philosophical theory is somewhat vague, so I adopt a more rigorous approach—and when you’ve done that, it isn’t too difficult to program on a computer. The big challenge is delving into Kant’s thoughts about ethics, for example, and finding a language that works in formal logic,” he explains.
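The translation Bentzen describes can be loosely illustrated in code. What follows is a minimal sketch of the general idea only: rules expressed as formal, machine-checkable structures rather than prose. The `Norm` type, atoms such as `red_light`, and the `verdicts` function are all invented for this illustration and are not his actual formalism, which is developed in formal logic rather than Python.

```python
from collections import namedtuple

# A world state is a set of atomic facts; a norm forbids or obliges an
# action whenever all of its condition atoms hold in the current state.
Norm = namedtuple("Norm", ["conditions", "modality", "action"])

def verdicts(state, norms):
    """Return {action: modality} for every norm whose conditions all hold.
    (If several norms fire on the same action, the last one listed wins;
    resolving such conflicts is exactly where the hard philosophy lives.)"""
    out = {}
    for norm in norms:
        if norm.conditions <= state:  # every condition atom is present
            out[norm.action] = norm.modality
    return out

norms = [
    Norm(frozenset({"red_light"}), "forbidden", "proceed"),
    Norm(frozenset({"pedestrian_ahead"}), "obligatory", "brake"),
]

verdicts({"red_light", "pedestrian_ahead"}, norms)
# {"proceed": "forbidden", "brake": "obligatory"}
```

The point of the exercise is exactly what the quote says: once the vague principle has been forced into a precise structure like this, programming it is the easy part.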

Must follow human rules

Autonomous vehicles have become a reality as a consequence of the rapid growth in artificial intelligence and increasingly advanced hardware such as cameras and sensors. Martin Mose Bentzen also uses the broader term ‘social robotics’ because the autonomous vehicle interacts with people in real-life situations.

The robot or the vehicle must behave in accordance with human laws, rules, and values. Because they share the roads with other users, they will face ethical dilemmas that they must be programmed to handle.

“Autonomous vehicles are just one example. Other ethical dilemmas may present themselves when we consider robots in the home, in nursing homes, and in day care centres. An example is a robot that lies to an elderly man, telling him that it will be destroyed if he fails to do his exercises. This will cause the man to do his exercises, which is both good—and perhaps also necessary for his health. However, the robot has told a lie to achieve a positive outcome. Some ethical theories prohibit such actions; others consider them permissible,” he explains.
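The disagreement between ethical theories in the lying-robot example can be made concrete with a toy evaluation. This is a hypothetical sketch, not any published formalization: the `Action` class, the utility numbers, and both `permitted_*` functions are invented here purely to show how two theories can return opposite verdicts on the same action.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    involves_lying: bool
    utility: int  # rough net benefit to everyone affected (invented scale)

def permitted_utilitarian(action, alternatives):
    """Act-utilitarian reading: an action is permitted iff no available
    alternative produces a better outcome."""
    return all(action.utility >= alt.utility for alt in alternatives)

def permitted_deontological(action):
    """Strict Kantian reading: lying is categorically forbidden,
    however much good it would do."""
    return not action.involves_lying

# The lying robot from the text: the lie gets the exercises done.
lie = Action("claim the robot will be destroyed", involves_lying=True, utility=5)
truth = Action("simply ask the man to exercise", involves_lying=False, utility=2)

permitted_utilitarian(lie, [lie, truth])  # True: the lie yields the best outcome
permitted_deontological(lie)              # False: the lie is ruled out outright
```

Choosing which verdict the robot should act on is precisely the design decision that cannot be delegated to the programmer alone.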

Photo: Bax Lindhardt

Virgin research field

Martin Mose Bentzen is working closely with a group from the German University of Freiburg. As yet, they have not decided on a particular ethical theory. This field of research is still new and open, so they are investigating and presenting several different perspectives based on different theories.

“Even though formal logic provides us with a common language, we have different perceptions of the philosopher we are interpreting. Sometimes this results in different interpretations. The formalization of the philosophical language will culminate in a formal theory, but it is not a formal science in itself. It is a process with elements of craftsmanship and creativity.”

His interest in autonomous vehicles is not least due to the fact that they are highly developed and the subject of intensive research and public attention. Fundamentally, he sees driverless vehicles as a positive step forward—one that holds great potential to improve both road safety and the environment. But the intense interest in them also carries a built-in risk: industry's vested interests are fuelling a fierce race among the major players to be first to market.

“There will definitely be a huge demand, and right now manufacturers are pushing so hard that there is a risk of sending cars onto the street that aren’t sufficiently developed. Legislators will come under pressure and may perhaps allow tests that are a little too risky. It probably won’t happen in Denmark, but the industry will probably seek to enter markets with less stringent legislation.”

Public scepticism a potential stumbling block

Another decisive factor affecting the future of autonomous vehicles may be the attitudes of the general public. It only takes a few accidents in which a vehicle responds differently from how a human driver would before people become uncertain and afraid, points out Martin Mose Bentzen. Previously—in connection with technologies such as genetically modified foods, for example—public scepticism proved difficult to overcome once it had taken hold.

“The same scenario could easily apply to some of the social robots. I hope that autonomous vehicles make it through the important and very necessary public wringer and become an accepted part of traffic on our roads. The potential benefits of autonomous vehicles are huge, but we need to explicitly define their behavioural values,” says Martin Mose Bentzen.

“When the machines are so advanced that they can learn to draw their own conclusions, there is a danger that they may learn some solutions that are creative—but ethically flawed. The robots must be regulated, especially when we’re talking about systems that interact directly with humans and are therefore potentially dangerous. There are many dilemmas to consider, which is also what makes the area ethically interesting. But the potential benefits outweigh the risks in my opinion.”

Model motorists?

Autonomous vehicles will never drive under the influence or break the speed limit to get you home in time to watch a football match. They will always maintain a safe distance on the motorway and never park illegally (unless programmed to do so).

But there is a catch to being perfect, says Martin Mose Bentzen. Autonomous vehicles must not behave in such a way as to confuse other road users. If autonomous vehicles follow every rule to the letter in traffic where others behave differently, they may present a danger to others—as well as themselves. Developers and programmers must be aware of this delicate balance.
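The balance Bentzen describes can be illustrated with a toy speed policy. Everything here is a hypothetical illustration, not how any real vehicle is programmed: the function name, its parameters, and the numbers are invented, and a real controller would weigh far more factors than two speeds.

```python
def target_speed(speed_limit, traffic_flow_speed, tolerance=0.0):
    """Choose a cruising speed (km/h): never exceed the legal limit plus an
    explicitly configured tolerance, but otherwise match surrounding traffic
    so the vehicle does not become a rolling obstacle."""
    ceiling = speed_limit + tolerance
    return min(traffic_flow_speed, ceiling)

target_speed(110, 125)                # 110: a strict rule-follower caps at the limit
target_speed(110, 95)                 # 95: match slower surrounding traffic
target_speed(110, 125, tolerance=10)  # 120: a deliberate, bounded concession to flow
```

The `tolerance` parameter makes the trade-off explicit: whether a legal-but-slow vehicle in fast traffic is safer than a slightly-too-fast one is a value judgement someone has to encode, not a fact the car can compute.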

Cultural differences must also be taken into account, as traffic patterns and behaviour can differ markedly between Beijing, Copenhagen, and Naples, for example.