Let's take a second look at the ethics of driverless cars
My heart goes out to the family and friends of Elaine Herzberg, who was run over by an Uber autonomous vehicle (AV) as she was crossing the street with her bike last March 19 in Tempe, Arizona. I also feel sorry for the back-up or "safety driver", who must have had the shock of their life. But there was little they could do. It was night, and the car was travelling within the speed limit. Herzberg was crossing a multi-lane road outside of the crosswalk, which was a mere 100 yards away. The fact that, despite its advanced radar and lidar technology, the AV neither slowed down nor braked made Uber and AV critics smell blood. "A tremendous breach in AI ethics!" many cried.
However, this reaction is sorely misguided. Driver error accounts for some 37,000 road deaths a year in the US alone. AVs do not get distracted or fall asleep; neither do they get drunk or high on drugs. They can be an enormous help, especially for the disabled, for whom driving is not an option. As we have seen, AVs aren't perfect, and just one life lost on the road is one too many. Yet, in due course, AVs hold the promise of delivering a safety record comparable, perhaps, to that of US commercial airlines, which have registered zero passenger fatalities since 2009.
More controversial is the issue of artificial intelligence (AI) ethics. Despite the publicity and the fanfare, the truth is that there is no such thing, for much the same reason that there is no such thing as AI biology. For there to be an AI ethics, several conditions would have to be met. First, the technology would have to be alive: it would need an internal, self-organizing principle of movement, or "soul". Being plugged into an electrical outlet or a battery does not make the cut. AI would also have to be able to seek and select external raw materials by itself, transforming them organically into its own parts, as all living creatures do. This is crucial for survival and development.
Further, AI would have to be capable of having an "intention", that is, a capacity to direct itself toward an end or goal from among a range of possibilities, instead of simply being pre-programmed or following instincts. Depending on how well it achieved this freely chosen goal, the AI could then be praised or blamed, rewarded or punished. It would be a morally "good" or "evil" AI, as an overall judgment on its "AI-ness". This is quite different from mere considerations of its effectiveness in performing certain functions, because it would imply that each AI has a distinctive value in itself.
What are the chances of this happening? None. So next time people try to sell you the idea of an AI ethics, tell them to go tickle themselves, if that works.
Whether the AI programmer decides on an algorithm that protects the driver and passengers in a crash, rather than people on the road, is an altogether different issue, of course. But that is no longer an AI ethics problem. Rather, it belongs to a natural intelligence (NI), in particular to a warm-blooded biped also known as a human being. Although it may sound surprising, only humans do ethics.
Alejo José G. Sison teaches at the School of Economics and Business at the University of Navarre and investigates issues at the juncture of ethics, economics and politics from the perspective of the virtues and the common good. He is an editor of the recently published "Business Ethics: A Virtue Ethics and Common Good Approach" (Routledge 2018). He blogs at Work, Virtues, and Flourishing, from which this article has been republished with permission.