Can we teach machines to be moral?

By now you may be used to asking your phone, or Siri, or Alexa, a question and expecting a reasonable answer. Alan Turing's 1950 dream that computers might one day be powerful enough to fool people into thinking they were human is realized every time someone calls a phone tree and mistakes the computerized voice on the other end for a human being.

The programmers setting up artificial-intelligence virtual assistants such as Siri and human-sounding phone trees aren't necessarily trying to deceive consumers. They are simply trying to make a product that people will use, and so far they've succeeded pretty well. Considering the system as a whole, the AI part is still pretty low-level, and somewhere in the back rooms there are human beings keeping track of things. If anything gets too out of hand, the back-room folks stand ready to intervene.

But what if it were computers all the way up? And what if those computers were, by some meaningful measure, smarter overall than humans? Could you trust what they told you when you asked them a question?

This is no idle fantasy. Military experts have been thinking for years about the hazards of deploying fighting drones and robots with the ability to make shoot-to-kill decisions autonomously, with no human being in the loop. Yes, somewhere in the shooter robot's past there was a programmer, but as AI systems become more sophisticated and even the task of developing software gets automated, some people think we will see a situation in which AI systems are doing things that whole human organizations do now: buying, selling, developing, inventing, and in short, behaving like humans in most of the ways humans behave. The big worrisome question is: will these future superintelligent entities know right from wrong?

Nick Bostrom, an Oxford philosopher whose book Superintelligence carries jacket blurbs from Bill Gates and Elon Musk, worries that they won't, and he is wise to worry. In contrast to what you might call logic-based intellectual power, in which computers already surpass humans, whatever it is that tells humans right from wrong is something we don't have a very good handle on ourselves. And if we don't understand how we tell right from wrong, let alone how we manage to do right and avoid wrong, how can we expect to build a computer or AI being that does any better?

In his book, Bostrom considers several ways this could be done. Perhaps we could speed up natural evolution in a supercomputer and let morality evolve the way it did in human beings. Bostrom drops that idea almost as soon as he raises it, because, as he puts it, "Nature might be a great experimentalist, but one who would never pass muster with an ethics review board—contravening the Helsinki Declaration and every norm of moral decency, left, right, and center." (The Helsinki Declaration, adopted in 1964, sets out principles for ethical human experimentation in medicine and science.)

But to go any further with this idea, we need to get philosophical for a moment. Unless Bostrom is a supernaturalist of some kind (a Christian, Jew, or Muslim, for example), he presumably thinks that humanity evolved on its own, without help or intervention, as a product of random processes and physical laws. And if the human brain is simply a wet computer, as most AI proponents seem to believe, one has to say it programmed itself, or at most that later generations have been programmed (educated) by earlier generations and by life experience. However you look at it, on this view there is no independent source of ideal rules or principles against which Bostrom or anyone else could compare the way life is today and say, "Hey, there's something wrong here."

And yet he does. Anybody with almost any kind of conscience can read the news, or watch the people around them, and see things going on that we know are wrong. But how do we know that? And more to the point, why do we feel guilty when we do something wrong, even as young children?

To say that conscience is simply an instinct, like a bird's knack for building nests, seems inadequate somehow. Conscience involves human relationships and society. The experiment has never been tried intentionally (thank the Helsinki Declaration for that), but something close to rearing a baby in total isolation from human beings has happened by accident in large emergency orphanages, and such a baby typically dies. We simply can't survive without human contact, at least right after we're born.

Dealing with other people creates the possibility of hurting them, and I think that is at least the practical form conscience takes. It asks, "If you do that terrible thing, what will so-and-so think?" But a well-developed conscience keeps you from doing bad things even if you are alone on a desert island. It won't even let you live at peace with yourself if you've done something wrong. So if conscience is simply a product of blind evolution, why would it bother you to do something that never hurt anybody else but was wrong anyway? What's the evolutionary advantage in that?

Bostrom never comes up with a satisfying way to teach machines to be moral. For one thing, you would like to base a machine's morality on some logical principles, which means a moral philosophy. And as Bostrom admits, there is no system that most moral philosophers agree on, which implies that most moral philosophers must be wrong about morality.

Those of us who believe that morality derives not from evolution, or experience, or tradition, but from a supernatural source that we call God, have a different sort of problem. We know where conscience comes from, but that doesn't make it any easier to obey. We can ask for help, but the struggle to accept that help from God goes on every day, and some days it doesn't go very well. As for whether God could teach a machine to be moral, well, God can do anything that isn't logically contradictory. But whether he would want to, or whether he would just let things take their Frankensteinian course, is not up to us. So we had better be careful.

Karl D. Stephan is a professor of electrical engineering at Texas State University in San Marcos, Texas. This article has been republished, with permission, from his blog Engineering Ethics, which is a MercatorNet partner site. His ebook Ethical and Otherwise: Engineering In the Headlines is available in Kindle format and also in the iTunes store.

