What if software could suffer?

A thought experiment suggests that the reasoning behind animal rights is on the wrong track.
Michael Cook | Jun 3 2014



Most bioethics deals with nitty-gritty situations like surrogate mothers, stem cells, abortion, assisted suicide, or palliative care. After all, the “bio” in bioethics comes from the Greek word bios, meaning physical life. Historically the field has grappled with the ethical dilemmas of blood and guts.

But there is a theoretical avant-garde in bioethics, too. It’s a bit more like science fiction than emergency-room drama. Theoretical bioethics tries to anticipate the ethical issues which could arise if advanced technologies become available. There are always a lot of ifs – but these are what bring joy to an academic’s heart.

The other day an intriguing example landed in my inbox. Writing in the Journal of Experimental & Theoretical Artificial Intelligence, Oxford bioethicist Anders Sandberg asks whether software can suffer. If so, what are the ethics of creating, modifying and deleting it from our hard drives?

We’re all familiar with software that makes us suffer because of corrupted files and crashes. I have suffered a lot at the hands of software, and if I had my way, it would have paid for its crimes with the death penalty. But software which says “ouch”? Software which cries, “If you prick us, do we not bleed?”

This is a wee bit more plausible than it sounds at first. There are at least two massive “brain mapping” projects under way. The US$1.6 billion Human Brain Project funded by the European Commission is being compared to the Large Hadron Collider in its importance. The United States has launched its own US$100 million brain mapping initiative. The idea of both projects is to build a computer model of the brain, doing for our grey matter what the Human Genome Project did for genetics.

Theoretically, the knowledge gained from these projects could be used to emulate the brains of animals and humans. No one knows whether this is feasible, but it is a tantalising possibility for scientists seeking a low-cost way to conduct animal experiments.

Say a researcher wanted to investigate the side-effects of a new drug. He could input its chemical composition into software which emulates a hamster brain, and the software would output levels of pain, nausea, discomfort or dizziness.

If it works with a hamster brain, why not a human brain? The brain models would not be as sophisticated as Johnny Depp’s in the recent film Transcendence (which tanked at the box office). As he is about to die, his character’s consciousness is uploaded onto the internet with catastrophic consequences. No, just a low-end Homer Simpson kind of brain would do.

So the software emulating the brain would be useful only if it could suffer.

Bing!!!!

As soon as the word “suffering” is mentioned, bells go off for bioethicists. Preventing the suffering of conscious creatures is the foundation of animal liberation. As far back as 1789 the father of utilitarianism, Jeremy Bentham, argued that animals, like humans, should be protected from suffering. “A full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose the case were otherwise, what would it avail?” he wrote, anticipating by 180 years the books of his modern disciple, Peter Singer. “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”

Hence, building on widely accepted ethical guidelines about animal experimentation, it could be argued that tweaking software to emulate pain should be avoided, because unnecessarily causing pain to a conscious being is wrong.

How would we know whether the software is suffering? That is a philosophical conundrum: how do we even know whether an animal is suffering?

Sandberg believes that the best option is to “assume that any emulated system could have the same mental properties as the original system and treat it correspondingly”. In other words, software brains should be treated with the same respect as the experimental animal; virtual mistreatment would be just as wrong as real mistreatment in a laboratory.

How about the most difficult of all bioethical issues, euthanasia? For animals, death is death. But if there are identical copies of the software, is the emulated being really dead? On the other hand, would we be respecting the software’s dignity if we kept deleting copies?

Even trickier problems crop up with emulations of the human brain. What if a virus turns software schizophrenic or anorexic? “If we are ethically forbidden from pulling the plug of a counterpart biological human,” writes Sandberg, “we are forbidden from doing the same to the emulation. This might lead to a situation where we have a large number of emulation ‘patients’ requiring significant resources, yet not contributing anything to refining the technology nor having any realistic chance of a ‘cure’.”

And what about software “rights”? Could the emulations demand a right to be run from time to time? How will their privacy rights be protected? What legal redress will they have if they are hacked? Should we have fun runs and cake stalls for abused and abandoned software?

The imaginative dilemmas projected by Sandberg and his fellow futurists cannot be falsified because they haven’t happened yet. My bet is that they will never happen.

But, in their own way, these avant-garde ruminations constitute a useful thought experiment. If (a big if) our respect for beings should be proportional to their consciousness, as Bentham and Singer contend, then we stumble into huge (and unnecessary) dilemmas.

Radical animal rights activists claim that not only primates and dogs but even animals with lower degrees of consciousness, like mice, should not be experimented upon. Any being that can suffer, on this view, deserves protection and respect.

The same reasoning leads, as Sandberg demonstrates, to the notion of suffering software and enforceable rights for software. It is this reductio ad absurdum which ought to make us question whether we have properly understood the notion of “animal rights”.

Michael Cook is editor of MercatorNet.  



