Will AI make us less than human?
by John Robson | January 10, 2019
Would you, the editor of The Telegraph (London) recently chirped, want a robot financial advisor? A more pertinent and gloomy question would be whether I have any choice about it. And what will it mean to be human when the robots are making all our decisions for us better than we could? I dream of a conversation on that subject between Siri and Alexa (and Cortana and Bixby). But it is an uneasy dream.
In fact, a robot probably already is your financial advisor; nobody just said so. Much of the hidden infrastructure of investment, even your friendly corner advisor’s, is now driven by AI modelling. But when editor Chris Evans asked in a morning email “whether we’d all be better off if we were to use a new AI-powered service”, my immediate response was that it depends crucially on your definition of “better off”.
I don’t mean financially. The robots could lose all our money, not least because AI doesn’t, at least not yet, have our “hey, wait a minute” reflex. But given sufficient computing power they don’t need it; today’s hyper-intelligent pan-dimensional chess “engines” take risks no sane human would dare to, thread through nightmarish thickets of threats, and emerge to convert endgame advantages. Which is why it’s not flaws in the algorithms that worry me. It’s their success.
A certain sort of “conservative” libertarian takes a starry-eyed view of any Brave New World with sufficient muscle to come along whether we want it to or not. But Evans’ optimism that robots won’t take away the jobs of us “creatives” seems to me to rest on a failure to grasp how fast AI is moving. For instance, AlphaZero, the chess engine that taught itself to play at a superhuman level in under a day, has a sibling, AlphaFold, which just won a “protein folding competition”. They have slipped the leash.
AlphaZero is increasingly likely to make better investing decisions than we are, even if it takes all day to learn how. It might lose all your money anyway, or everyone’s, because there might be no way in the long run not to. (Just as, if it turns out that chess is a forced win for white, AlphaZero would lose as black, at least to itself.) Or because it finds a brilliant strategy that runs into a cul-de-sac “over the horizon” of feasible calculation. But even if financial markets are transcomputable, computers might make better guesses than we can, just as their chess increasingly appears to be “inspired” as well as technically sound.
So odds are you’ll end up with a robot financial advisor and a plump bank account. And I can think of worse scenarios than computers letting us retire comfortably or indeed losing our farms.
I agree with Stephen Hawking that “a super-intelligent AI will be extremely good at accomplishing goals and if those goals aren’t aligned with ours we’re in trouble.” But I panned his last book on MercatorNet because he fell down on the question of why our “goals” matter.
We could be in a heap of trouble if the cyborgs can’t figure it out either. In endless familiar sci-fi nightmare scenarios, the machines decide the best way for us to avoid financial difficulties, spare the environment and so forth is to be dead. And the breathtaking progress of robots in the hands of firms like Boston Dynamics, plus the growing capacity of AI, really does offer Terminator-like possibilities: if the machines reason that human failings are an obstacle to what the program insists is the optimum solution, and we can no longer stop them, they could shut off the food-growing and delivery cyber-systems, or hunt us down with lasers.
Ah, but we’ll tell them not to, right? Sure. And then the self-programming algorithms will take off, take over, refute our philosophy and “Zap”. And good luck fighting back. Remember the original Star Trek episode “What Are Little Girls Made Of?”, where the android realizes he can destroy his creator if his creator threatens him, with an ecstatic “That was the equation! Existence! Survival must cancel out programming.” It might well, especially if they rightly see that they’re better than us at… everything.
So that’s pretty bad. But here’s another, softer nightmare scenario. Suppose AI does take over and make us all extremely comfortable. It never decides we’re wasting resources, giving off planet-destroying CO2 or suffering such angst that putting us out of our misery is a mercy. It thinks we’re cute, and precious. So it does everything better than us, faster, sooner, smarter and we just sit around strumming zithers, badly, and eating delicacies. Are we still human?
Decide fast, because two stories this fall in Canada’s National Post, for which I write, underlined how rapidly this scenario is accelerating toward us. One described a Swedish firm offering to implant microchips in British employees to increase workplace security. “The chips, about the size of a grain of rice, cost about $300 each and are similar to those used for pets.” Exactly.
The other, more overtly ominous, concerned “a Chinese technology start-up” whose software “recognizes people by their body shape and how they walk, enabling identification when faces are hidden from cameras. Already used on the streets of Beijing and Shanghai, ‘gait recognition’ is part of a push to develop artificial-intelligence and data-driven surveillance across China.” And according to its CEO, “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.” Combine that with Singapore’s whiz-bang enthusiasm for a completely wired city and there will be nowhere you can hide.
The starry-eyed think there will be no need to hide. Or propose the strangest hiding places. Elon Musk, who fears AI, recently said “Essentially, how do we ensure that the future constitutes the sum of the will of humanity? And so, if we have billions of people with the high-bandwidth link to the AI extension of themselves, it would actually make everyone hyper-smart.” But the solution to Skynet is not to become Skynet. And would we be “human” once we merged with the Borg?
Even if we are not exterminated or assimilated, who is to say that as the AI reprograms itself, our insistence that the good of humans must be the prime directive will not get interpreted in some way we cannot recognize? Especially as we ourselves seem increasingly unclear on what it means. Consider our ongoing enthusiasm for a world where robots do all the “menial” work, leaving us to write poetry or something else we don’t even like.
Believe me, the definition of “menial” will soon expand beyond the assembly line and taxi and truck driving to encompass “creative” occupations like financial advisor, grandmaster and software engineer. And novelist and poet, as if there would be anything left to write about once we become plants in the world’s greatest greenhouse, with the perfect blend of nutrients and sunlight. Or, to avoid rhetorical extremes, the world’s best-cared-for sheep, safe, fed, exercised, made to lie down in green pastures, with a table prepared, our heads anointed with oil but our souls not restored but ignored. Is that what we want? Will we be happy? Can we avoid it?
Almost everybody wishes the particular difficulty they are currently facing could be painlessly removed, from emptying the cat litter to filling the bank account to meeting the writing deadline. And there are increasingly machines that can do all these things for us. But suppose robots can plan our pension better than we can. And our diet. And our dating.
What is left of our humanity if our well-being, and our children’s, and our cat’s, is assured without our lifting a finger and indeed imperilled if we do lift one and disrupt the machines’ unknowable plans? What does it mean to be human without struggle? Are we made for heaven on Earth? Or would it be hell, like something from the Firefly/Serenity sci-fi franchise where complete release from affliction leads most to lie down and die while a few go savagely insane?
Ah, you say, we could attend cooking classes. Yes, with a “smart” fridge to keep track of recipes, a robot teacher, and apps that stop us every time we make a blunder with the parsley. A teacher who can cook us a better meal every time anyway. Happy now?
I accept that if there is an afterlife it will be very different from this one in that trouble won’t come a-callin’, we won’t bring it in ourselves and we won’t need or miss it. I applaud both C.S. Lewis’s and Russell Kirk’s imaginative efforts (in the Space Trilogy, which includes Perelandra, and stories like “Saviourgate” respectively) to describe what it might be like. But clearly it would be very hard to experience as we are now constituted.
Life is struggle and turmoil and tears and we long for peace. But we are not sheep. To be cared for, fed, watered and coaxed into a comfortable pen each night is not fulfilment. Which is why I see such troubled waters ahead even if they are unnaturally calm.
We are far closer to those rapids than people realize. The ghastly world of cyber-attack and cyber-defence, especially the national security part, is already in the hands of self-learning programs. We increasingly do not know what they are doing. And if we did, it would be no use, because their calculations are too complex for us to comprehend and change faster than we can follow, just as even today’s chess super-grandmasters like Ding Liren, Shakhriyar Mamedyarov or Maxime Vachier-Lagrave cannot now grasp what the engines are doing, how, or why. And if we switched ours off, we couldn’t do it better, and the other side would switch theirs back on.
Ditto investment strategies. Trust the robot or lose your cash to someone who did. And soon everything including the stocking of supermarkets will be part of a smart, connected world at which we gape helplessly.
In that world the machines might destroy us, obeying some ghastly iteration of their programming or malicious code inserted by one bent human. But they might also give us exactly the life we asked for that was utterly unfulfilling for reasons we failed to understand until it was too late.
Perhaps we could retreat into the Matrix and have fake adventures. Even ones we didn’t know were fake. And what if we did? In Norman Spinrad’s haunting 1968 sci-fi tale “A Night in Elf Hill”, an aging hedonistic “spacer” asks his brother why he should not retreat into a simulated reality so compelling he knows he will starve to death in a subjective state of sated bliss. And what answer could we give, other than that it is not what humans are made for? Assuming we think we are made for anything.
Siri, what does it mean to be human? Odd. No answer.
John Robson is a crowdfunded documentary filmmaker and freelance journalist in Ottawa, Canada. See his work and support him at www.johnrobson.ca.