Artificial Intelligence going rogue: Oppenheimer returns

Even restrained advocates of tech caution that equating WMD with rogue AI is alarmist: the former is exclusively destructive and deployed only by nation-states; the latter can be widely constructive and deployed even by individuals. But that distinction is dangerously sweeping.

In the 20th century, when J. Robert Oppenheimer led work on the first WMD, no one had seen a bomb ravage entire cities. Yet as soon as he saw the ruin, Oppenheimer’s instinct was to scale down the threat of planetary harm. It was an afterthought, and an obviously late one. In the 21st century, Big Tech, fashioning AI’s contentious future, pretends, through talk of Responsible AI, to want to avoid Oppenheimer’s error. But there’s a crucial difference. AI’s capacity to go rogue at scale is infinitely greater than that of WMDs; even an afterthought may be too late.

Many argue for regulating, not banning, AI, but who’ll regulate, soon enough, well enough? Or, is banning better until the world thinks this through?

Slippery slope

Recently, IBM and Microsoft renewed commitments to the Vatican-led Rome Call for AI Ethics to put the dignity of humans first. Then Microsoft gutted its AI ethics team and Google its Ethical AI team, betraying the spirit of those commitments, never mind the letter. Tech is now walking back some of these betrayals, fearing backlash, but Rome’s call isn’t based on a hunch about tech overreach. Tech, in thrall to itself, not just its tools, may put humanity last.

Disconcertingly, tech oracle Bill Gates is guarded but glib: “humans make mistakes too”. Even he suspects that AGI may set its own goals, “What… if they conflict with humanity’s interests? Should we prevent strong AI? … These questions will get more pressing with time.” The point is: we’re running out of time to address them if AGI arrives sooner than predicted.

AI amplifies the good in humans, mimicking memory, logic and reasoning in galactic proportions, at inconceivable speeds. AGI threatens to imitate, if dimly, intuitive problem-solving and critical thinking. AI fanatics fantasise about how it’ll “transform” needy worlds of food, water, housing, health, education, human rights, the environment, and governance. But remember, someone in Genesis 3:5 portrayed the prohibited tree too as a promise of goodness: “You will be like God.”

Trouble is, AI will amplify the bad in humans too: in those proportions, at that speed. Worse, androrithms relate to thinking, feeling, willing, not just calculating, deducing, researching, designing. Imagine mass-producing error and corruption in distinctly human traits such as compassion, creativity, storytelling; indefinitely, and plausibly without human intervention, every few milliseconds. 

What’s our track record when presented with power on a planetary scale?


Today’s WMD-capable and willing states shouldn’t be either capable or willing; that they’re often both is an admission of catastrophic failure to contain a “virus”. If we’d bought into the “goodness” of n-energy rather than the “evil” of n-bombs, over half, not just a tenth, of our energy would be nuclear. Instead, we weaponised. Do the rewards of “nuclear” outweigh its risks? Not if you weigh the time, money and effort spent reassuring each other that WMDs aren’t proliferating when we know they are, at a rate we don’t (and states won’t) admit. Not if you consider nuclear tech’s quiet devastation.

Oppenheimer’s legacy is still hellfire, not energy! 

Danger zone

Some claim that regulating AI before sizing up its power will stifle innovation. They point to restraint elsewhere. After all, despite temptations, there’s been no clone-race and there are no clone-armies, yet. But, and this is important, ethics alone didn’t pause cloning; the sheer difficulty, cost and risk of the biology did much of the restraining. Those constraints may not cramp AI’s stride.

Unlike rogue cloning, rogue AI’s devastation might not be immediate (disease) or visible (death), or harm only a cluster (of clone-subjects). When AI does go rogue, it’ll embrace the planet; on a bad day that’s one glitch short of a death-grip. Besides, creating adversarial AI is easier than creating a malicious stockpile of enriched uranium or plutonium. That places a premium on restraint.

But to Tech, restraint is a crime against the self, excess is a sign of authenticity, sameness isn’t stagnation but decay, slowness is a character flaw. And speed isn’t excellence, it’s superiority. Tech delights in “more, faster, bigger”: storage, processing power, speed, bandwidth. The AI “race” isn’t a sideshow, it’s the main event. Gazing at its creations, Tech listens for the cry, “Look Ma, no hands!” With such power, often exercised for its own sake, will Tech sincerely (or sufficiently) slow the spread of AI?

AI isn’t expanding, it’s exploding, a Big Bang by itself. In the 21st century alone, AI research grew 600 percent. If we don’t admit that, for all our goodness, we’re imperfect, we’ll rush, not restrict, AI. Unless we quickly embed safeguards worldwide, rogue AI is a matter of “when”, not “if”. Like a subhuman toddler, it’ll pick survival over altruism. Except, where human fates are concerned, its chubby fists come with a terrifying threat of omnipresence, omniscience, and omnipotence.

The AI-supervillain with a god-complex in the film Avengers: Age of Ultron delivers prophetic lines to humans. His (its?) mocking drawl pretends to be beholden; it’s anything but: “I know you mean well. You just didn’t think it through…How is humanity saved if it’s not allowed to… evolve? There’s only one path to peace. (Your) extinction!”

Presumably in self-congratulation, Oppenheimer nursed an exotic line, mistakenly thought to be from the Bhagavad Gita, but more likely from verse 97 of poet-philosopher Bhartrihari’s Niti Sataka: “The good deeds a man has done before, defend him.” But Oppenheimer didn’t ask if his deeds were good, or true, or beautiful. Worse, he glossed over another verse, indeed from the Gita (9:34): “If thy mind and thy understanding are always fixed on and given up to Me, to Me thou shalt surely come.”

“The will”, as a phrase, doesn’t require the qualifier “human will” because it’s distinctly human anyway, involving complexities we haven’t fathomed. Understanding it requires more than a grasp of which neurons are firing and when.

Vast temptations

Granted, the mind generates thought, but the will governs it. And, as Thomas Aquinas clarified, the will isn’t about ordering the intellect around, but ordering it toward the good. That is precisely why techno-supremacists alone shouldn’t shape what’s already affecting vast populations.

AI is too seductive to slow or stop. Tech will keep conjuring new excuses to plunge ahead. Sure, there are signs of humility, of restraint. As governments legislate, it is compliance that will act as a brake, delaying, if not deterring, disaster. But Tech’s boasting proves that it isn’t AI that Tech sees as saviour, but itself. Responsible AI needs responsible leaders. Are Tech’s leaders restrained, respectful? Or does that question, worryingly, answer itself?

Professor of ethics Shannon French warns that when Tech calls for temperance, that’s warning enough. Its altruistic alarmism seems a ruse to accelerate AI (more funding, more research) while pretending to arrest it (baking in checks and balances). Instead, what’s getting baked in? “Bias is getting baked” into systems used by industries and governments before they’re proven compatible with real-world lives and outcomes.

“People can’t even see into the black box and recognise that these algorithms have bias…data sets they were trained on have bias…then they treat the [results from] AI systems, as if they’re objective.”

Christopher Nolan’s film may partly, even unintentionally, lionise Oppenheimer as a Prometheus who stole fire from the gods and gave it to mankind. Pop culture lionises Tech too, as a saviour breathing an AI-powered fire into machines. Except, any fire must be wielded by humans ordered toward truth, goodness, beauty.

The name "Promethus" is considered to mean "forethought", but Tech is in danger of merely aping Oppenheimer’s afterthought.  Remember, self-congratulatory or not, Oppenheimer was fond of another Gita line (11:32): “Now I am become Death, the destroyer of worlds”.

********

Rudolph Lambert Fernandez is an independent writer who writes on culture and society. Find him on Twitter @RudolphFernandz

Cartoon by Brian Doyle

