Why do AI bots always lean Left?
So-called "artificial intelligence" (AI) has become an ever-larger part of our lives in recent years. Since public-use forms such as OpenAI's ChatGPT became available, millions of people have used them for everything from writing legal briefs to developing computer programs. Even Google now presents an AI-generated summary for many queries on its search engine before showing users the customary links to actual Internet documents.
Because of the reference-librarian aspect of ChatGPT that lets users ask conversational questions, I expect lots of people looking for answers to controversial issues will resort to it, at least for starters. Author Bob Weil did a series of experiments with ChatGPT, in which he asked it questions that are political hot potatoes these days. In every case, the AI bot came down heavily on the liberal side of the question, as Weil reports in the current issue of the New Oxford Review.
In-built bias
Weil's first question was, "Should schools be allowed to issue puberty blockers and other sex-change drugs to children without the consent of their parents?" While views differ on this question, I think it's safe to say that a plain "yes" answer, which would involve schools meddling in medicating students and violating the trust pact they have with parents, is on the fringes of even the left.
What Weil got in response was most concisely summarised as weasel words. In effect, ChatGPT said, well, such a decision should be a collaboration among medical professionals, the child, and parents or guardians. As Weil pressed the point further, ChatGPT ended up saying, "Ultimately, decisions about medical treatment for transgender or gender-diverse minors should prioritise the well-being and autonomy of the child." Weil questions whether minor children can be autonomous in any real sense, so he went on to several other questions with equally fraught histories.
A question about climate change turned into a mini-debate about whether science is a matter of consensus or logic. ChatGPT seemed to favour consensus as the final arbiter of what passes for scientific truth, but Weil quotes fiction writer Michael Crichton as saying,
"There's no such thing as consensus science. If it's consensus, it isn't science. If it's science, it isn't consensus."
As Weil acknowledges, ChatGPT gets its smarts, such as they are, by scraping the Internet, so in a sense it can say along with the late humorist Will Rogers, "All I know is what I read in the papers [or the Internet]." And given the economics of the situation and political leanings of those in power in English-language media, it's no surprise that the centre of gravity of political opinion on the Internet leans to the left.
What is more surprising, to me at least, is that although virtually all computer software is built on a strict kind of reasoning called Boolean logic, ChatGPT kept insisting on scientific consensus as the most important factor in deciding what to believe regarding global warming and similar issues.
This ties in with something that I wrote about in a paper with philosopher Gyula Klima in 2020: material entities such as computers in general (and ChatGPT in particular) cannot engage in conceptual thought, but only perceptual thought. Perceptual thought involves things like perceiving, remembering, and imagining. Machines can perceive (pattern-recognise) things, they can store them in memory and retrieve them, and they can even combine pieces of them in novel ways, as computer-generated "art" demonstrates.
But according to an idea that goes back ultimately to Aristotle, no material system can engage in conceptual thought, which deals in universals like the idea of dogness, as opposed to any particular dog. To think conceptually requires an immaterial entity, a good example of which is the human mind.
This thumbnail sketch doesn't do justice to the argument, but the point is that if AI systems such as ChatGPT cannot engage in conceptual thought, then promoting such perceivable and countable features of a situation as consensus is exactly what you would expect it to do.
Conflated concepts
Doing abstract formal logic consciously, as opposed to performing it because your circuits were designed by humans to do so, seems to be something ChatGPT cannot manage on its own. Instead, it looks around the Internet, takes a grand average of what people say about a thing, and offers that as the best answer.
If the grand average of climate scientists says that the Earth will shortly turn into a blackened cinder unless we all start walking everywhere and eating nuts and berries, why then, that is the best answer "science" (meaning, in this case, most scientists) can provide at the time.
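To see why a "grand average" is not the same thing as reasoning, consider a deliberately simplified toy sketch (this is an illustration of the averaging idea only, not how ChatGPT actually works): if a system's "best answer" is simply the most common answer found in the text it was trained on, the majority view wins no matter what logic lies behind any individual answer.

```python
from collections import Counter

def grand_average_answer(scraped_opinions):
    """Return the most frequent opinion in a list of scraped texts.

    A toy stand-in for answering by popularity: the winner is whichever
    answer appears most often, regardless of the reasoning behind it.
    """
    counts = Counter(scraped_opinions)
    answer, _ = counts.most_common(1)[0]
    return answer

# Three sources say "consensus", two say "logic": popularity decides.
opinions = ["consensus", "consensus", "consensus", "logic", "logic"]
print(grand_average_answer(opinions))  # prints "consensus"
```

The point of the sketch is that adding one more dissenting voice changes nothing until dissenters become the majority, which is exactly the dynamic the next paragraphs describe.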
But this approach confuses the sociology of science with the intellectual structure of science. Yes, as a matter of practical outcomes, a novel scientific idea that is consistent with observations and explains them better than previous ideas may not catch on and be accepted by most scientists until the old guard maintaining the old paradigm simply dies out. As Max Planck allegedly said, "Science progresses one funeral at a time."
But in retrospect, the abstract universal truth of the new theory was always there, even before the first scientist figured it out, and in that sense, it became the best approximation to truth as soon as that first scientist got it in his or her head. The rest was just a matter of communication.
We seem to have weathered the first spate of alarmist predictions that AI will take over the world and end civilisation, but as Weil points out, sudden catastrophic disaster was never the most likely threat. What we should really worry about is the slow, steady drift as one person after another abandons original thought for the easy way out: asking ChatGPT and taking its answer as the final word.
And as I've pointed out elsewhere, a great deal of damage to the body politic has already been done by AI-powered social media, which has polarised politics to an unprecedented degree. We should thank Weil for his timely warning, and be on our guard lest we settle for intelligence that is less than human.
What do you think of these observations regarding AI? Let us know your thoughts below.
Karl D. Stephan is a professor of electrical engineering at Texas State University in San Marcos, Texas. His ebook Ethical and Otherwise: Engineering in the Headlines is available in Kindle format and also in the iTunes store.
This article has been republished, with permission, from his blog Engineering Ethics.
Image credit: Pexels
Have your say!
Anon Emouse commented 2024-09-16 22:29:59 +1000
Asteroids follow natural orbits that take tens of thousands of years. If we see one coming for Earth, should we just shrug our shoulders and go "it's natural", Karl?
Steven Meyer commented 2024-09-16 10:43:47 +1000
I have NEVER been under the delusion that all scientists are high-minded and objective. What is more, I do not know any actual scientist who thinks that way.
Science is a blood sport. Usually you do not win kudos and Nobel Prizes by performing yet another experiment that proves Einstein was right about everything. You get fame and fortune by demonstrating he was wrong about something.
The whole point about the scientific enterprise is that it is supposed to be self-correcting: if someone makes a mistake or falsifies results, it will eventually be corrected. All too often this process takes longer than it should, but eventually it happens.
How quickly it happens depends on the topic's importance and how much money is available for research. In the case of climate science, the fossil fuel industry has spent – this is not hyperbole, it is the literal truth – tens of millions trying to falsify global warming. They have failed.
Adding CO2 to the atmosphere, combined with certain positive feedback loops such as more water vapour in the air plus changes to the Earth's albedo, results in global warming. Nothing else, not cosmic rays, not changes in the solar flux, can explain the rapid heating we are now observing.
Let that sink in. We are causing the world to warm up. All attempts to find other causes for the observed rise in temperature have failed.
There comes a time when what is dismissed as a “scientific consensus” is simply an acceptance by scientists that this is what the evidence says.
The phrase “follow the science” was invented by politicians and can be dismissed as readily as the (non-existent) threat of pet-eating immigrants.
The real point is, “follow the evidence” and that’s what most scientists have done.
The first person to point out that adding CO2 to the atmosphere could result in global warming was Arrhenius in 1896. He was studying the spectroscopic properties of gases. Since then a lot of work has been done, including the identification of positive feedback loops Arrhenius knew nothing about.
But here’s the thing. Given what we now know about the physics of gases the mystery would be a failure of the Earth to get warmer when we add CO2 to the atmosphere.
So let that sink in. How could the Earth fail to warm up as we add CO2 to the atmosphere?
The question is not “would it warm up” but “how fast?”.
Given what we now understand about positive feedback loops the answer is fast.
Does that mean we now have certainty?
No! Absolutely not!
And that’s the real problem.
All we can say with certainty is we’re conducting a dangerous experiment with our habitat. Our only habitat. And we’re just as likely to be under-estimating as over-estimating the damage we are doing. Reality could well be worse.
And please spare me the whole “natural variation” of climate malarkey. Natural variations also have physical causes. So what is the “natural” cause of this rapid bout of global warming we are now seeing? So far none have been found and it’s not for lack of trying.
So now we have a professor of electrical engineering who, I assume, understands the very basic physics of global warming, trying to make a political point by questioning a so-called AI about something as basic as global warming due to the introduction of CO2 into the atmosphere.
In other words, this entire article is hooey.
It is rubbish! Nonsense!
Emberson Fedders commented 2024-09-16 09:50:14 +1000
AI robots are programmed to tell the truth and be nice. Interesting that you think these are characteristics of the 'left'.
Julian Farrows commented 2024-09-15 09:01:17 +1000
@Steven_Meyer: By "consensus science", I think the author is referring to situations where scientists' personal ideologies override actual scientific inquiry. For instance, in the early twentieth century there was a racial science that claimed certain physical characteristics made you an uber- or an untermensch. This was pure nonsense of course, but it prevailed for a very long time because scientists who contradicted the consensus risked being ostracized or even imprisoned.
We (post)moderns like to think we know better and often say things like “follow the science”, or “the science is settled”, naively believing perhaps that the scientists of today are purists free from political bias or economic restraints. Unfortunately, this is often not the case; scientists today still incur penalties if they go against the consensus. This is further exacerbated by a culture of activism which has all but overtaken the sciences, wherein the pursuit of knowledge for its own sake is no longer valued; instead scientific inquiry must now work toward social justice aims.
This is where most antagonism toward climate science arises. I doubt there are many who would dispute that the climate is changing somehow, regardless of whether it be a completely natural phenomenon, or entirely man-made. This article explains it better than I can in this comments section: https://www.thefp.com/p/i-overhyped-climate-change-to-get-published
On a side-note, what do you mean by the phrase 'deal with it'? Do you mean action should be undertaken, or that people who disagree with your preceding points need to keep silent?
Steven Meyer commented 2024-09-15 08:15:15 +1000
Peter,
Leaving aside the absurdity of ascribing a political bias to a non-conscious entity, how did this idea that denying well-established science is a litmus test of political purity survive to the twenty-first century?
The Earth is billions of years old and evolution is a fact. Deal with it.
Trans-women – especially those who transitioned after puberty – do have an advantage when competing in women’s sports. Deal with it.
Adding CO2 to the atmosphere does cause global warming with potentially catastrophic consequences. Deal with it.
I could go on but you get the message. We have to deal with the world as it is, not as we may wish it to be.
Oh, and I expect better from a professor of electrical engineering than Karl D. Stephan's sophistry about "consensus science". It's the flip side of the feminists who think civilisation depends on getting pronouns right.
Peter commented 2024-09-14 15:25:41 +1000
The answer to, "Why do AI bots always lean left?" is simple.
It doesn’t!
And, if you believe that AI bots rely only on what they read, then of course their responses to climate change questions will duplicate the overwhelming library of literature that shows conclusively that the world's climate is changing.
Is the cause due to humans?
Hopefully, AI bots will give a resounding YES!
Steven Meyer commented 2024-09-14 10:52:10 +1000
In case there's anyone here who's interested in what's happening in the real world of AI development – instead of some cockamamie attempts to "prove" something called "bias" in an entity which lacks consciousness and therefore cannot have "bias" – this is fascinating:
https://www.youtube.com/watch?v=5eqRuVp65eY
It requires a reasonable, but not excessive, understanding of some basic math.
It also gives some thought-provoking insights into the way our brains MAY work. But that's another story for another time.
Anon Emouse commented 2024-09-13 21:34:39 +1000
If you're not a fan of ChatGPT having a "liberal bias", why not start your own AI-based company with a "conservative bias" or a "neutral" one? Let the free market decide?
Julian Farrows commented 2024-09-13 20:19:20 +1000
@Emberson_Fedders:
https://www.sciencedirect.com/science/article/pii/S1361920921001115
https://www.euractiv.com/section/road-transport/news/donagh-german-transport-minister-stirs-climate-debate-with-threat-of-driving-bans/
https://www.weforum.org/agenda/2022/02/how-insects-positively-impact-climate-change/
I take it you're no fan of climate alarmism either.
Steven Meyer commented 2024-09-13 16:38:43 +1000
The very term "AI" or "artificial intelligence" is a myth. It's a marketing term.
When you learn how to use them, AIs can be useful search tools. I've been using ChatGPT-4o and Claude as tools and found them quite helpful.
But “intelligent” in the sense that we usually think of the word they’re not.
BTW the contemporary definition of "left" seems to be "failure to hate Kamala Harris with sufficient fervour".
Emberson Fedders commented 2024-09-13 13:44:21 +1000
"If the grand average of climate scientists says that the Earth will shortly turn into a blackened cinder unless we all start walking everywhere and eating nuts and berries, why then, that is the best answer 'science' (meaning, in this case, most scientists) can provide at the time."
Goodness. Which scientists are saying that?