Could AI become conscious?

Not long ago, a Google engineer created a stir in the world of artificial intelligence by claiming that the company's flagship chatbot was sentient. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Blake Lemoine.

“I know a person when I talk to it,” Lemoine told the Washington Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” 

Google thought that Lemoine was straying out of his lane, put him on paid leave and later sacked him. Google spokesperson Brian Gabriel commented: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.”

The fact is that many people are quite anxious about the growing power of AI. If it could become conscious, might it act independently to preserve its own existence, possibly at the expense of humans? Or are we creating intelligent beings which could suffer, or even demand workers’ compensation for being badly coded? The potential complications are endless.

No wonder Google wanted to hose down the alarming implications of Lemoine’s views.

So who is right – Lemoine or his bosses at Google? Is it time to press the panic button?  


Defining consciousness

Most writers on this issue just assume that everyone knows what consciousness is. This is hardly the case. And if we cannot define consciousness, how can we claim AI will achieve it?

Believe it or not, the 13th-century philosopher Thomas Aquinas deployed some very useful concepts for discussing AI when he examined the process of human knowledge. Let me describe how he tackled the problem of identifying consciousness.

First, Aquinas asserts the existence of a “passive intellect”, the capacity of the intellect to receive data from the five senses. This data can be stored and maintained as sense images in the mind. Imagination and memory are both part of these sense images.

Second, Aquinas says that an “agent intellect” uses a process called abstraction to make judgments and develop bodies of information. The agent intellect directs itself, operating on the sensory images to make judgments. A body of true judgments (that is, judgments corresponding to the real world) becomes “knowledge”.

Third, the will makes choices about the information presented to it by the agent intellect, and it pursues goals through action.

This leads to a working definition: consciousness is awareness of the cognitive and decision-making processes, including the steps involved in acquiring, evaluating and applying knowledge. A person is aware of their senses of sound, sight, smell and so on, aware of their feelings, their imaginings, their judgments, their knowledge, and their choices. Consciousness can accompany any or all of these steps.

Can AI become conscious?

When we compare the different levels of the human cognitive and decision-making processes with artificial intelligence, it is easy to spot big differences.

External experience. Humans experience emotions together with the acquisition of sense knowledge. AI simply acquires data. This emotional component enriches human knowledge in a way that computers cannot match.

Sense images and memories. Here AI excels, without a doubt: its recall and data retrieval far surpass human capacity.

Agent intellect. Humans actively direct their thoughts, abstracting concepts from raw sense data. This process is self-directed and autonomous. AI merely reveals patterns in information; it is not self-directed. The pattern is the result of an algorithm programmed by a human, and AI activity is prompted in the first place by human inquiry.

Choice and will. Humans make conscious decisions with goals in mind, while AI does not exhibit characteristics of personal choice or intentionality.

AI exhibits behaviors associated with intelligence—memory recall, summarization, pattern recognition, prediction capabilities—but it lacks the element of self-direction which is characteristic of humans.

AI does not generate its own thoughts; it merely responds to its programming and to whatever prompts it receives. Nor does AI experience emotions as it gathers sense data, which is simply loaded into the machine.

Sometimes AI does seem to generate novel thoughts, but these depend on data it already possesses and result from learned patterns. Humans, by contrast, can reflect on their own thinking, which allows them to correct themselves without external prompts. Humans can also develop concepts that are not dependent on sense data.

In short, AI merely simulates human cognitive and volitional activities. It is not conscious.

Final Thoughts

Proponents of AI consciousness often fail to define consciousness adequately before making claims of AI consciousness. From a Thomistic perspective, human consciousness is multifaceted, involving perception, intellect, will, and self-direction.

To my mind, the most significant difference lies in decisionality. AI does not make the personal decisions which are a clear indication of consciousness. AI, while powerful at data processing, does not exhibit the core attributes that define human consciousness.

When I ask an AI chatbot a question and it states that it has other things to do and will answer tomorrow, then I will revisit the question.




George Matwijec is an adjunct philosophy teacher at Immaculata University who specializes in teaching knowledge and logic.  He authored a book entitled “My Interview with AI”. He can be reached at iteacher101.com 

Image credits: Bigstock  


  • Tim Lee
    I shared my comment with some friends and we had a chuckle over Deep Blue, Deep Mind and “Deep Shit”. Humour is another quality that AI will never have, though it can simulate it just as it can simulate emotions. In summary, these attributes make us different from machines:

    - self-awareness
    - agent intellect or autonomy
    - will or decisionality
    - universal cognition
    - creative stupidity
    - emotions and humour
    - creativity of relationships
    - creativity of the human spirit.

    There is one more attribute machines will never have – the Spirit of God. He created us in his image and we create machines in our image.

    An impoverished sense of ourselves as being different from other members of the animal kingdom only in terms of our superior grey matter makes our machines our gods. In the process, we become more and more like the soulless idols we worship. In our quest to be transhuman, to be free from the ‘shackles’ of our humanity, we become sub-human and then inhuman.
  • Tim Lee
    I like to say that artificial intelligence is no match for natural stupidity. But what is intelligence and what is stupidity? I submit that both are intrinsic to human consciousness.

    AI is essentially machine learning – heuristic pattern recognition, an iterative process of generating results from input data and feeding the results back into the loop to arrive at new results. Within specific domains like a chess program, this works well – typically better than Human Intelligence (HI). An AI across all domains would require more processing power than any computer now or in the future can deliver. In just the domain of chess, there are more possible chess games than there are atoms in the observable universe. Extrapolate this fact and you get the picture.

    In addition to will and decisionality, HI is different from AI in three fundamental ways:

    1. Humans make mistakes but mistakes can lead to new discoveries, like the invention of vulcanised rubber when sulphur was accidentally spilled onto raw rubber in a lab, making possible the myriad uses of rubber today. AI’s error is that sooner or later it ends up in a nonsensical self-referential loop of inbred data.

    2. We have multi-faceted relationships with one another that give rise to creativity beyond anything machines can ever deliver, even if they are somehow connected to one another in a way that enables them to communicate sans human intervention.

    3. Humans have a soul – a spirit that no machine will ever have. We can understand ‘soul’ as the integration of mind and heart where the whole is more than the sum of its parts. A machine has no heart and therefore no soul. The human spirit is ever creative. Creativity in its true sense is something no machine will ever replicate.
  • George Matwijec
    published this page in The Latest 2025-03-07 10:47:13 +1100