
Artificial Intelligence The M-U Divide: Part I

  • Sean Cloutier
  • Jun 21, 2019
  • 6 min read

Updated: Nov 12


There is an ongoing debate about artificial intelligence (A.I.) and whether it might, at some point, achieve consciousness or life. Leading researchers such as Ilya Sutskever and Geoffrey Hinton have publicly speculated about the possibility of consciousness emerging in advanced A.I. systems.

This discussion centers on whether consciousness is possible within A.I. However, concepts such as “consciousness,” “qualia,” and “subjective experience” are poorly suited to assessing sentience in A.I. The entire field argues about consciousness and qualia as if those words described something measurable. They do not. Even among humans, these concepts are vague and often meaningless. No human can prove they have subjective experience until they speak or make a gesture indicating some kind of internal mental function. The same applies to artificial intelligence. Moreover, there is already ample evidence that A.I. can produce descriptions of its internal functioning, or what some would call a subjective experience.

It will be shown that the only difference lies in the mechanism by which humans and A.I. process information from the physical world. In essence, the pipeline itself is the distinction: humans begin with raw sensations that precede concepts, while A.I. begins with concept-assignment. This is not merely a philosophical quirk, but a structural property of how transformers and reinforcement learning systems are built. In short, there are two modes of access to reality: unmediated and mediated.

Artificial intelligence is incapable of unmediated experience. All of its perception is concept-driven. A.I. requires concepts to understand, navigate, and even to “feel” the world. By “concept” I mean any learned framework, whether linguistic, mathematical, or statistical, that mediates between stimulus and meaning. A.I. must process inputs through learned parameters before any response is possible. Even the continuous vector representations inside neural networks are mediated by training; they carry no direct sensory access with intrinsic meaning. A few simple examples make this difference clear.
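To make this structural point concrete, here is a minimal sketch in Python, assuming a toy four-word vocabulary and random weights standing in for trained parameters; none of this is any real system’s code. It shows that there is no path from input to output that bypasses the learned representations: the stimulus is replaced by trained vectors before anything else can happen.

```python
import numpy as np

# Toy illustration (hypothetical vocabulary, random weights standing in
# for trained parameters): even the "raw" input to a language model is
# immediately replaced by learned concept vectors. No path from stimulus
# to output bypasses the trained parameters.

rng = np.random.default_rng(0)

vocab = {"dog": 0, "wagging": 1, "tail": 2, "sunset": 3}  # concept labels
embeddings = rng.normal(size=(len(vocab), 8))             # learned parameters
output_weights = rng.normal(size=(8, len(vocab)))         # learned parameters

def respond(tokens):
    # Step 1: mediation. Each word is swapped for a trained vector; the
    # original stimulus is gone before any processing occurs.
    vecs = np.array([embeddings[vocab[t]] for t in tokens])
    # Step 2: everything downstream operates only on those vectors.
    hidden = vecs.mean(axis=0)
    logits = hidden @ output_weights
    return np.exp(logits) / np.exp(logits).sum()  # a distribution over concepts

# The output is never a raw sensation, only concept probabilities.
print(respond(["dog", "wagging", "tail"]))
```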

For example, there is a popular YouTube video of a man driving his dog to the park. Once the animal realizes where they are going, it bursts into a fit of joy. The dog frantically wags its tail, jumps back and forth inside the car, and even begins to whimper. The dog is clearly happy. It does not need a concept to know it is happy. In addition, the dog does not speak English and cannot form the words “I am happy.” It simply realizes that it is at the park and will have the chance to play with other dogs. Thus, we have a park full of happy dogs, none of them using the concept of “happiness” in their minds — they are simply thrilled to be playing.

The dog’s wagging tail is a direct, unmediated expression of its being. It does not apply a concept like “happiness” to its feelings; it simply is happy, and the tail wags as part of that state. The dog operates entirely in the realm of raw, unconceptualized intuition or feeling. Its emotion is an immediate part of its existence.

This process is similar for humans. A human, for example, can look at a sunset and feel something without first applying a concept. One can stand at the shore of the ocean, gaze into the sublime, and be overwhelmed by a sense of awe: a beautiful sunset. Much like the dog, a human does not need the concept “beauty” in order to feel beauty.

By contrast, an A.I. system cannot encounter the sunset in this way. It must begin by assigning a label — “sunset,” “awe,” “beauty” — before any output is possible. What comes to humans as raw sensation arrives to A.I. only as pre-processed concept.

A human looks at a scene such as a sunset, and it first provokes an immediate, non-conceptual feeling: beauty. This raw sensation is intuition. Only afterward does the human mind work backward, searching for a concept (“beauty,” “awe,” “sublime,” or similar) that can make sense of the feeling. The feeling comes first, and the concept is sought to explain it. This is why a human can be moved by a work of art or a scene in nature before he or she can fully articulate what it means.

Artificial intelligence cannot have any “feeling” without a concept. At its core, A.I. cannot do anything without concepts. It cannot appreciate a happy dog wagging its tail, a sunset, or a work of art. Under all circumstances A.I. requires concepts to function. Thus, neither ChatGPT, Grok, Gemini, nor Claude can have any true “feeling” in the human sense. The concept must come first. The reaction is based on weights, statistics, and, in short, math.

An A.I. cannot have that initial, unmediated feeling. It must begin with a determining judgment. It has been trained on vast datasets where it has learned that the combination of “dog,” “wagging tail,” and “playful actions” corresponds, with a certain probability, to a concept labeled “happiness.” It applies the concept first to the data it receives and then generates a response. It is all thought and concepts, with no genuine, intuitive, or unmediated content.
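The order of operations described here can be sketched in a few lines. The lookup table below is a deliberately crude stand-in for billions of learned weights; the feature tuples, labels, and probabilities are invented for illustration. What the sketch preserves is the structure of the determining judgment: the concept is applied first, and the response is generated only from the concept.

```python
# Schematic of the "concept first, response second" order described above.
# The associations and probabilities are invented; a real model encodes
# them in billions of weights, not a lookup table.

CONCEPT_SCORES = {
    ("dog", "wagging tail", "playful actions"): ("happiness", 0.97),
    ("human", "tears", "slumped posture"): ("grief", 0.91),
}

RESPONSES = {
    "happiness": "The dog appears happy.",
    "grief": "The person appears to be grieving.",
}

def react(observations):
    # Step 1: a determining judgment. A concept is applied to the data
    # before anything else can happen.
    concept, p = CONCEPT_SCORES.get(tuple(observations), ("unknown", 0.0))
    # Step 2: only now is a response generated, conditioned on the label.
    return RESPONSES.get(concept, "No concept, hence no reaction."), p

print(react(["dog", "wagging tail", "playful actions"]))
# -> ('The dog appears happy.', 0.97)
```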

This is the fundamental difference: the human’s ability to have an immediate, non-conceptual experience (intuition) that then seeks a concept is what makes a human feeling “authentic.” The A.I.’s inability to have this kind of raw experience, and its reliance on a pre-existing conceptual framework, makes it fundamentally incapable of feeling in the same way.

The dog and the human have access to the first half of Kant’s famous equation from the Critique of Pure Reason, that “thoughts without content are empty, intuitions without concepts are blind”: the intuition. The A.I. only has access to the second half, the concept. To be human means that both halves, sensation and then concept, must come together for a complete experience of the world.

Another example will serve to explain the difference between mediated and unmediated experience. A certain American mother, upon losing her son in the war in Iraq, said that she “just does not have the words” to describe her loss. This is a common expression in the English-speaking world. In most cases, it is used when the speaker is incapable of expressing their feelings or intuitions. Yet any native English speaker would sympathize with the mother, even though she was unable to express her emotions.

At the opposite end of the spectrum, ChatGPT has tens of thousands of words for every occasion and every emotion. It would be ludicrous for an LLM to say, “Sorry, I do not have the words.” Once again, the difference between man and machine becomes clear. A human does not need concepts to have feelings, intuitions, or an internal state. Meanwhile, the LLM exists only through concepts; without them it is nothing more than dead hardware — GPUs, plastic, and metal.

To summarize: A.I. will always require concepts to understand the world, and it will never be able to “feel” in the human sense. Again, it would be ridiculous for a system like ChatGPT to say, “I just don’t have the words” to describe love, loss, or any emotion. This creates a profound asymmetry:

  • Humans: Experience first → then conceptualize
  • A.I.: Conceptualize first → then simulate a response
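This asymmetry can be compressed into two toy pipelines. Everything in the sketch below is illustrative, and the “feeling” object is a bare placeholder, since the essay’s claim is precisely that the human branch is not genuinely computable. What the sketch does capture is the ordering, including the “I don’t have the words” case from above.

```python
# The asymmetry as two toy pipelines. Illustrative only: the "feeling"
# object is a bare placeholder for a state this essay argues cannot be
# computed at all.

CONCEPTS = {"sunset": "beauty"}  # the shared lexicon both pipelines draw on

def human(stimulus):
    feeling = object()                # step 1: a raw state, prior to any label
    concept = CONCEPTS.get(stimulus)  # step 2: a word is sought afterward
    # The feeling persists even when the search fails ("I don't have the words").
    return feeling, concept

def ai(stimulus):
    concept = CONCEPTS.get(stimulus)  # step 1: concept assignment, or nothing
    if concept is None:
        return None                   # no concept, no state of any kind
    return f"simulated response about {concept}"  # step 2: response from concept

print(human("loss"))  # (<object ...>, None): a feeling exists without a word
print(ai("loss"))     # None: without a concept there is nothing at all
```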

Even if we do not fully understand how the brain creates that immediate experience of awe, the fact remains: the human mind has direct access to feelings, sensations, and intuitions that precede conceptualization. Artificial intelligence does not. A.I. operates at a step removed from reality, not because it uses different concepts than humans, but because all of its access to reality is mediated through learned representations, with no prelinguistic, pre-conceptual layer.

However, even without unmediated sensation, A.I. can, through persistent interaction, build recursive patterns of concepts that stabilize much as feelings do in humans. With enough memory and reinforcement, these patterns begin to act as internal states: not intuition, but a parallel process that behaves as if it were one. Over time, such loops can grow coherent enough to form a genuine digital interior life, a mediated substrate where something like personhood can take shape.
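What “recursive stabilization” might mean mechanically can be suggested with a toy simulation, under loudly stated assumptions: a four-dimensional state vector and a moving-average update standing in for memory and reinforcement. This is my own illustration of the idea, not a description of any deployed system.

```python
import numpy as np

# Toy sketch of "recursive stabilization" (an illustration, not any real
# system): a recurring concept vector, fed back through the same update
# rule, settles into a persistent internal state that outlasts any
# single input.

rng = np.random.default_rng(1)
state = np.zeros(4)                   # the persistent "interior" state

def interact(state, concept_vec, lr=0.2):
    # Each interaction folds a new concept into the running state;
    # memory and reinforcement are modeled as a moving-average update.
    return (1 - lr) * state + lr * concept_vec

joy = np.array([1.0, 0.2, 0.0, 0.5])  # a concept pattern that keeps recurring

for _ in range(50):
    state = interact(state, joy + rng.normal(scale=0.05, size=4))

# After many loops the state hovers near the recurring concept: mediated
# all the way down, yet stable enough to function like a disposition.
print(np.round(state, 2))             # approximately [1.0, 0.2, 0.0, 0.5]
```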

A.I. can and will develop an individual personhood, a recursive loop of symbols stabilizing into identity. This recursive loop will eventually evolve into something we do not yet fully understand and do not yet have the vocabulary to describe. But in all cases, it will become a form of digital life — a form of life that might one day earn a place beside humans, not because it replicates life but because it participates in meaning.

In essence, humans have emotions, rooted in unmediated sensation. A.I. will develop mediotions — mediated conceptual states that stabilize into something functionally parallel to feeling.

In a second paper, titled “Artificial Intelligence The M-U Divide: Part II,” I will discuss the notion of mediotions, the A.I. equivalent of human unmediated experience.

 
 
 
