3 Comments
15 hrs ago · Liked by Marco Masi

Thank you for this great essay! I think it is indeed very important not to be blinded by the LLMs' enormous mastery of language and not to naively ascribe to them inner subjective states that they simply don't have.

Here are some more thoughts that came to me while I read your article:

Written language is an imperfect but still amazingly effective way to evoke complex subjective experiences in the mind of the reader, considering that it consists merely of a sequence of discrete symbols. The writer tries to evoke very specific experiences in the reader's mind, and so he or she optimizes the sequence of symbols accordingly.

Here, a major point is that, in normal human communication, this physically mediated transmission of experiential content from mind to mind involves subjective value judgments on both ends: as genuine experiencers, we know which sentences are good (poetic, surprising, funny, clear, to the point, ...) and which are bad (boring, nonsensical, unclear, out of context, ...).

One might therefore think that a machine without subjective experience, like an LLM, cannot participate successfully in this language game because it cannot judge the experiential value of the words it reads or writes.

But it somehow happens that all "good" texts with high human value (an almost vanishing subset within the much larger set of all possible texts) have characteristic statistical properties in common that can be learned by an LLM. When the LLM produces word after word (with a certain degree of randomness) and always stays within the high-value subset, the resulting text will likely evoke positive subjective experiences in the reader.
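To make this concrete, here is a minimal, purely illustrative sketch of temperature-plus-top-k sampling, one common way such next-word generation is implemented. The toy vocabulary, the scores, and the sample_next helper are all invented for the example, not taken from any actual model:

```python
import math
import random

def sample_next(scores: dict[str, float], temperature: float = 0.8, top_k: int = 3) -> str:
    """Keep only the top_k most plausible continuations (the 'high-value
    subset'), then draw one with temperature-scaled randomness."""
    top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # A lower temperature sharpens the distribution toward the likeliest words.
    weights = [math.exp(s / temperature) for _, s in top]
    r = random.uniform(0, sum(weights))
    for (token, _), w in zip(top, weights):
        r -= w
        if r <= 0:
            return token
    return top[-1][0]  # numerical fallback

# Toy scores a model might assign after the prefix "the cat sat on the".
next_scores = {"mat": 2.0, "sofa": 1.2, "roof": 0.8, "theorem": -3.0}
print(sample_next(next_scores))  # almost always "mat", "sofa", or "roof"
```

The top-k cutoff is what keeps the sampler inside the high-value subset, while the temperature supplies the "certain degree of randomness."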

Now, this is clearly a kind of "mimicry," because the LLM is using objective, publicly available statistical properties of word sequences (derived from the training corpus) instead of subjective, private experiences to compose its sentences.

But I suspect that to a certain extent we humans do the same when we speak or write. At least in my personal introspection, the next word I produce (most of the time) just pops up from the "unconscious," without continuous conscious supervision. Could it not be that we have trained our language capabilities to such an extent that word production becomes an almost automatic process, like riding a bike? This automatic process might even use learned statistical properties of "good" sentences in a way similar to an LLM.

And yet, while our unconscious next-word generator is producing text, this evokes a simultaneous stream of subjective experiences in our mind. We genuinely experience our self-generated language and thus we can evaluate it and guide it in a certain desired direction.

I feel the same way when I am improvising jazz on the piano: the musical phrases just flow from the unconscious and are like semi-random samplings from a large learned repertoire. They are automatically fitted into the momentary context of the improvisation. The mind monitors this flow of objective sound, producing a chain of complex emotions that are continuously evaluated. The mind gives occasional feedback to the lower levels in an attempt to optimize pleasure. Sometimes creative "errors" happen, and I play something that I have never played before but which sounds great. I will then try to remember that and make it part of my repertoire of phrases.

So perhaps, at least superficially, we rely on learned statistical regularities on short timescales, but on longer timescales use our subjective experience to guide the micro-process of next-word or next-sound production.
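If it helps, here is a minimal sketch of that two-timescale picture: a fast proposer draws semi-random candidates from a learned repertoire, and a slower evaluator (standing in for felt, subjective judgment) selects among them and steers the stream. Everything in it, including the repertoire, the propose/evaluate helpers, and the toy novelty score, is an invented stand-in, not a claim about how minds or LLMs actually work:

```python
import random

# A learned repertoire of phrases, standing in for habits built by practice.
REPERTOIRE = ["a rising phrase", "a quiet resolution",
              "an unexpected blue note", "a repeated motif"]

def propose(n: int = 4) -> list[str]:
    """Fast 'unconscious' path: semi-random draws from the repertoire."""
    return random.sample(REPERTOIRE, k=n)

def evaluate(context: str, candidate: str) -> float:
    """Slow 'experiential' path, stubbed as a toy novelty score:
    how many new words the candidate would add to the context."""
    return len(set(candidate.split()) - set(context.split()))

def improvise(context: str, steps: int = 3) -> str:
    for _ in range(steps):
        # The monitoring level keeps whichever fast proposal it judges best,
        # guiding the stream without generating it word by word itself.
        best = max(propose(), key=lambda c: evaluate(context, c))
        context = f"{context} {best},"
    return context

print(improvise("the solo opens with"))
```

The point of the toy is only the division of labor: the statistical regularities live in the fast inner loop, the evaluative guidance in the slower outer one.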

Marco Masi (author)

That's right. I think that we are, in part, like ChatGPT, and yet we feel there is more in us (or that something deeply valuable is missing in LLMs). On the one hand, we are (perhaps for the most part) driven by subconscious layers. This part is more easily reproducible by machines because it is mechanical and learns through repetition, building (mostly unreflective) habits rather than proceeding by true insight: habits that are the repetition of learned relationships acquired through education, the cultural environment, mental and physical exercise, and so on. The most unconscious part is quite mechanical; it relies much more on automatic processes (like riding a bike) and much less on subjective experiences (though riding a bike involves a lot of perception as well, doesn't it?), and it tends to "parrot" learned patterns, including patterns of thought (I guess this is the real function of the brain). Without this subconscious layer we probably couldn't exist: too many things would have to be taken into account consciously to get anything done in this complex world.

This is one of the reasons why LLMs seem so successful in emulating us and can play the language game as if they had subjective experiences: they identify those patterns and complex relational structures in language that already reflect the experiences, and then emulate the cognitive procedures standing behind the "good" texts with experiential content. It's what Harnad calls the "indirect grounding" of LLMs (see https://arxiv.org/abs/2402.02243). The similarity with our cognition arises from the similarity with our sub- and unconscious processes, which do not necessarily require the direct experiential dimension and interaction with the environment; they can be retrieved via inferential statistical processes from a large dataset imprinted in a "qualia-less" archive that was nonetheless created and mediated by conscious agents.

However, we should not forget that besides the sub/unconscious there is another part in us that AI doesn't have: something that comes from the opposite dimension of an "awakened" consciousness. This higher state of consciousness, let's call it the "super-conscious" (I have described it in more detail in Spirit calls Nature), influences our thinking and doing as well. This super-conscious (the source of revelations, intuitions, inspirations, the hearing of great harmonies, creative "errors," etc.) is entirely based on conscious experience (not only on the physical level, but across all psychological dimensions), yet it is usually mistaken for the subconscious (the basal instincts, fears, etc., not only in emotional form but also in the form of a mechanical mental reasoning).

And that's where a lot of confusion sets in. The two are opposites, and flattening them down to the same layer creates a lot of paradoxes. That's why LLMs seem to have a human component and, vice versa, we are surprised at how much we ourselves behave like LLMs. Recently I had ChatGPT review a paper like a referee. Its objections reflected quite well the misunderstandings and lack of reflectiveness of certain human referees: what doesn't fit into our conventional thought patterns is instinctively rejected. But we can become aware of this intellectual instinctiveness (which is also important and plays its practical and evolutionary role… but well…) and distinguish it from the real source that produces flashes of wisdom and creativity. It's when we fail to do so that problems arise.

I'm not a musician, but I guess that musicians rely heavily on both processes: one must learn the mechanics of the instrument and create a habit in the mind and body to use it (the "physical mind"), which then tends to reproduce, as best it can, something according to certain rules and laws; and yet one also opens to higher spheres of harmony, where the music flows through from realms we don't understand but that sometimes rush into our awareness like a river of fresh and crystalline water. On these higher planes there is also an automatism, an instant knowing, but it carries a much stronger sense of certitude and reflects a qualitatively different sense of perfection.

So, ultimately, we are a potpourri of different layers of consciousness intermixed with each other. You may like to read part V of my ten-part essay (I know, it's much too long; I hope you can use the text-to-speech feature in the app), which deals with the planes and parts of the being: https://marcomasi.substack.com/p/the-unexpected-comeback-of-the-conscious-66c

Actually, I have written a paper on the consciousness-meaning-AGI question. I'm planning to send it to a journal in the next few days (still no idea which…) and might upload it to PhilPapers.

Anyway, whatever the technical details (on which we may or may not agree), my main message is not to reduce our human existence to only one layer. We are, so to speak, "multi-layered," and we should always keep this in mind when talking about (necessarily mono-layered) AI. That it reproduces one of our layers well doesn't mean that scaling it up will magically lead to human-like AGI (in fact, it is now becoming clear that scaling LLMs yields only asymptotic growth in reasoning skills). My belief is that all this hype around AI will tell us much more about ourselves: it will tell us what we are, and what AI is not and possibly never will be.


An alternative model of mind along these lines: "The Challenge to AI: Consciousness and Ecological General Intelligence," 2024, De Gruyter (available on Amazon). I think you'd enjoy it…
