Discussion about this post

Claus Metzner:

Thank you for this great essay! I think it is indeed very important not to be blinded by the LLMs' enormous mastery of language and not to naively ascribe to them inner subjective states that they simply don't have.

Here are some more thoughts that came to me while I read your article:

Written language is an imperfect but still amazingly effective way to invoke complex subjective experiences in the mind of the reader, considering that it consists merely of a sequence of discrete symbols. The writer tries to evoke very specific experiences in the reader's mind, and so he or she optimizes the sequence of symbols accordingly.

Here, a major point is that, in normal human communication, this physically mediated transmission of experiential content from mind to mind involves subjective value judgments on both ends: as genuine experiencers, we know which sentences are good (poetic, surprising, funny, clear, to the point, ...) and which are bad (boring, nonsensical, unclear, out of context, ...).

One might therefore think that a machine without subjective experience, like an LLM, cannot participate successfully in this language game because it cannot judge the experiential value of the words it reads or writes.

But it somehow happens that all "good" texts with high human value—an almost vanishing subset within the much larger set of all possible texts—have characteristic statistical properties in common that can be learned by an LLM. When the LLM is producing word after word (with a certain degree of randomness) and always stays within the high-value subset, the produced text will likely evoke positive subjective experiences in the reader.
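That word-by-word process with "a certain degree of randomness" can be made concrete with a minimal sketch of temperature sampling. Everything here is illustrative: the candidate tokens and their log-probabilities are made up, and the function name is hypothetical, not any particular LLM's API. The point is only that lowering the temperature concentrates probability mass on the likeliest continuations, which is one way a sampler "stays within the high-value subset".

```python
import math
import random

def sample_next_token(logprobs, temperature=0.8, rng=None):
    """Sample one token from a toy next-token distribution.

    `logprobs` maps candidate tokens to hypothetical log-probabilities.
    A temperature below 1 sharpens the distribution, so samples cluster
    on the likeliest continuations; a higher temperature flattens it.
    """
    rng = rng or random.Random()
    # Rescale log-probabilities by the temperature, then softmax them.
    scaled = {t: lp / temperature for t, lp in logprobs.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = {t: math.exp(s - peak) for t, s in scaled.items()}
    total = sum(weights.values())
    # Draw one token in proportion to its weight.
    r = rng.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

# Hypothetical distribution after some prefix: plausible words get high
# log-probability, an out-of-context word gets a very low one.
candidates = {"beautiful": -0.5, "strange": -1.5, "purple": -6.0}
print(sample_next_token(candidates, temperature=0.7, rng=random.Random(0)))
```

At a very low temperature the sampler effectively always picks the top candidate; at moderate temperatures it occasionally picks "strange" but almost never "purple", which is the statistical sense in which generated text tends to stay inside the tiny high-value region.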

Now, this is clearly a kind of "mimicry," because the LLM is using objective, publicly available statistical properties of word sequences (derived from the training corpus) instead of subjective, private experiences to compose its sentences.

But I suspect that to a certain extent we humans do the same when we speak or write. At least in my personal introspection, the next word I produce (most of the time) just pops up from the "unconscious," without continuous conscious supervision. Could it not be that we have trained our language capabilities to such an extent that it becomes almost an automatic process, like riding a bike? This automatic process might even use learned statistical properties of 'good' sentences in a way similar to an LLM.

And yet, while our unconscious next-word generator is producing text, this evokes a simultaneous stream of subjective experiences in our mind. We genuinely experience our self-generated language and thus we can evaluate it and guide it in a certain desired direction.

I feel the same way when I am improvising jazz on the piano: the musical phrases just flow from the unconscious and are like semi-random samplings from a large learned repertoire. They are automatically fitted into the momentary context of the improvisation. The mind monitors this flow of objective sound, producing a chain of complex emotions that are continuously evaluated. The mind gives occasional feedback to the lower levels in an attempt to optimize pleasure. Sometimes creative "errors" happen, and I play something that I have never played before but which sounds great. I will then try to remember that and make it part of my repertoire of phrases.

So, at least superficially, we may rely on learned statistical regularities on short timescales, while on longer timescales we use our subjective experience to guide the micro-process of next-word or next-sound production.

Steve Robbins:

An alternative model of mind along these lines: "The Challenge to AI: Consciousness and Ecological General Intelligence" (De Gruyter, 2024), available on Amazon. I think you'd enjoy it.
