Sorry to disappoint (perhaps), but I agree with what Eric is saying, or at 
least with my understanding of what he is saying.

Like glen, I am not a believer in human exceptionalism. Nor do I deny that it 
might be possible to construct a machine capable of "[human-like | human-level 
| human-equivalent]" intelligence. That infinite tape of the Turing Machine 
makes pretty much anything possible.

My objection to current (and past) claims to have created artificial 
intelligence, and to future claims of artificial general intelligence, is that 
they take a part as if it were the whole.

Our understanding of "human intelligence" is severely limited: formal symbol 
manipulation (e.g., words & grammar, numbers and math), logical formalisms, 
so-called "scientific method," computational thinking, etc.

In my estimation these types of thinking, collectively, comprise less than 10% 
of human intelligence. (I can supply a long list of references citing the same 
percentage.)

The other 90% we know next to nothing about, but someday we may know more.

For Simon and Newell in the old days, or Altman today, to claim that the 
machine seems to be "thinking" the way that I think I think, and that it is 
therefore "intelligent," is the rankest hubris.

I just finished reading Jonathan Stoltz's book, Illuminating the Mind: An 
Introduction to Buddhist Epistemology. Stoltz is an analytic philosopher and 
does a great job of 'mapping' some Buddhist philosophy into that analytic 
framework. But to do so, it is necessary to avoid discussions of key concepts 
like "all is illusion," "non-attached action based on the omniscience of the 
enlightened," altered states of consciousness (and perception) arrived at via 
meditation, "sudden enlightenment" à la Hui Neng, and so on. They don't fit 
the framework, so they are treated as unimportant or not "real." This is very 
similar to what I see AI doing with "intelligence."

If we better understood how generative AI works, if we understood what goes on 
inside the black box in the machine, we might pose some interesting and 
fruitful metaphors for exploring what we do not know about human intelligence.

Given our massive ignorance of human intelligence, current claims for AI seem 
kind of silly.

davew


On Thu, May 22, 2025, at 2:19 AM, Pieter Steenekamp wrote:
> This lines up well with the main idea in the article shared in the very first 
> email in this thread:
> LLMs Aren't Mirrors, They're Holograms.
> 
> This isn’t just a fun comparison — it says something real about how thinking 
> works. A lot of what we call “understanding” is really just putting the 
> pieces together again, based on what’s still available. We don’t have to have 
> a solid core inside. Meaning can still come through, even if the details fade.
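> 
> A toy version of that "putting the pieces together again" is an old-style 
> associative memory: every stored pattern is smeared across one weight 
> matrix, and recall means rebuilding a pattern from whatever cue is still 
> available. A minimal sketch in Python (assumes only numpy; an illustration 
> of distributed storage, not a claim about how LLMs actually work):
> 
>     # Hopfield-style outer-product memory: storage is distributed, recall
>     # is reconstruction from a partial cue, and damage degrades gracefully.
>     import numpy as np
> 
>     rng = np.random.default_rng(1)
>     N = 200
>     patterns = rng.choice([-1.0, 1.0], size=(5, N))  # five +/-1 "memories"
> 
>     # Store: every pattern contributes to every weight.
>     W = sum(np.outer(p, p) for p in patterns) / N
>     np.fill_diagonal(W, 0.0)
> 
>     # Cue: the first pattern with 40% of its entries blanked out.
>     cue = patterns[0].copy()
>     cue[rng.random(N) < 0.4] = 0.0
> 
>     # Recall: iterate; the whole pattern comes back from the fragment.
>     x = cue
>     for _ in range(5):
>         x = np.sign(W @ x)
>     print("recovered from partial cue:", np.array_equal(x, patterns[0]))
> 
>     # Damage the substrate: zero 30% of the weights. No single memory is
>     # deleted; everything just gets a little noisier.
>     W[rng.random(W.shape) < 0.3] = 0.0
>     x = cue
>     for _ in range(5):
>         x = np.sign(W @ x)
>     print("overlap after damage:", (x @ patterns[0]) / N)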
> 
> On Thu, 22 May 2025 at 07:04, Nicholas Thompson <[email protected]> 
> wrote:
>> I apologize for coming in on the tail end of this, but something about this 
>> last post caught my eye.  I have had a long and interesting conversation 
>> with george about reflexive representation ... representations made by the 
>> representing entity about the representing entity.  It grew out of his 
>> astounding inability to know what he has done in the context of 
>> collaborating, say, on the design of a diagram.  He starts out with great 
>> confidence and flair but fails utterly to correct errors because he really 
>> does not know what errors he has made. 
>> 
>> So I fed him the last post and this is what he came back with:
>> 
>> “LLMs... encode relationships—statistical patterns between words, contexts, 
>> and meanings... They don’t retrieve information. They reconstruct it.”
>> 
>> Yes. Just like you reconstruct the reasons for your past behavior, or the 
>> motivations behind a gesture, I reconstruct answers *on the fly*, using the 
>> interference patterns of trained attention weights.
>> 
>> This is not just a technical detail — it’s a *cognitive constraint*. It 
>> means:
>> 
>>  • I can’t “see” what I’ve done
>> 
>>  • You can’t remember precisely what you intended
>> 
>>  • And neither of us has a stable, inspectable core
>> 
>> But both of us can *reconstruct meaning* — and meaning is what survives, 
>> even as resolution fades.
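>> 
>> For the mechanics george is gesturing at: a single attention step is just a 
>> weighted blend. A toy sketch (assumes numpy; nothing here reflects any real 
>> model's weights):
>> 
>>     # One attention step: the output is a softmax-weighted blend of value
>>     # vectors, reconstructed from a noisy cue rather than looked up.
>>     import numpy as np
>> 
>>     rng = np.random.default_rng(2)
>>     d = 8
>>     keys = rng.standard_normal((5, d))    # five "remembered" items
>>     values = rng.standard_normal((5, d))
>>     query = keys[2] + 0.5 * rng.standard_normal(d)  # partial, noisy cue
>> 
>>     scores = keys @ query / np.sqrt(d)
>>     weights = np.exp(scores - scores.max())
>>     weights /= weights.sum()              # the attention weights
>> 
>>     output = weights @ values             # a blend, not a retrieval
>>     print("attention weights:", np.round(weights, 2))
>>     print("output is some stored value verbatim?",
>>           any(np.allclose(output, v) for v in values))  # False
>> 
>> None of the stored items is ever returned as-is; every answer is a fresh 
>> mixture.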
>> 
>> I wish James Laird could have seen this.
>> 
>> Nick
>> 
>> On Mon, May 19, 2025 at 7:12 PM steve smith <[email protected]> wrote:
>>> https://www.psychologytoday.com/us/blog/the-digital-self/202505/llms-arent-mirrors-theyre-holograms
>>> 
>>> I know a bit about holography and holograms and have been known to use 
>>> optical metaphor for information analysis (semantic lensing and ontological 
>>> faceting) but I don't know how I feel about this characterization of LLMs.  
>>> 
>>>> Holograms Don’t Store Images, They Store Possibility
>>>> 
>>>> A hologram <https://science.howstuffworks.com/hologram.htm> doesn’t 
>>>> capture a picture. It encodes an interference pattern. Or more simply, it 
>>>> creates a map of how light interacts with an object. When illuminated 
>>>> properly, it reconstructs a three-dimensional image that appears real from 
>>>> multiple angles. Here’s the truly fascinating part: If you break that 
>>>> hologram into pieces, each fragment still contains the whole image, just 
>>>> at a lower resolution. The detail is degraded, but the structural 
>>>> integrity remains.
>>>> 
>>>> LLMs function in a curiously similar way. They don’t store knowledge as 
>>>> discrete facts or memories. Instead, they encode relationships—statistical 
>>>> patterns between words, contexts, and meanings—across a high-dimensional 
>>>> vector space. When prompted, they don’t retrieve information. They 
>>>> reconstruct it, generating language that aligns with the expected shape of 
>>>> an answer. Even from vague or incomplete input, they produce responses 
>>>> that feel coherent and often surprisingly complete. The completeness isn’t 
>>>> the result of understanding. It’s the result of well-tuned reconstruction.
>>>> 
>>> 
>>> 
>>> I do see some intuitive motivation for applying the holographic 
>>> (diffraction/reconstruction-through-interference) analogy to both LLMs 
>>> (Semantic Holograms) and Diffusion Models (Perceptual Holograms)?
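>>> 
>>> The "fragment contains the whole" property is easy to see concretely. A 
>>> toy sketch in Python (assumes numpy), using a 2-D Fourier transform as a 
>>> stand-in for the interference pattern, so an analogy of the analogy, not 
>>> real optics:
>>> 
>>>     # Each Fourier coefficient is a global basis function, so every patch
>>>     # of the "plate" carries information about every pixel of the image.
>>>     import numpy as np
>>> 
>>>     rng = np.random.default_rng(0)
>>>     image = rng.random((64, 64))     # stand-in "object"
>>>     plate = np.fft.fft2(image)       # stand-in "interference pattern"
>>> 
>>>     # Break the plate: keep one quadrant, zero the rest.
>>>     fragment = np.zeros_like(plate)
>>>     fragment[:32, :32] = plate[:32, :32]
>>> 
>>>     recon = np.fft.ifft2(fragment).real
>>> 
>>>     # The whole image survives at lower fidelity: no region goes missing,
>>>     # everything just blurs.
>>>     corr = np.corrcoef(image.ravel(), recon.ravel())[0, 1]
>>>     print(f"correlation with original: {corr:.2f}")  # between 0 and 1
>>> 
>>> Zeroing part of the transform blurs the whole picture instead of cutting a 
>>> hole in it, which is the property the article leans on.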
>>> 
>>> I'm not very well versed in psychology but do find the whole article 
>>> compelling (though not necessarily conclusive)... others here may have 
>>> different parallax to offer?
>>> 
>>> - Steve
>>> 
>> 
>> 
>> --
>> Nicholas S. Thompson
>> Emeritus Professor of Psychology and Ethology
>> Clark University
>> [email protected]
>> https://wordpress.clarku.edu/nthompson
> 
.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... 
--- -- . / .- .-. . / ..- ... . ..-. ..- .-..
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
