@John 10x for reminding me of this great summary of theories and schools. (IMO 
many of them are more similar than their separation suggests, as my bible, The 
Prophets of the Thinking Machines ..., suggests about other things as well.)

Re the LLMs and their consciousness, there is another problem which is not well 
recognized; it is addressed in a letter that I wrote to one of the founding 
researchers of epigenetic robotics and cognitive semantics, pp. 87-89:
 
https://twenkid.com/agi/Stack-Theory-is-Fork-of-Theory-of-Universe-and-Mind-13-9-2025.pdf

The beginning:

"""
16.8.2025; see also "Man and Thinking Machine", 2001, the example with the 
simplest computer that outputs text, "Letters between the 18-years-old…" and 
the novel "The Truth", regarding the concept of the "soul" (= "consciousness", 
sentience etc.) and why, when and how humans attribute it to other beings, 
entities and phenomena.


[
The following is a quote from a letter commenting on the paper:
* The Intertwining of Bodily Experience and Language: The Continued Relevance 
of Merleau-Ponty, Jordan Zlatev, pp. 41-63, https://doi.org/10.4000/hel.3373 
https://journals.openedition.org/hel/3373?lang=en which mentions the Google 
engineer who claimed in 2022 that their LLM LaMDA was conscious and who was 
concerned about it, or "her", etc.


* Google engineer says Lamda AI system may have its own feelings, 13 June 2022, 
Chris Vallance, https://www.bbc.com/news/technology-61784011 

*Todor:* "Back to the LLMs, I don't think [that], as they are now, [they] are 
"conscious" (sentient) and have intentions in the subjective sense of humans. 
There is also a deeper problem, which I guess some of these engineers can't 
understand and don't realize. IMO a "thing" doesn't have to be an LLM in order 
to fool someone, if she wanted to be fooled; likewise, when someone doesn't 
want to be fooled, he ignores all signs and otherwise accepted evidence for 
somebody's "soul", "consciousness" or whatever - the dehumanization I mention 
below.
The evaluator-observer *decides* whether to attach a label of consciousness to 
the "thing", item, "object" that she observes.
The other problem is determining *what exactly* the LLM is and why it is, why 
that is the border of its definition (where it starts and ends), similarly to 
the quote about human consciousness and its relation to the brain. It is a 
general problem; related ones are Friston's Markov blankets and the choice of 
a definite scale. In Active Inference, and in my "Theory of Universe and 
Mind", the solution is that there is not a single scale: the principles should 
be valid at all, or at multiple, scales (...) """

Jordan discussed a related issue in 2001, concerning meaning, "value" and 
self-preservation (preceding Friston's Free Energy Principle); it is cited in 
another appendix of The Prophets at SIGI-2025:
https://twenkid.com/agi/Arnaudov-Is-Mortal-Computation-Required-For-Thinking-Machines-17-4-2025.pdf

Jordan Zlatev, A Hierarchy of Meaning Systems Based on Value, 10.2001 
https://www.researchgate.net/publication/2526440_A_Hierarchy_of_Meaning_Systems_Based_on_Value
  

However, in this other work I reply with the thoughts of a thinking machine 
from 2001, which gives the opposite answer (similar in spirit to Matt's): the 
machine may challenge humans' claims in the same way - they can never feel 
what the machine does or "doesn't" feel; humans have no spirit, soul, 
"consciousness" or whatever, they are just atoms, molecules, electrons or 
whatever other "just-thing". (...)

In both cases, and for both sides, the question of WHAT it actually is - both 
"consciousness" and the entity that is evaluated - remains open. What is a 
human? Where is its boundary?

@Darko: "the illusion of consciousness" - what it is like to have an illusion 
of consciousness (to have it subjectively). If you can have "illusions" don't 
you need to be conscious already. Indeed, the "illusionism" school of 
consciousness is ridiculous. The consciousness is an illusion - to whom? To an 
entity which has no consciousness.

The LLMs, and whatever systems can be "animated" in the mind of the 
evaluator-observer (and LLMs are whatever and wherever they are, and "how-ever" 
they are factorized and segmented - see "Stack theory is yet another..."), 
receive the consciousness of the evaluator-observer who interacts with them.

...
Todor Arnaudov - Tosh
*The Sacred Computer* - Thinking Machines, Creativity and Human Development
https://github.com/Twenkid/SIGI-2025 - Join the yearlong virtual conference 
"Self-Improving General Intelligence 2025", or the next one, SIGI-2026, which 
begins on 1.1.2026.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8f6c809749765377-M08caba4cb6ac74fa8148ee73