What does "that" refer to in your message - what exactly is the delusion, and 
why is it dangerous? 
Possibly that the physical robots will come after the creative intelligence? That 
has already happened: generative media is vastly superhuman at most of what 
most humans consider, recognize, or accept as "art". The robots progress fast, 
but the creative side came first, and the robots will be more expensive than 
$0.10 per 1M tokens, or than running a model locally. 
For the agility and coordination part, when fast motions are needed, the best 
current robots don't use LLMs but MPC (model predictive control), and the 
"LLMs" for robots are not "text models". Transformers etc. are general 
techniques; they are "AGI" as a modality-agnostic 
compression-prediction-generation technology, so they are suited to all 
modalities.

Whether it will "replace" humans or "jobs" also depends on political and social 
decisions and dynamics: the technology could exist but be left as a toy, as 
happened with some technologies in antiquity.

 Also, these art forms may become obsolete; they are already "done" and 
exhausted when looked at "objectively". They mostly were so even without 
generative AI, but the viewers couldn't or can't notice, don't understand it, 
or don't care (most Hollywood movies are trivial, ridiculous, repetitive, and 
predictable stamps, over and over; most stories, most songs etc. were already 
"mechanically generated" "by HAAI" (human-assisted AI, LOL), just implemented 
with lower tech and a bit more slowly than now). Most of the creative people in 
all arts are not employed in the arts where they have talent (in particular, 
how many actors are "in line" in Hollywood, working as a waitress, a taxi 
driver, etc.? There is a limited number of well-paid jobs, "slots", that's been 
so forever*, and "humanity" didn't care). Now the professions of the 
millionaire stars seem threatened, and oh, how much we care about "human 
creativity", LOL - they'll have to sell their yachts to survive. 

* A saying in Bulgarian: "a musician can't support a family" etc. 

...
I myself agree about the Turing test as it is defined (it is about cheating and 
is ill-defined; actually Minsky, in an interview a few years before his death, 
commented that the test was a joke, never intended to measure true 
intelligence). LLMs are far ahead of the text-only GPT-2 or GPT-3, though.

Reasoning in *strict* logical domains and math is better done with actual 
virtual universe simulations (see Theory of Universe and Mind), the kinds of 
logic being forms of virtual universe simulations as well; it doesn't need LLMs 
but proper definitions, mappings, and growing grounded multimodal structures, 
which actually can be built *manually*, without "learning" over the whole 
Internet. I agree that many "reasoning" benchmarks for LLMs are silly (Mary had 
an apple and she gave it to John...). (...) 

>The current ML algorithms are better suited for physical skills than they are 
>language modeling for the simple reason

There are different kinds of ML algorithms; and there is also MPC.

In a sufficiently abstract and proper analysis, data and instructions are in 
the same domain: "declarative" data are also instructions, and vice versa. 
"Data-driven truth" is also "procedure-driven", as anything "driven" has 
actuators and Will, change; it's not just data. Any data that is read or 
written somehow involves addressing, a change; it causes change somewhere else, 
directs some executing machinery, etc. 
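
A toy illustration of the point (my own hypothetical names, in Python): the same list of tuples is inert "data" until a reader with the matching structure executes it as instructions; conversely, executable code can be read back as plain bytes.

```python
# A tiny stack-machine "program" that is just a list of tuples -
# pure data until an interpreter with the right structure reads it.

program = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]

def run(prog):
    """Read the 'data' as instructions driving a stack machine."""
    stack = []
    for op, arg in prog:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]

print(run(program))                  # -> 20, i.e. (2 + 3) * 4

# And the reverse: executable code inspected as plain data (bytes).
print(type(run.__code__.co_code))    # -> <class 'bytes'>
```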

I agree about the compression-prediction framework; however, "lossless" depends 
on the definition and on the actual resolution of causality, control, and span 
(and the available resources). However, the LLMs and transformers also do the 
compression-prediction part, and that line of research has made progress in 
improving the compression (smaller models achieving lower perplexity, using 
less, properly prepared data; compression (reduction) of the number of training 
iterations, etc.).
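
The compression-prediction link can be shown in a few lines (a toy sketch of the general principle, not an LLM): an idealized arithmetic coder spends -log2 p(symbol) bits per symbol, so a model with better predictions - here a bigram vs. a unigram model on a toy string - compresses the same text into far fewer bits; LLM perplexity is the same quantity per token.

```python
import math
from collections import Counter

# Ideal code length = sum of -log2 p(symbol): better prediction,
# fewer bits. Toy comparison of a unigram vs. a bigram model.

text = "abababababababab"

def bits_unigram(s):
    freq = Counter(s)
    return sum(-math.log2(freq[c] / len(s)) for c in s)

def bits_bigram(s):
    pairs = Counter(zip(s, s[1:]))
    ctx = Counter(s[:-1])
    # First symbol priced with the unigram model; the rest with P(c | prev).
    total = -math.log2(Counter(s)[s[0]] / len(s))
    for prev, c in zip(s, s[1:]):
        total += -math.log2(pairs[(prev, c)] / ctx[prev])
    return total

print(bits_unigram(text), bits_bigram(text))   # -> 16.0 1.0
```

The bigram model predicts every next character perfectly on this string, so the whole text costs one bit (the first symbol) instead of sixteen - "lower perplexity" and "better compression" are the same statement.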

The exact benchmarks may vary, and "mindless" flat compression is not enough; 
it needs a proper structure and mappings, otherwise you need an interpreter 
with the required structure, i.e. a mapping to specific states, which are a 
specific kind of information/representation in the low-level virtual universe 
(as in Theory of Universe and Mind). It is so with a text-only or unimodal 
LLM: it generates meaningful or plausible output from an input or prompt, say 
text, but it is the evaluator - a human - who figures out what the content 
refers to in the other contexts/modalities/reality and who does something with 
it. However, multimodal models with proper actuators and sensors, "tool use", 
and agentic multi-step systems start to make these connections with the 
current LLMs as well; they are not just LLMs, and an "AI system" doesn't have 
to be a monolithic single LLM - it could be any combination of anything.

Some notes discussing "why lossless compression", regarding the evaluation of 
"musical beauty", are in a paper on music and compression published two days 
ago, p. 23:
https://twenkid.com/agi/Calculus-of-Art-I-Intelligence-Music-Beauty-2012-2025-Arnaudov-10-6-2025.pdf

...

*The Sacred Computer:* Visit and participate in the online year-long virtual 
conference Thinking Machines 2025/SIGI-2025:
https://github.com/twenkid/SIGI-2025

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9ff7a1b83025001c-Ma3aa3415b36142a30c16a7bf