You see what I'm getting at. It’s true that from a purely outcome-based 
perspective—either a problem is solved, or it isn’t—it can seem irrelevant 
whether an AI is “really” reasoning or simply following patterns. That much 
I'll gladly concede to John's argument. If Einstein’s “real reasoning” and 
an AI’s “simulated reasoning” both yield a correct solution, the difference 
might appear purely philosophical. However, when we look at how that 
solution is derived—whether through a coherent, reusable framework or 
through brute-force pattern assembly within a large but narrowly defined 
training distribution—distinctions emerge that matter. Achievements like 
superhuman board-game play and accurate protein folding often leverage 
enormous coverage in training or simulation data. That coverage can conceal 
limited adaptability in the face of radically unfamiliar inputs: even if a 
large model appears to have used “higher-order abstraction,” it may not 
have produced a stable or generalizable theory—only an ad hoc solution that 
might fail under new conditions.
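To make the coverage-versus-generalization point concrete, here's a 
deliberately crude toy sketch of my own (nothing to do with how any actual 
AI system works): a pure "pattern matcher" that memorizes a function on a 
densely sampled interval looks flawless inside that interval and fails 
completely outside it, because no amount of coverage encodes the general 
rule.

```python
import math

# Memorize (x, sin(x)) pairs densely sampled on [0, pi] — the "training
# distribution". The matcher stores examples; it has no theory of sine.
train = [(i / 100.0, math.sin(i / 100.0)) for i in range(315)]

def nearest_neighbor_predict(x):
    # "Reasoning" by pattern lookup: return the output of the closest
    # memorized example.
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# In-distribution: dense coverage makes the pattern matcher look brilliant.
in_dist_error = abs(nearest_neighbor_predict(1.0) - math.sin(1.0))

# Out-of-distribution: the same method fails badly at x = 5.0, far outside
# the memorized interval — it can only echo its nearest training point.
out_dist_error = abs(nearest_neighbor_predict(5.0) - math.sin(5.0))

print(f"in-distribution error:     {in_dist_error:.4f}")
print(f"out-of-distribution error: {out_dist_error:.4f}")
```

The in-distribution error is essentially zero while the out-of-distribution 
error is near 1.0—the memorizer's apparent mastery says nothing about what 
happens beyond its coverage.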

Ultimately, the decision to rely on a powerful AI system or on traditional 
expertise depends on context and stakes. In math competitions or research 
tasks—situations where large training sets and repeated fine-tuning capture 
nearly all plausible conditions—state-of-the-art models shine, even if 
they’re performing an advanced form of pattern matching. But in high-risk 
domains like brain surgery, we risk encountering unforeseen circumstances 
well beyond any training distribution. The inability of current AIs to 
robustly handle such scenarios—unless we invest massive resources in 
real-time fine-tuning and developer supervision—makes these systems less 
practical and can slow down urgent decision-making. In those cases, 
trusting an AI’s specialized “intelligence” is inefficient compared to a 
competent human team prepared for the unexpected. The issue is not so much 
whether the AI’s process resembles “real understanding,” but whether the 
method is general enough to accommodate time-sensitive, unpredictable 
demands beyond its prior training.

And it's not just fancy brain surgery. The range of domains in which this 
applies, and the number of associated problems, is vast. Off the top of my 
head, other domains where this kind of robust generalization is critical:

- Self-driving vehicles in unmapped terrain or extreme weather: unexpected 
obstacles and conditions require reliable out-of-distribution reasoning 
(and I don't care whether it's better than humans on average; every life 
counts, so no need to go there).
- Disaster-response robotics: robots exploring collapsed buildings or 
handling hazardous materials must cope with chaotic, unfamiliar 
environments.
- Financial trading during market crises: extreme market shifts can 
quickly invalidate learned patterns, and "intuitive" AI decisions without 
robust abstraction might lose billions.
- Nuclear power plant operations: rare emergency scenarios not seen in 
training could lead to catastrophic outcomes without genuine adaptability.
- The unfortunately perennial military and defense domains: systems 
encountering enemy strategies or technologies absent from their training 
data will need human-level adaptability and creative problem-solving, 
especially as opponents increasingly deploy the same systems.

There, I'm sure, the distinction is even more relevant. But it's the 
Holidays, so please excuse the rough nature of this reply.

On Wednesday, December 25, 2024 at 6:58:16 AM UTC+8 Brent Meeker wrote:

> It seems that the question revolves around whether these very smart LLM's 
> solve problems by developing a theory of the problem, something they could 
> explain as a generic method, or does the solution come with no such 
> generalized explanation...what we would call intuition in a human being.
>
> Brent
>
>
>
> On 12/24/2024 11:12 AM, John Clark wrote:
>
> On Tue, Dec 24, 2024 at 12:13 PM PGC <[email protected]> wrote:
>  
>
>> *> simulating what appears to be reasoning or problem-solving*.
>>
>
>
> *Simulating?  If Einstein was only doing "simulated" thinking when he came 
> up with General Relativity and not "real" thinking then how would things be 
> any different? It seems to me that a problem has either been solved or it 
> has not been, and simulated versus real has nothing to do with it.   *
>
>> *> For instance, an LLM solving a riddle or answering a complex question 
>> does so by leveraging patterns that mimic logical steps or dependencies, 
>> even though it lacks true understanding*
>>
>
> *It's not clear to me how you know "it lacks true understanding".  If an 
> AI can answer a question that you cannot, how can you have "true 
> understanding" of it but the AI does not?  Did Einstein have true 
> understanding of general relativity or only a simulated understanding?  *
>
> * > It feels different and "more intelligent" because this functional 
>> selection imparts a structured response that aligns with human expectations 
>> of reasoning. *
>
>
> *If the vast majority of human beings think that X is more intelligent 
> than Y then the simplest and most obvious explanation for that is that X is 
> more intelligent than Y. And I don't understand how you could say that an 
> AI it's not intelligent it's just behaving intelligently because you don't 
> like the way its mind operates, the trouble is you don't have a deep 
> understanding of how your own mind operates and even the people at OpenAI 
> only have a hazy understanding of how O3 works even though they built it.  *
>  
>
>> *this is far from genuine intelligence or reasoning. LLMs are bound by 
>> their probabilistic nature and lack the ability to generalize beyond their 
>> training data,*
>>
>
> *200 million protein structures were certainly not in any AI's training 
> data, nor were superhumanly brilliant games of Chess and GO. The same thing 
> could be said about the  Epic AI Frontier Math Test problems and the ARC 
> benchmark.*
>
> *> or generate higher-order abstractions.*
>
>
> *I do not believe it's possible to solve ANY of the problems on the Epic 
> AI Frontier Math Test, problems that even world-class mathematicians find 
> to be very difficult, without the ability to generate higher order 
> abstractions. But if I'm wrong about that then I would be astonished to 
> learn that higher order abstractions are simply not important because the 
> fact remains that, regardless of the method, the problem was solved.  *
>  *John K Clark    See what's on my new list at  Extropolis 
> <https://groups.google.com/g/extropolis>*  
>  
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/e3f2d346-d35a-44c0-a5c7-da6dc64e34a9n%40googlegroups.com.
