On 12/26/2024 8:35 PM, PGC wrote:
In short, the reason LLM outputs feel like “type 1 reasoning on steroids” is that these models have memorized so many examples that their combined intuition extends across nearly all known textual domains. But when a problem truly demands formal reasoning steps absent from their training data, LLMs lack a real “type 2” counterpart—no robust self-critique, no internal program writing, and no persistent memory to refine their logic. We can therefore liken them to formidable intuition machines without the same embedded capacity for system 2, top-down reasoning or architectural self-modification that we see in real human skill acquisition.

Of course, like humans, an AI based on an LLM could have a formal reasoning program (like Prolog) and a math program (like Maxima) attached. But then the trick will be knowing when its type 1 reasoning isn't accurate enough and it should switch to type 2, or when type 2 isn't fast enough and it should switch back to type 1.
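The switching problem described above can be sketched as a confidence-gated dispatcher. This is only a toy illustration: the "memorized times table" heuristic, the confidence scores, and the stand-in for an external solver like Maxima are all invented for the example, since a real system would need a calibrated confidence signal from the model itself.

```python
import time

def solve_fast(problem):
    """'Type 1': a cheap intuitive guess plus a self-reported confidence.
    Hypothetical heuristic: small products are 'memorized' (high confidence),
    large ones are only roughly estimated (low confidence)."""
    a, b = problem
    if a <= 12 and b <= 12:            # pretend times-table facts are memorized
        return a * b, 0.99
    return round(a * b, -2), 0.40      # rough estimate, low confidence

def solve_exact(problem, budget_s=1.0):
    """'Type 2': slow but exact computation under a time budget.
    The multiplication here stands in for a call out to a formal tool
    (e.g. Maxima); returning None signals 'too slow, fall back to type 1'."""
    start = time.monotonic()
    a, b = problem
    result = a * b
    if time.monotonic() - start > budget_s:
        return None
    return result

def answer(problem, confidence_threshold=0.9):
    """Dispatch: keep the fast guess unless its confidence is too low,
    and keep the exact answer unless it blew its time budget."""
    guess, conf = solve_fast(problem)
    if conf >= confidence_threshold:
        return guess
    exact = solve_exact(problem)
    return exact if exact is not None else guess
```

The interesting part is not the arithmetic but the two thresholds: the confidence cutoff decides when type 1 is "not accurate enough," and the time budget decides when type 2 is "not fast enough," which is exactly the pair of judgments the paragraph above says the system would need to make.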

Brent

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion visit https://groups.google.com/d/msgid/everything-list/33b6a3d3-cdd2-488d-ab02-dec8bc54f29b%40gmail.com.
