* Aaron Wolf <[email protected]> [2025-07-18 00:53]:
> It's unhelpful to limit AI discussions to the most basic understanding of
> LLMs. There are AI models that use reasoning rather than simply LLM
> approaches. LLMs have limitations, and AI development is already passing
> those by going beyond the approach of LLMs.

Reasoning can be seen as a form of preparation — the model is trained
on a lot of data, and it learns to associate ideas, concepts, and
structures in a way that can make its outputs seem "reasonable" or
"logical."

You can enable even non-reasoning models to perform reasoning tasks by
first prompting them to "think" through and explain how they would
approach solving a problem. Then, feeding their "reasoning" back as
input in a subsequent prompt can guide them to arrive at the correct
solution. This two-step approach effectively transforms a model into a
reasoning agent by leveraging its ability to generate explanations and
follow logical steps.
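
To make that two-step approach concrete, here is a minimal sketch in
Python. The complete() function is only a placeholder for whatever
model or endpoint one actually uses; the point is the prompt
structure, nothing else:

def complete(prompt):
    # Placeholder: wire this to whatever model or API you actually use.
    # It only exists so the sketch below is self-contained.
    raise NotImplementedError("connect this to your model of choice")

def solve_with_reasoning(problem):
    # Step 1: ask the model to "think" out loud, without answering yet.
    reasoning = complete(
        "Explain, step by step, how you would approach this problem, "
        "but do not give the final answer yet.\n\nProblem: " + problem)
    # Step 2: feed that "reasoning" back in and ask for the answer.
    return complete(
        "Problem: " + problem + "\n\nProposed reasoning:\n" + reasoning
        + "\n\nFollowing the reasoning above, give the final answer.")

That is really all the trick amounts to: the first call produces the
explanation, the second call conditions on it.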

> Humans do predictable patterns too.

Yes, it's almost amusing — as if it's a big revelation that humans
also follow patterns. Of course we do. That’s why language models can
even exist in the first place — because human language, thought, and
behavior are patterned enough to be modeled.

Human predictability is what makes LLMs possible. So while LLMs didn’t
“discover” it, their success confirms it.

Yes, humans are predictable — that’s not a breakthrough, that’s a
prerequisite for language, culture, and now for large language
models. So pointing out LLMs are predictable and then marveling that
humans are too? That’s not discovery — it’s reverse-engineering the
obvious.

> AIs today are neural-networks, and even though they are a different sort of
> neural net than our brains, the comparison holds up pretty well in lots of
> ways.

That humans have an interconnected network of neurons doesn't put
that biological network in the same category as the artificial neural
network a Large Language Model (LLM) runs on.

You can generalize things, though generalization doesn't prove
anything.

And do you maybe have something practical besides killing your time?

> There's nothing utterly specially magical about how I'm typing this.
> My brain has a whole pattern of putting language together and responding to
> inputs.

That is a generalized statement, of a rather old-fashioned kind, but
it is not a proven thing.

While it may seem that the brain has a pattern of assembling language
and responding to inputs, this view simplifies a much deeper and still
unresolved issue. There is no definitive evidence that memory — or the
full scope of the mind — is located only in the brain. In fact,
studies in consciousness, near-death experiences, and embodied
cognition suggest that mind and memory may extend beyond just neural
activity, and the question remains scientifically open.

And just like that, the idea that biological networks of neurons are
comparable to the artificial networks used in Large
Language Models sinks beneath the surface — oversimplified and
ultimately flawed.

> What we can say about AI is that it is *unlike* humans, it isn't human.

Of course AI isn't human — that's the whole point. It doesn’t think,
it doesn’t remember like we do, and it certainly doesn’t *mean*
anything when it speaks. It’s a prediction engine, not a
person. Comparing it to a human because it produces language is like
comparing a wind-up toy to a living creature — they may move
similarly, but one is alive and the other is just driven by
gears. AI mimics our surface — not our depth.

> But we won't be right in saying that it is anything like a
> simplistic word-by-word prediction algorithm. That is just one
> aspect of many features AIs can have today.

True. Modern LLMs predict text token-by-token, but their context
handling, layered architecture, and training on vast data give them
complex behavior beyond simple guessing.
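
For a concrete picture of what token-by-token prediction means, here
is a minimal sketch in Python. The next_token() function is a
stand-in for the trained model, not a real API; the only point is
that each new token is predicted from the whole context so far:

def next_token(context_tokens):
    # Stand-in for the model: a real LLM turns the context into a
    # probability distribution over its vocabulary and samples (or
    # takes the most likely) next token from it.
    raise NotImplementedError("this is where the trained model goes")

def generate(prompt_tokens, max_new_tokens=100, stop_token="<eos>"):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        token = next_token(tokens)  # every step sees the full context so far
        if token == stop_token:
            break
        tokens.append(token)
    return tokens

The loop itself is trivial; everything people find impressive lives
inside that single next_token() call.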

> And unlike Eliza, there's no simple program, there's an *evolved*
> neural net

Mostly true. LLMs are large, complex models trained via massive data
and optimization. They are not explicitly programmed with rules like
Eliza but are “grown” through training. However, calling it “evolved”
is metaphorical — it’s trained, not biologically evolved.

> that we can't really inspect in human-programming terms.

Correct. Neural nets are often called “black boxes” due to their
complexity and opacity.

> We *raise* AI, *parent* it, *grow* it, rather than program it.

This is metaphorical but meaningful. Training AI is more about data
and optimization than direct programming.

> Just don't take this to mean that it's alive or conscious, we have
> no reason to say that.

Absolutely true. There’s no scientific evidence for AI consciousness.

Did anyone on this list claim there is?

> But it's not like other programs, it's categorically different.

Partially true — modern AI programs differ substantially from
traditional, rule-based software, but they’re still software running
on hardware.

-- 
Jean Louis

