On 7/17/25 15:02, Jean Louis wrote:
> * Lars Noodén via libreplanet-discuss <libreplanet-[email protected]> [2025-07-16 20:11]:
>> On 7/16/25 13:30, Jean Louis wrote:
>>> What you call "AI" is just new technology powered with knowledge
>>> that gives us good outcomes, it is new computing age, and not
>>> "intelligent" by any means. It is just computer and software. So
>>> let's not give it too much of the importance.

>> There is no knowledge involved, just statistical probabilities in
>> those "plausible sentence generators" or "stochastical parrots".

> Come on — "no knowledge involved" is quite the claim. Sure, LLMs
> don’t *understand* like humans do, but dismissing them as just
> “stochastic parrots” ignores what they’re actually doing:
> [snip]

Way to intentionally ignore how LLMs work. Knowledge is an awareness of facts in a particular context. LLMs are not aware on any level and only have a statistical relation to context. Strings are not facts. The phrases "Plausible Sentence Generators" and "Stochastic Parrots" hit the nail on the head, though maybe the latter does not give real parrots their due.

LLMs are probability engines, and the output is just a random, but grammatically correct, walk through a pile of words. There are no facts involved in the output. There may have been some facts used in the training input, but that is largely uncontrolled. LLMs can shorten, to a limited extent. They can mix and match pieces. But they can't summarize. The grammatical correctness fools a lot of people, and thus the problem here is that people interpret LLM output as facts through wishful thinking.
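To make the point concrete, here is a toy sketch in Python. It is a bigram-level caricature, not a real LLM; the corpus and the code are my own invented illustration. But the principle scales: the output is sampled from co-occurrence statistics, not derived from facts.

    import random
    from collections import defaultdict

    # The model's entire "knowledge": a table of which word has
    # followed which word in the training text. No facts, no
    # awareness, only co-occurrence counts.
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start="the", length=12):
        # A random walk: each step samples the next word from the
        # recorded successors of the current word.
        word, out = start, [start]
        for _ in range(length):
            word = random.choice(follows[word])
            out.append(word)
        return " ".join(out)

    print(generate())

The output is locally plausible and often grammatical, yet the program is aware of nothing. Scale that table up by many orders of magnitude and add clever conditioning, and the epistemic situation is unchanged.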

That is more like pareidolia than reality.

We saw this before with Eliza. It's just on a larger scale now, wasting more electricity and money at previously unimaginable levels, all for no productive results.

Thus we see daily the catastrophic failure of these systems with regard to factual output.

> Sure, LLMs can and do make mistakes, especially with facts
> sometimes, but dismissing the whole technology because of that
> overlooks how often they get things right and actually boost
> productivity. Like any tool, they have limits, but calling it a
> “catastrophic failure” across the board doesn’t do justice to the
> real-world benefits many of us see every day.

Please name any of these benefits outside the generation and promulgation of disinformation and propaganda at scale. Right now, in the context of coding, LLMs reduce programmer efficiency while giving the illusion of speed.

>> More money just makes them more expensive.

> Sounds like you might be a bit frustrated about the price side of things —
> [snip]

It is the models themselves which are inherently broken, not the scale at which they are run. Scaling up the financial investment and the electricity used does not improve or even change the underlying model. This LLM investment bubble reminds me a bit of the cryptocurrency bubble, specifically Bitcoin and its derivatives. The Bitcoin experiment ended in 2009 when Satoshi basically disposed of some very interesting research papers in the proverbial trash, where they were fished out by scammers. Yet people have bet on it, and will continue to bet on it, like digital cockroach races or football pools.

The core of the matter is that LLMs produce no useful output, and thus no savings of time or effort. In that sense they are inherently a waste of electricity. They also take developer time and money away from real projects which could produce actual results. Other machine learning might someday produce results; LLMs cannot. They are over, and wishful thinking can inflate the bubble further without producing anything of value.

On 7/17/25 15:22, Jean Louis wrote:
> [snip]
> I don't find it profound, sorry. We are in changing world, tomorrow shortly it will be very different.

And much for the worse, following the dead-end technologies encompassed by LLMs. As LLM slop is foisted onto the WWW in place of knowledge and real content, it now gets ingested and processed by other LLMs, creating a sort of ouroboros of crap.

As mentioned in the other message, those who are fooled into incorporating the code which the LLMs have plagiarized on their behalf are losing the connection to the upstream projects. Not everyone using FOSS code can or will contribute to the project, but with LLMs cutting that connection, there is not even the chance for them to grow into a contributing role.

Again, name one LLM which carries the licensing and attribution forward.

/Lars

_______________________________________________
libreplanet-discuss mailing list
[email protected]
https://lists.libreplanet.org/mailman/listinfo/libreplanet-discuss
