On 1/29/26 5:36 PM, Rich Pieri wrote:
The tech itself isn't anything new, nor do I think it's intrinsically evil. A "generative AI" system is a large language model run in reverse. A large language model is an image recognition system trained on bodies of words instead of bodies of images.
I think "isn't anything new" is true, but only on very narrow grounds (deep neural networks aren't new, matrix multiplication on GPUs isn't new, etc.).
But I think that's too narrow a view. This recent explosion all started from a key paper written by eight Google folk—all of whom have since left, I guess because Google didn't spot the significance of their work.
/Attention Is All You Need/, 2017 (https://en.wikipedia.org/wiki/Attention_Is_All_You_Need). Not easy to understand—I spent a lot of time asking an LLM about it. It turns out LLMs know a lot about LLMs. Go figure.
The "transformer" described in that paper maybe wasn't that special, except for the *teeny* tiny fact that this architectural innovation is changing the whole damn world.
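For anyone who wants to see what the fuss is about: the heart of that paper, scaled dot-product attention, fits in a few lines. This is just a toy NumPy sketch (single head, no masking, no learned projection matrices—all of which the real thing has), not anything resembling a production transformer:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, per the 2017 paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                            # each output is a weighted mix of values

# Toy example: 3 tokens, 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one output vector per token
```

Every token gets to "attend" to every other token in one matrix multiply—that parallelism, more than any single equation, is what let these things scale.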
It made it possible to build these chatbots, and they are capable of amazing things. No, they are not as good at specific tasks as specialized deep networks such as AlphaFold (predicting protein folding), but as a general-purpose gizmo there is no doubt they are extraordinary.
As for being interesting for "all the wrong reasons", you do have a solid half a point there, but there is still a lot of really interesting stuff in the other half. Including that LLMs are interesting in and of themselves. If nothing else, they illuminate natural intelligence by what they cannot do: they cast new light on intellectual abilities that humans have but LLMs lack. Mental powers that dogs have but LLMs don't. Heck, insects have cognitive abilities that LLMs fundamentally do not. (Seriously. And I think looking at bugs can point out how LLMs are an architectural dead end. Even as they eat the world faster than ever did a swarm of locusts.)
Please don't take me for an AI bro who swallows the hype; I think I can be pretty devastating in my criticisms, and my previous message listed plenty of negatives. (I have more!) But just because I see evil doesn't mean LLMs are all evil, nor that they are useless. And no way this is boring.
Whether it is a real ancient Chinese curse or not, we *do* live in interesting times. Coming at us from so many directions…
-kb, the Kent who notes that the marquee LLMs are all trained extensively on images; he was showing Claude some GIMP output just yesterday and Claude could clearly make sense of it.
_______________________________________________
Discuss mailing list
[email protected]
https://lists.blu.org/mailman/listinfo/discuss
