Exactly. That's Chris' basic argument. Even his point about fscking Twitter. To argue that it's 
"running just as well as it did" seems a bit discordant. But at some altitude, he's 
right. It's just as much of a toxic wasteland as it was before. Every time it crosses my gaze, 
I wonder why people still use it ... or bluesky, or reddit, or <arbitrary-tag>.

My experiments with Cline have just about ended. I've decided to avoid it. It works 
great with Claude (and some others), but not with gpt-oss or codestral. Both of 
those work fine if *I* manage the prompting. Chris also mentions linux, which I've 
been using as my daily driver since ~1995 (?) ... IDK, maybe I was mostly using 
ultrix & minix in '95. But sporadically, as with the Windows 11 update breaking 
recovery, all the dorks get all riled up and talk about linux finally being ready 
for the desktop. [sigh]

The fact is that we're no smarter than rats or birds who'll fill our nests with 
whatever stupid little shiny thing the capitalists bother to hype.

On 11/17/25 8:21 AM, Marcus Daniels wrote:
The first example that comes to mind is malls. Malls aren't needed now, so 
many of them are closing.
Some people see something bad about that. I see something good about that: 
ruts get erased and new opportunities arise. Power gets redistributed.
We're between cycles of exploitation and exploration, and there's some 
adaptation that is required.

-----Original Message-----
From: Friam <[email protected]> On Behalf Of glen
Sent: Monday, November 17, 2025 8:02 AM
To: [email protected]
Subject: Re: [FRIAM] Fwd: The coming AI crash-worse than the Dot Com stock 
collapse?

But Chris' argument isn't really about AI. Chris is as guilty of preemptive 
registration as the others. Short-term markets distort everything. The task is 
to free up the terms coercively bound by the grifters and marketeers. Once the 
terms are unbound, we can discover which formalisms fit and which don't.

If we're charitable, Chris is right that *something* is amiss. The disagreement 
is about *what* is amiss. Same as it Ever Was.

On 11/17/25 7:45 AM, Marcus Daniels wrote:
But what ARE these investments right now? It seems to me they are well-established 
companies: Microsoft, Amazon, Meta, and Google. NVIDIA existed before AI and 
will exist should AI revenue dry up, just as it outlasted Ethereum mining.

The new players aren’t yet public companies.   OpenAI has a longer path to 
profitability, but Anthropic (technical users) is already making good progress 
@ $7B.   AI has already penetrated education and will likely spread more.  
People will become dependent on it like they are dependent on cars.

*From: *Friam <[email protected]> on behalf of Prof David West 
<[email protected]>
*Date: *Monday, November 17, 2025 at 5:14 AM
*To: *[email protected] <[email protected]>
*Subject: *Re: [FRIAM] Fwd: The coming AI crash-worse than the Dot Com stock 
collapse?

Marcus and Jon are not incorrect. I do see a problem that they do not: the fact that the 
vast majority of users/adopters of AI are dramatically less technologically competent 
than either of these gentlemen. The manager who is positive that AI will eliminate most 
if not all of his human employees, the student using an LLM to "cheat," the 
social media addicts taken with the latest AI fad bots, etc., etc., almost certainly will 
become disillusioned and turn away from AI.

Perhaps more importantly, all the capitalists who expect immediate, not 
long-term, return on investment are not going to remain invested. Lots of other 
people's money will be lost as a byproduct.

The market of Jons and Marcuses is not large enough to sustain all the current 
investment. Maybe one large AI company will survive (a la Amazon, which lost 
tons of money for a long, long time).

davew

On Sun, Nov 16, 2025, at 3:51 PM, Jon Zingale wrote:

     I mostly agree with Marcus' sentiment. The dot com analogy may be apt, but 
it also smells like too easy an analogy. I find the K-shaped AI adoption to be 
bizarre. Personally, I do not believe LLMs, nor any particular architecture, to 
be the be-all and end-all. I suspect we will see a transition away from throwing 
money at developing the most general form and a move toward more idiosyncratic 
instantiations. For instance, I continue to think that DeepMind did meaningful 
work going the RL path with AlphaGo/Atari games, and it has yet to come to my 
attention what happens when Transformers attempt to replicate these successes. 
Almost every LLM I have met is really, really bad at Go. This said, AI in its 
current form, and from this perspective, has been here for a decade. Some have 
adopted it and use it to surprising effect; others treat LLMs as nothing more 
than a robust database querying language. What people do with it and how they 
perceive it will undoubtedly have an impact. In the meantime, I am excited to 
see what happens as programmers learn to use formal type theories as pidgins 
and LLMs become more amenable to compositionality.


--
¡sıɹƎ ןıɐH ⊥ ɐןןǝdoɹ ǝ uǝןƃ
ὅτε oi μὲν ἄλλοι κύνες τοὺς ἐχϑροὺς δάκνουσιν, ἐγὰ δὲ τοὺς φίλους, ἵνα σώσω.

.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... 
--- -- . / .- .-. . / ..- ... . ..-. ..- .-..
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/