I would agree that there is a strange rationalization process by people like Brooks.
Few in leadership positions in the industry openly talk about the distillation of humanity and evolution of intelligent life.

From: Friam <[email protected]> On Behalf Of Prof David West
Sent: Tuesday, April 21, 2026 9:17 AM
To: [email protected]
Subject: Re: [FRIAM] IT software dev / coding

For what it might be worth: I posted the Zack Cass questions and channeled his “should” because I found them to be a bit ironic. Unfortunately, I did not include enough background for others to see the irony.

Cass is an unabashed advocate of AI and is certain that AI will bring about a new renaissance, derived from “unlimited intelligence.” His main argument: with AI taking over all the mundane drudgery of human work, people will be able to devote time and energy to creativity, social relations (his definition of happiness), and deep thinking. He echoes other Anthropic executives in predicting that thousands of jobs in accounting, middle management, entry-level work, bureaucracy, and IT will be displaced by AI, leading to a 10-20% unemployment rate within five years. He thinks this is a good idea because AI should be doing those jobs, not people. Humans will then be free to do all the creative and philosophical stuff.

What I found ironic is his seeming obliviousness to the qualities that people will need to possess in order to accomplish all those wondrous, new-renaissance kinds of stuff. Also, like a lot of AI advocates, he does seem to think that AGI and superintelligence will be able to supersede human abilities in the near future, which seems, to me, a contradiction.

Cass does present a theme, somewhat like Steve Smith's, that too much work and too many workers have been, de facto, automated already. Steve used the term “commoditized.” In that context, at least in the case of IT, I totally agree with the use of the word should.
Just my opinion as someone who has been in the ‘profession’ since 1968, the same year that software engineering was posited as a profession and academic discipline: the “mechanization” of people has been, and is, a dominant goal. Method, CASE, model-driven design, “intelligent software assistants” (1990s), and now AI would be my “evidence.” When I first entered the profession, we were "wizards"; now we are merely imperfect engineers.

Back to the irony: David Brooks wrote a NYT op-ed with the title “In the Age of AI: Major in Being Human.” Cass seems to think that being human is a given, while Brooks is saying it must be pursued and nurtured. Of course, there is no such thing as a major in “being human.”

davew

On Mon, Apr 20, 2026, at 1:19 PM, glen wrote:
> That language makes my brain hurt. But IIUC, I have a few nits to pick,
> in order of their priority:
>
> 1. It isn't Dave's "should", not really. Dave was attempting to pump
> the computation set up by Kass, playing along with it and its
> implications. Even if Dave dislikes what's become of software
> develop[ment|ers], he explicitly disagrees with Kass' setup. The
> important point of this nit is one's ability to *play along* with
> another's little game. That ability is important to game selection ...
> i.e. if we don't like what *is*, what games are available to move from
> what is to what should be?
>
> 2. You list 3 systems: systemic metabolism, distributed cognition, and
> meaning. I presume meaning should have some prefix like systemic and
> distributed, indicating it's higher order than the individuals who
> actually _have_ the identity, [in]dignity, narrating, etc. What I'm
> lacking in parsing these weird words is how each of the 3 differ or
> overlap, how they relate. How can one use a word like "cognition"
> without implying a fairly tight relationship to "meaning"?
> And if
> that's the case (that there is a tight relationship), how can Kass-Dave
> be super-imposing metabolism over meaning without doing the same to
> cognition?
>
> 3. I don't think it's reasonable to use "phase transition" without some
> clear concept of "phase". What was the phase before the GPT? ... or
> even before the "financialize everything" of polymarket/kalshi? ... or
> whatever transition you find most amenable to description in these
> weird words?
>
> On 4/20/26 10:05 AM, Steve Smith wrote:
>> In the spirit of alien-thinking:
>>
>> Norbert Wiener (1950) sed: /When we use humans as components in a
>> machine—reducing them to repetitive, protocol-bound roles—we “chain a man to
>> a thwart,” turning him into an inferior machine./
>>
>> Commodification and automation are dual /operations/ that reorganize the
>> relationship between affordances and competencies across a (heterarchical)
>> system (of systems).
>>
>> In this case there are two (or more) systems superposed: the /systemic
>> metabolism/ attempting to extract work and redistribute resources, and the
>> /"distributed cognition"/ system (cultural norms, government regulations,
>> religious unctions) attempting to organize elements to effect that
>> extraction/redistribution.
>>
>> We are in (yet another) phase transition where formerly innovative/creative
>> processes have become commodified to the point that they can be automated,
>> which is part of a cascade of "yet more" commodification and automation.
>>
>> A third (and most salient to well-adjusted human beings and other sentients,
>> perhaps?) is that of meaning, whose objective is to justify and stabilize
>> participation through stabilized identity, dignity, narrative, and ethics,
>> using ideas like purpose, fairness, value, and "shoulds".
>>
>> To summarize: Dave's "should" seems to superpose the logic of the
>> /metabolism/ (should automate to optimize efficiency) and the /meaning/
>> realm with something like "everyone would be happier and more well adjusted"
>> if they were to give over to this commodification.
>>
>> On 4/20/26 9:16 AM, glen wrote:
>>> There's a reading of Dave's OP not considered in this thread. He used the
>>> word "should": "several hundred thousand developers/software engineers
>>> should be replaced with AI and automated out of existence". Of course,
>>> there is a usage of "should" that's more of a prediction than a moral
>>> imperative. E.g. "We expect several hundred ...". But I didn't read it that
>>> way. I read Kass' 5 questions ([mis]informed or not) as an ethical stance.
>>> And Dave extended it to imply that those developers *should* be obsoleted,
>>> according to Kass' ethic.
>>>
>>> Granted, many others are being obsoleted. But should they? Should they
>>> according to Kass' ethic? Something else like the one I forwarded? Does
>>> everyone have their own persnickety set of rules?
>>>
>>> A practical (though perhaps cynical or even nihilist) approach is to
>>> [in|ab]ductively arrive at what *should* happen based on what *is*
>>> happening, rather than stumbling into axioms of occult provenance. How and
>>> when to map is-should is the fundamental question, much more important than
>>> whichever individual rules might be adopted.
>>>
>>> On 4/20/26 7:53 AM, cody dooderson wrote:
>>>> I agree with Marcus. It is not just software developers that are getting
>>>> replaced by AI. Book writers, musicians, lawyers, and many other
>>>> professions are seeing competition from AI.
>>>> This article about an AI author just showed up on Hacker News:
>>>> https://theamericanscholar.org/who-is-blake-whiting/ . Apparently the
>>>> books get very good reviews, but the author doesn't actually exist.
>>>> I would speculate that almost any desk jockey profession is at risk.
>>>>
>>>> _ Cody Smith _
>>>> [email protected]
>>>>
>>>> On Wed, Apr 15, 2026 at 4:27 PM Marcus Daniels
>>>> <[email protected]> wrote:
>>>>
>>>> Dave writes:
>>>>
>>>> < It seems to me, based on fifty years working in business IT development,
>>>> that several hundred thousand developers/software engineers should be
>>>> replaced with AI and automated out of existence. >
>>>>
>>>> It's a growing list, and where it is weak, it is mostly just a question of
>>>> getting the token expenditure high enough while providing tools and
>>>> grounding/embodiment.
>>>>
>>>> For sysadmin work, the main obstacle is having (something like) hands to
>>>> open boxes, power cycle, and that sort of thing. The other day Claude Code
>>>> set up a multi-architecture Kerberos server for me with NFSv4.
>>>>
>>>> Traditional software development is mostly done, IMO.
>>>>
>>>> Also, building architecture is done. Claude Opus is surprisingly skilled
>>>> at building plans. That will only accelerate, IMO.
>>
> --
> ¡sıɹƎ ןıɐH ⊥ ɐןןǝdoɹ ǝ uǝןƃ
> ὅτε oi μὲν ἄλλοι κύνες τοὺς ἐχϑροὺς δάκνουσιν, ἐγὰ δὲ τοὺς φίλους, ἵνα σώσω.
>
> .- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... --- -- . / .- .-. . / ..- ... . ..-. ..- .-..
> FRIAM Applied Complexity Group listserv
> Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom
> https://bit.ly/virtualfriam
> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives: 5/2017 thru present
> https://redfish.com/pipermail/friam_redfish.com/
> 1/2003 thru 6/2021 http://friam.383.s1.nabble.com/
