All these deontological values-alignment 
[thingamajigs](https://en.wiktionary.org/wiki/thingamajig) irritate me. I ran 
across [this one](https://gometa.substack.com/p/the-teilhard-test) just this 
morning. And while I like it better than Kass' (at least as you've presented it 
here), it's still irritating. They all seem to ignore Hume's guillotine. At 
least the experimental games (e.g. 
[KPR](https://doi.org/10.1007/978-88-470-0665-2_18)) provide a logical bottom 
turtle from which such heuristics can be *derived* rather than mandated from 
above. Unless one is steeped in one or another spiritual tradition, these value 
sets beg for red teams to defeat them, using them as rhetorical cover while 
competing for the #1 spot on the exploitation leaderboard.
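Since the KPR game is only linked, here's a minimal sketch of what I mean by a bottom turtle (the `simulate` function and the win-stay heuristic are my own illustration, not code from the cited chapter): N agents pick among N restaurants each day, each restaurant serves one arrival at random, and the payoff structure alone tells you whether a heuristic is any good. Purely random choice wastes seats (utilization hovers near 1 - 1/e ≈ 63%), while a crowd-avoiding convention like "stay if you were served, move if you weren't" is *derivable* from the game, with no mandate from above:

```python
import random

def kpr_day(choices):
    """One round: each restaurant serves one random arrival; return who ate."""
    arrivals = {}
    for agent, r in enumerate(choices):
        arrivals.setdefault(r, []).append(agent)
    return {random.choice(agents) for agents in arrivals.values()}

def simulate(n=100, days=500, strategy="random"):
    """Average fraction of agents served per day under a given strategy."""
    choices = [random.randrange(n) for _ in range(n)]
    total_served = 0
    for _ in range(days):
        served = kpr_day(choices)
        total_served += len(served)
        if strategy == "stick":
            # win-stay, lose-shift: winners keep their restaurant,
            # losers jump to a uniformly random one
            choices = [c if a in served else random.randrange(n)
                       for a, c in enumerate(choices)]
        else:
            # memoryless baseline: everyone re-randomizes every day
            choices = [random.randrange(n) for _ in range(n)]
    return total_served / (n * days)
```

Run both and the stick strategy's utilization climbs well above the random baseline. The point is that the ranking falls out of the game's payoffs, not out of anyone's list of virtues.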

On 4/15/26 3:41 AM, Prof David West wrote:
Zack Kass, /The New Ren*AI*ssance/, poses five questions that must be answered 
negatively in order for a task/job to be automated (replaced with AI).

1- does it enhance human agency?
2- does it deepen trust and connection?
3- does it sharpen human judgement?
4- does it expand collective opportunity?
5- can outcomes be audited and/or reversed?

It seems to me, based on fifty years working in business IT development, that 
several hundred thousand developers/software engineers should be replaced with 
AI and automated out of existence.



--
¡sıɹƎ ןıɐH ⊥ ɐןןǝdoɹ ǝ uǝןƃ
ὅτε oi μὲν ἄλλοι κύνες τοὺς ἐχϑροὺς δάκνουσιν, ἐγὰ δὲ τοὺς φίλους, ἵνα σώσω.


.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... 
--- -- . / .- .-. . / ..- ... . ..-. ..- .-..
FRIAM Applied Complexity Group listserv
Fridays 9a-12p St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
