I agree.

Of course, Lazarus was immortal (having fathered himself) and had time to learn 
all those skills. But why those skills and not a host of others?

I too am a product of RAH, having read his entire corpus multiple times. 
However, my personal heroes tended to be Jubal Harshaw, Valentine Smith, 
Bernardo de la Paz, and even Mycroft (Mike) more than Lazarus. And these 
blended well with A.E. van Vogt's heroes Gilbert Gosseyn (World of Null-A) and 
Elliott Grosvenor (Voyage of the Space Beagle), and Brunner's heroes, Nick 
Haflinger (Shockwave Rider) and Lex (Polymath). These led to my early 
dedication to "know everything and experience (at least once) everything." 
Alexei Panshin's novel Rite of Passage, with its discussion of "ordinology" 
and "synthesis" as professions, was also very influential.

I am not so much a believer in human exceptionalism as I am convinced that 
there is a lot more to being human, and to human potential, than is usually 
recognized. [AI advocates not only fail to recognize this, they deny the 
possibility.] This is probably a result of my involvement in the Human 
Potential Movement as an undergraduate and with the astronaut Edgar Mitchell's 
Institute of Noetic Sciences.

All of this is background to one of my consuming interests of the moment: how 
to facilitate the "education" of human beings. "Education" is in quotes 
because it is a poor approximation of what I mean: a synthesis of 
enculturation, facilitated self-learning, exploration, ...  All influenced by 
experiments like Summerhill and the earlier, non-Christian-centric, Paideia 
movement.

davew


On Wed, Jun 4, 2025, at 10:26 AM, steve smith wrote:
> DaveW, et alia -
>> *The Alignment Problem*, by Brian Christian
> I would say that Christian's piece here acutely represents what I'm trying to 
> re-conceive, at least for myself.  His work implies *Human Exceptionalism* 
> and takes a very technocentric focus which largely avoids deeper political 
> critiques about who gets to define "alignment" and whose values are 
> prioritized.  It is a bias oft-presented by those of us who are 
> tech-focused/capable/advantaged: reducing a problem to one we think we know 
> how to solve (in a manner that promotes our narrow personal interests).
> 
> In the spirit of "anti-hubris", I was once strongly aligned with Robert 
> Heinlein's (RAH) "Human Chauvinist" or "Human Exceptionalism" perspective as 
> exhibited in his Lazarus Long (LL) character's oft-quoted line:
> 
>> *"A human being should be able to change a diaper, plan an invasion, butcher 
>> a hog, conn a ship, design a building, write a sonnet, balance accounts, 
>> build a wall, set a bone, comfort the dying, take orders, give orders, 
>> cooperate, act alone, solve equations, analyze a new problem, pitch manure, 
>> program a computer, cook a tasty meal, fight efficiently, die gallantly.
* *Specialization is for insects."*
>> 
> I can't say I don't still endorse the optimistic aspirations inspired by LL's 
> statement; it is the "should" that disturbs me.  I am a fan of generalism, 
> but I acknowledge that in our modern society many if not most of us are in 
> fact relatively specialized, by circumstance and even by plan, and while we 
> might *aspire* to develop many of the skills LL prescribes for us, it should 
> not be a source of shame, or of feeling "lesser", that we might not be as 
> broadly capable as implied.  We are a social species, and while I cringe at 
> becoming (more) eusocial than we already are, I also cringe at the conceit of 
> being on the order of 10B selfish (greedy?) individual agents with long 
> levers, prying one another out of our various happy places willy-nilly.
> 
> I also think the *hubris* aspect is central.  One of the major consequences 
> of my own "origin story", foreshadowed by my over-indulgence in 
> techno-optimistic SciFi of the "good old-fashioned future" style and 
> particularly RAH's work, was that he reinforced my Dunning-Kruger tendencies, 
> both by leading me to over-estimate my own abilities at specific tasks and by 
> narrowing my values to focus on those things I was already good at or had a 
> natural advantage with.  As a developing young person I had a 
> larger-than-average physicality and a greater-than-average linguistic 
> facility, so it was easy for me to think that the myriad things that were 
> intrinsically easier for me because of those traits were somehow more 
> "important" than those for which they might be a handicap.  I still have 
> these biases but try to calibrate for them when I can.
> 
> My first "furrin" car ('73 Honda Civic) was a nightmare for me to work on 
> because my hands were too big to fit down between the gaps amongst all the 
> hoses and belts and wires that (even that early) smog-control epi-systems 
> layered onto a tiny 45 mpg vehicle like that.  And you are all familiar 
> with my circumloquacious style, exemplified by "I know you believe you 
> understand what you think I said, but I don't think you realize that what you 
> heard was not what I meant".  While I might have been able to break a seized 
> or rusty bolt loose on my (first car) '64 T-Bird or (first truck) '68 F100 
> without undue mechanical leverage, it was hell to even replace spark plugs or 
> re-attach an errant vacuum line on my Honda.  And while I might have been 
> able to meet most of my HS teachers on a level playing field with complex 
> sentence constructions (or deconstructions) or logical convolutions, the same 
> tendency made me a minor pariah among some of my peers.
> 
> Back to "alignment" and AI, I would claim that human institutions and 
> bureaucracy are a proto-instantiation of AI/ML, encoding into (semi)automated 
> systems the collective will and values of a culture.  Of course, they often 
> encode (amplify) those of an elite few (monarchy, oligarchy, etc.), which 
> means that they really do present to the masses as an onerous and oppressive 
> system.  In a well-functioning political (or religious) system the 
> institutional mechanisms faithfully represent and execute the values and 
> intentions of those who "own" the system, so, as-by-design, the better it 
> works, the more oppressed and exploited the citizenry (subjects) are.  We 
> should be *very* afraid of AI/ML making such oppression and exploitation 
> yet more efficient *because* we made it in our own (royalty/oligarchic) 
> image, not reassured because it can amplify our best acts and instincts 
> (also a possible outcome, as perhaps assumed by Pieter and Marcus and most 
> of us often-times).
> 
> I don't trust (assume) the first-order emergent "alignment" of AI (as 
> currently exemplified by LLMs presented through chatbot interfaces) to do 
> anything but amplify the existing biases that human systems (including pop 
> culture) exhibit.  Even Democracy, which we hold up quite high (not to 
> mention Free Markets, Capitalism, and even hyperConsumerism and 
> hyperPopulism), is an aberrant expression of whatever collective human good 
> might be... it tends to represent the extrema (hyper-fringe, or 
> hyper-centroid) better than the full spectral distribution, or any given 
> interest really.  An ill-conceived, human-exceptionalist (esp. first-world, 
> techno-enhanced, wealthy "human-centricity") giant lever is likely to break 
> things (like the third world, non-human species, the biosphere, the climate) 
> without regard to the fact that to whatever extent we are an "apex 
> intelligence" or "apex consciousness", we are entirely stacked on top of 
> those other things we variously ignore/dismiss/revile as base/banal/unkempt.
> 
> Elno's aspiration to help (make?) us climb out of the walls of the petri 
> dish that is Terra into that of Ares (Mars), to escape the consequences of 
> our own inability to self-regulate, is the perfect example of 
> human-exceptionalist hubris gone wrong.  Perhaps the conceit is that we can 
> literally divorce ourselves from the broad-based support that a stacked 
> geo/hydro/cryo/atmo/biospheric (eco)system provides us and live entirely on 
> top of a techno-base (asteroid-mining Belter fantasies even more so than 
> Mars/Lunar/Venus/Belter colonists?).  ExoPlanetarian expansion is inevitable 
> for humanity (barring total premature self-destruction), but focusing as 
> much of our resources in that direction (a la Musk, especially fueled by 
> MAGA alignment in a MAGA-entrained fascist industrial-state?) as we might be 
> on the path to is its own folly.  The DOGE-style MAGA-aligned doing so by 
> using humble humans (and all of nature?) as reaction-mass/ejecta is a moral 
> tragedy and fundamentally self-negating.  Bannon and Miller and Musk and 
> Navarro and Noem and ... and the entire Trump clan (including Melania and 
> Barron?) are probably quite proud of that consequence; it is not 
> "unintended" at all.  But I suspect the average Red-Hat-too-tight folks 
> might not be so proud of the human suffering such will cause.
> 
> Maybe those chickens (the ones not destroyed in industrial 
> egg-production-gone-wrong) are coming home to roost?  Veterans' services, 
> health-care-for-the-many, rural infrastructure development, humble family 
> businesses, etc. might be on the verge of failure/destruction in the name of 
> concentrating wealth in Golf Resorts, Royal Families, and Space Adventurers' 
> pockets?  Or maybe we are generally resilient enough to carry all of that on 
> our backs (with AI to help us orchestrate/choreograph more finely)?  Many 
> hands/heads/bodies make light work, even if it is not righteous (see 
> pyramids?).
> 
> 
> 
> Bah Humbug!
> 
> - Steve
> 
.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... 
--- -- . / .- .-. . / ..- ... . ..-. ..- .-..
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
