Marcus shared:

A third option is to think harder about what moral respect really means.  Bender seemed to not think very deeply when it came to animals.

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

OMG! <a contracted-text contemporary pop-culture utterance>  this article is deep, broad, dense and rich IMO <also a ctcpcu>.

As I read through it (I started to skim the intro, maybe the summary, maybe a few words or quotes that stood out, but after the first paragraph or two found myself in a full line-by-line read), I was drawn into what feels like the story behind the story.  Still somewhat superficial in detail, but more complete (in depth and breadth) than anything I have encountered casually.  Folks here probably know a great deal of the referents this article uses, but much of it was very new to me.

I found at least a dozen paragraphs/passages/points-made that seemed highly relevant to FriAM threads of (mostly recent in non-internet time) yore:

 * DaveW's complaints about the machine-metaphor for (human?)
   consciousness (or mind?)
 * GlenR's assertions (and please correct me, because I am surely
   wrong) about the illusion that communication exists
 * The analogy of AI artifacts as "counterfeit people", set against how
   we treat counterfeit money, and the suggestion that News Talking
   Heads (see assertions of "fake news" and the recent spate of
   exposures of Faux's deep duplicity in these matters) and politicians
   (notably George Santos, among many other variants) are perhaps
   already "counterfeit people".
 * Repeated references to the implications of AI artifacts being not
   about the artifacts themselves (and their actual abilities) but
   about our individual and collective reactions to them, and the
   "world we create" (referencing intersubjectivity) in response.
 * As a meta-point to the last, one quote was
      o “From here on out, the safe use of artificial intelligence
        requires demystifying the human condition,” Joanna Bryson
 * Arguments against "Artificial People" parallel the arguments about
   Corporations (not) being People for legal/regulatory/free-speech
   purposes.
 * Are AI artifacts tools or peers?  The Valorization of fooling people
   elevates the latter (to a fault) while undermining the former.
 * The personhood argument confounds/conflates (for better or worse)
   with the same arguments around how we relate to "lesser" animals and
   presumably inanimate objects (rivers, forests, oceans, the biosphere).
 * As we (might) try to make AI artifacts accountable/responsible, we
   may be hitting a conflation with old models of human slavery and the
   chattelization of subgroups, e.g. shadow/fractional status, etc.
 * And a pointed quote for the Transhumanists/Singularians among us
   (I understand why you might want to deny/escape/transcend this):
      o "It’s hard being a human. You lose people you love. You
        suffer and yearn. Your body breaks down. You want things — you
        want people — you can’t control."
 * This last quote was followed by some pointed points about narcissism.
 * blah blah blah, ad nauseam
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .