Not surprisingly, Frank, I, as a psychologist, like this rendition.
Nick Thompson
thompnicks...@gmail.com
https://wordpress.clarku.edu/nthompson/

From: Friam <friam-boun...@redfish.com> On Behalf Of Frank Wimberly
Sent: Tuesday, August 10, 2021 1:33 PM
To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
Subject: Re: [FRIAM] Moral collapse and state failure

Psychologists I know would call a person whose behavior is consistent with his self-description integrated rather than moral. "Integrated" is usually a good quality, but not if someone happily describes himself in sociopathic terms. Trump is, in my non-professional opinion, an amoral, narcissistic sociopath.

---
Frank C. Wimberly
140 Calle Ojo Feliz, Santa Fe, NM 87505
505 670-9918
Santa Fe, NM

On Tue, Aug 10, 2021, 11:24 AM uǝlƃ ☤>$ <geprope...@gmail.com> wrote:

Yeah, it was long. I only got through half of it during my workout this morning.

I suppose it's right to say that the normative definition of moral would exclude Trump (or people like him). But only if we stick to your idea that a particular morality must be *expressible*. (FWIW, I think the extra qualifier "independently of oneself" is at least a little redundant. Any expression has to be at least somewhat objective ... spoken word causes air vibrations, video recordings of someone talking, written documents, etc.)

So, there's a hot debate at the moment in machine learning about the different usage patterns for interpretable ML vs explainable ML. "Explainable" is the weaker of the two: it doesn't give any direct access to the mechanism, it only describes it somewhat ... "simulates" it. Interpretable ML is supposedly a kind of transparency, so that you can see inside and have access to the actual mechanism that executes when the algorithm makes a prediction. Targeting your idea that a moral code must be expressible: do you mean a perfect, transparent expression of the mechanism a moral actor uses? Or do you mean simulable ... such that we can build relatively high-fidelity *models* of the mechanism inside the actor?

On 8/10/21 10:11 AM, Russ Abbott wrote:
> The Envy video looked like a lot of fun, but it was too long for me to sit through.
>
> Regarding morality, my guess is that it's not predictability that leads people to consider someone moral; it's acting according to a framework that can be expressed independently of oneself. Society-wide utilitarianism would be fine; "someone much like Trump [who] says they're an exploitative, gaming, solipsist" and then behaves in a way consistent with that description would not be considered moral, no matter how consistently their behavior simply optimized short-term personal benefits. After all, to take your own Trump example, I doubt that many people would characterize Trump as moral.

--
☤>$ uǝlƃ
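[Editor's note: glen's interpretable-vs-explainable distinction above can be made concrete with a small sketch. This is illustrative only and is not from the thread; it assumes scikit-learn and uses a random forest as a stand-in "black box". The interpretable route is a small model whose mechanism you can read directly; the explainable route is a post-hoc surrogate fit to the black box's predictions, so it only describes ... "simulates" ... the mechanism.]

```python
# Toy contrast between interpretable and explainable ML (illustrative sketch).
# Assumes scikit-learn; the "black box" is a random forest standing in for any
# model whose internal mechanism we cannot read directly.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"x{i}" for i in range(X.shape[1])]

# Interpretable route: the small tree IS the mechanism; printing it gives
# direct access to how every prediction is made.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("Interpretable model (the actual mechanism):")
print(export_text(interpretable, feature_names=feature_names))

# The "black box": often more accurate, but its mechanism is not readable.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Explainable route: a post-hoc surrogate trained to mimic the black box's
# outputs. It is a model *of* the mechanism, not the mechanism itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate matches the black box on {fidelity:.0%} of inputs:")
print(export_text(surrogate, feature_names=feature_names))
```

[The two print-outs line up with glen's question: the first is a perfect, transparent expression of the mechanism; the second is only a relatively high-fidelity model of it.]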