mar...@snoutfarm.com wrote at 08/20/2013 09:47 AM:
Some further distinctions:

TT1 has a trivial prescriptive form:  Employees receive guidance, and
trusted employees are just those that comply with it. Or citizens learn the
laws, and follow them.

Ideally, yes.  But practice is never ideal. (<grin>Nothing is ever ideal... Ideals are 
dangerous fictions.</grin>) I've almost always found guidance and laws to be 
subject to interpretation.  And even when they're not, there's still the problem of measurement 
(did he comply or didn't he?).  There's also the impact of situation: special cases.  Further 
still, there's the problem of consequences.  Say Tim and Joan break the same law, but Tim 
gets probation and Joan gets the max penalty.  In all three (measurement, special cases, and sentencing), 
the implication is that "distance from a Truth" is not well defined.

A remark about TT3 relates to your criticism of (non-prescriptive)
universality in TT1.  Putting on my software hat, I think of this as "diff
reading".   By that I mean I observe a set of code changes from someone
else and relate it to their stated or expected intentions.  If they make
sense, or solve the problem in a clever way, I've learned something and
gained confidence.   If they are clumsy, inappropriate in context,
internally inconsistent, or inelegant, then I am less eager to read them in
the future, and have less confidence.

Excellent!  You've given us a nice set of bounding concepts from which we might define a 
Truth {clever, consistent, elegant, purposeful/non-clumsy, appropriate-to-context}.  The 
question is whether or not this set of ascriptors can lead to something transpersonal.  I 
assume most people would say "yes", since it's an oft-invoked set.  But do they 
lead to something that's robust (if not True)?  E.g. something that seems inelegant 
in one century may seem elegant in another.

Ideally, I will claim that these ascriptors _fail_ to lead to a transpersonal 
or robust thing we might call True.  But, practically, I obviously agree.  
Otherwise, I wouldn't waste my time learning things like Satanism.
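The "diff reading" update described above can at least be gestured at in code.  Here's a toy sketch (entirely my own construction, not anything from the thread): treat each reviewed change as a Bernoulli trial (sensible vs. clumsy) and keep a Beta posterior over the author's reliability.  The verdicts and the prior are invented for illustration.

```python
# Toy "diff reading" confidence update: a Beta posterior over how
# often an author's changes turn out to be sensible.

def update(alpha, beta, sensible):
    """One diff read: bump alpha on a sensible change, beta on a clumsy one."""
    return (alpha + 1, beta) if sensible else (alpha, beta + 1)

def confidence(alpha, beta):
    """Posterior mean: how eager am I to read this author's next diff?"""
    return alpha / (alpha + beta)

a, b = 1, 1                                # uniform prior: no history yet
for verdict in [True, True, False, True]:  # three clever diffs, one clumsy
    a, b = update(a, b, verdict)
print(confidence(a, b))                    # 4/6, about 0.667
```

The point of the sketch is only that "gained confidence" and "less eager" can be made incremental and cumulative; it deliberately ignores the harder ascriptors (elegance, appropriateness-to-context) that resist being collapsed into a single Bernoulli verdict.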

Regarding TT4 (introduced notation for empathetic trust), perhaps it can be
distinguished by left brain vs. right brain.  It feels good, so keep
doing it.  Betrayal occurs simply because there is no way to quantify the
trust; it's not governed by reason, and so psychological exposure is higher.

I don't have a good handle on the left/right brain distinction.  I normally 
translate it into something like uni- vs. multi-dimensional, singular vs. 
systemic, etc.  To some extent that further translates to thought vs. feelings.  
If that's the case, then it might be possible to quantify it, just not in a 
simple way.  It will take a complex model, probably with hidden states as well 
as interactive aspects, but at least multiple inputs and outputs.

And if that's the case, then the qualitative difference Steve sees might reduce 
to some measure of complexity, some irreducible logical depth.
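To make the "hidden states, interactive aspects, multiple inputs and outputs" claim concrete, here's a toy sketch.  Everything in it is invented for illustration (the signal names, the trust dimensions, the coupling weights); it only shows the shape such a model might take: observations the trustee controls nudge an internal state the observer never reads directly, and each input touches every output dimension with its own weight.

```python
class TrustModel:
    """Hidden-state trust: observations nudge an internal state that is
    never read directly; outputs are derived read-outs of it."""

    # Invented couplings: how strongly each input signal moves each
    # trust dimension (the "interactive aspect").
    COUPLING = {
        "kept_promise": {"competence": 0.5, "goodwill": 1.0},
        "clever_diff":  {"competence": 1.0, "goodwill": 0.2},
    }

    def __init__(self):
        self._state = {"competence": 0.0, "goodwill": 0.0}  # hidden state

    def observe(self, signal, value):
        """One input event; old state decays, new evidence mixes in,
        across every dimension at once."""
        for dim, weight in self.COUPLING[signal].items():
            self._state[dim] = 0.9 * self._state[dim] + 0.1 * weight * value

    def readout(self):
        """Multiple outputs: one scalar per trust dimension."""
        return dict(self._state)

m = TrustModel()
m.observe("kept_promise", 1.0)   # a promise kept
m.observe("clever_diff", 1.0)    # a clever code change
print(m.readout())
```

Even this trivial version has the property glen points at: no single number is "the" trust, and the mapping from inputs to read-outs already has enough structure that two observers weighting the couplings differently would disagree about the same history.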

I'd also introduce another sort of trust:  investment risk reduction, or TT5.
E.g. the institution of marriage/child-bearing, or shared secret or stigmatized
behaviors: historically the LGBT community, criminal enterprises, the
intelligence community, and so on.

I don't understand.  Do you mean positive trust, e.g. I trust in the criminal 
enterprise so I will invest?  Or do you mean a kind of negative trust, e.g. the 
LGBT community is not strong/prominent enough, so I'll remain in the closet?  
Or perhaps both?

--
⇒⇐ glen e. p. ropella
Now the water's rushing in up through the planks made out of skin
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
