I'll respond to your particulars below.  But I think they're a bit distracting.  If we 
indulge in a little essentialism, the proto-hypothesis is that equality-inducing 
instincts (like empathy or emotions of "justice") are mechanisms for optimizing 
living systems' effort/power.  That's the hypothesis that needs to be formulated.

I was interested to see a spate of articles this morning about this book: 
http://press.princeton.edu/titles/10921.html and opinions from some other "experts" 
that violence isn't necessary.  The intersection with this conversation (in my own 
fantasies, anyway) is obvious.  Violence requires abstraction and objectification of the 
victim, whereas more subtle remedies to inequality require more complex mechanisms 
like empathy or a sense of justice.  It should be obvious that, say, a nuclear war 
reduces our efficacy, at least in the short term.  It's reasonable to think that we have 
to go through sporadic catastrophes in order to find a more global optimum.  But just 
because that's reasonable doesn't mean it's inevitable.

On 12/06/2016 06:51 PM, Marcus Daniels wrote:
If we assume that there will be a distribution of productivities across the 
people added to the group, how does the group estimate at what rate to 
tolerate low-productivity vs. high-productivity additions?  For an average 
member of a group (or for the whole group), how do existing group members 
prevent potentially more productive candidates from displacing them?

These questions are founded on the assumption that the produce is a _simple_ 
thing.  Not only do they assume the measures of productivity are simple, but 
that the thing(s) being measured are also simple.  It's possible that's not the 
case.  If we allow that the produce can be complex, then the methods for 
tolerating rates of high or low productivity depend fundamentally on the mesh 
of products.  It's reasonable for, say, a scientist to work their entire 
lifetime on cheap fusion power and die without having made much progress.  But 
as long as their work can be learned from, the contemporary and future 
communities of scientists can still call that work productive.
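
To make that concrete, here's a toy sketch (in Python, with made-up numbers and a 
made-up "mesh" in which the products are complements) of how a simple scalar measure 
of productivity can disagree with a member's actual marginal value to the group:

import random

DIMENSIONS = 4            # distinct kinds of product the group needs
random.seed(1)

def make_member():
    # Each member contributes some effort along every dimension of the mesh.
    return [random.random() for _ in range(DIMENSIONS)]

def scalar_score(member):
    # The "simple" measure: total output, ignoring the mesh.
    return sum(member)

def group_output(members):
    # A "complex" measure: the products are complements, so the group is only
    # as good as its least-covered dimension.
    if not members:
        return 0.0
    totals = [sum(m[d] for m in members) for d in range(DIMENSIONS)]
    return min(totals)

group = [make_member() for _ in range(8)]
whole = group_output(group)

for i, m in enumerate(group):
    marginal = whole - group_output(group[:i] + group[i + 1:])
    print("member %d: scalar score %.2f, marginal value to the mesh %.2f"
          % (i, scalar_score(m), marginal))

The particular numbers don't matter.  The point is that once the produce is a mesh, a 
low score on the simple measure says little about whether the group should tolerate 
that member.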

In the end, we have to resort to the minorities _telling_ the majority what is or is not 
"just".  If the fusion community tells us that Sally is a jerk to that community and 
shouldn't be tolerated, we have to give that some credence.  That's social justice, and 
it's one mechanism for truth-seeking.

Sure, one could make a simulation of all this, or apply game theory.  I don't 
think that gets at a fundamental question, which is: why should any selfish 
agent care whether the biosphere is effective?

Right.  A simulation can, however, help a non-empathetic person empathize.  
This is the great promise of video/VR games.  We now have an extraordinary 
power to put people in the shoes of others.  And if such experiences can lead 
to a broader sense of ecology, then they will get at that fundamental question.

Perhaps there is really nothing to know -- just vote, fight, compete, etc., as 
appropriate for the prevailing social (dis)order.

It's interesting to put it that way: there is nothing to _know_.  The 
foundation of social justice is not entirely about facts or knowledge.  It's 
about feelings.  And feelings, emotions, are one mechanism for data fusion, a 
collapse of lots of heterogeneous signals down into a single measure (disgust, 
love, etc.).  These can be used as tools.  The hypothesis relies on the idea 
that they _are_ already used as tools, at least by animals, to make decisions.  
Social justice, the feelings we feel when, say, Richard Spencer speaks at one's 
alma mater, is just an extension of that.
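
If it helps, here's a toy illustration (my framing, all signals and weights invented) 
of that fusion: several heterogeneous, differently-scaled signals collapsed into one 
scalar ("disgust") that then drives a decision:

def fuse(signals, weights):
    # Normalize each differently-scaled signal to [0, 1], then take a weighted
    # average.  The single number that comes out is the "feeling".
    total = 0.0
    for name, (value, lo, hi) in signals.items():
        normalized = (value - lo) / (hi - lo)
        total += weights[name] * normalized
    return total / sum(weights.values())

# Heterogeneous inputs: a sensory reading, a remembered outcome, a social cue.
signals = {
    "odor_intensity":   (7.0, 0.0, 10.0),
    "past_outcome_bad": (0.8, 0.0, 1.0),   # prior experience was bad
    "others_recoiling": (0.9, 0.0, 1.0),   # social cue
}
weights = {"odor_intensity": 0.3, "past_outcome_bad": 0.5, "others_recoiling": 0.2}

disgust = fuse(signals, weights)
print("fused 'disgust' score: %.2f" % disgust)
if disgust > 0.5:
    print("decision: avoid")

Whether that's anything like how animals actually do it is exactly the part of the 
hypothesis that needs formulating.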

Even given the goal of omniscience and omnipotence and an ever-increasing 
ambition for harder problems, it still isn't clear that every agent is useful.  
Some agents may consume more resources than they contribute.  Or, just from a 
light-cone type of argument, it can cost more to send a message, do a 
calculation, and return a result than to do it within a smaller network.  From 
the pro-social-justice perspective, one might argue that it is just too 
difficult to anticipate what constitutes `fit' behavior, so everyone must be 
supported.

Well, I think there are both positive and negative sides of social justice.  It 
seems clear that people like Richard Spencer and Curtis Yarvin are attacked by 
the SJWs because the SJWs feel those people are not useful and should be 
muted/ignored.  So, it's not clear that social justice is about making 
_everyone_ equal.  It is a mechanism for discriminating between the potentially 
useful and the (obviously?) useless.  FWIW, Richard Spencer is obviously 
useless ... with Yarvin, whose Urbit may well be useful to some extent, it's 
less obvious ... but we depend on the SJWs to help us navigate the turbulence.

On the other hand, there sure seem to be a lot of similar individuals in the 
population.  In this `global' view, it seems some coherent (but arbitrary) 
vision is needed to identify which hard problems to tackle and how to combine 
resources to do it.  Coherent visions tend to come from individuals or small 
groups.

Well, there's nothing about SJ that requires homogeneity or lack of diversity.  
SJ simply requests that we take the opinions of experts seriously.  E.g. if the 
fusion community claims the Germans are close to a practical reactor, then we 
should listen.  If they say Sally's ideas are quackery and should be muted, 
then we should listen.  So, SJ does precisely what you're asking it to do.

--
☣ glen

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
