[FRIAM] Another hostile action

2017-01-22 Thread Alfredo Covaleda Vélez
Donald Trump's White House removes Spanish from its website
http://internacional.elpais.com/internacional/2017/01/22/estados_unidos/1485105920_597756.html

FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Nautilus: Investing Is More Luck Than Talent

2017-01-22 Thread Marcus Daniels
Robert writes:

“They are inherently community motivated and supported and are not the kind of 
enterprises that you will see move offshore or park their cash there to avoid 
US taxation”

I make a distinction between people who are primarily goal-motivated and
those who are primarily community-motivated.  Goal-motivated folks prefer to
talk about work and ideas.  Community-motivated people talk about their
advisors or bosses, their distinguished colleagues, the benefits and drawbacks
of their organization, and they seek mutually beneficial relationships, among
other things.

A benefit of large, hierarchical organizations to the goal-motivated person is
that, provided the organization's goals are aligned with that person's goals
(employer and employee), the intra-community power struggles can be largely
ignored.  This is not to say that community-builder tactics aren't also used by
other employees, often quite effectively, but they aren't strictly essential
for a sufficiently productive employee.

An objection to community-motivated organizations is that they are about
keeping the community afloat, not about progressing goals separate from the
welfare of the members of the community.  The ideal situation, in my mind, is
to solve the welfare problem for everyone.  Then discussions could stay on
topic and be decoupled from the needs of people.  Another objection to
communities is that they complicate satisfaction of the “What do you want from
me?” constraint.  Now there is not just one boss, but a whole set of people and
constraints to worry about.

“From your last paragraph, if I follow, you seem to have much more hope that we 
can improve society with chemicals, gene editing, quantum computing, or with 
surgical implants than I do.  I don't think that I would want to live in such a 
society. What will emerge, if any of this is at all possible, is a super-smart 
animal with the same ratty morals and self-interest.”

There are other phenotypes to consider.  Is there an innate explanation for 
nurturing and compassion?   Genetically, what is there to maternal instinct?   
What is there to sadism?   At what point does emotional empathy become 
overwhelming and counterproductive to compassion?  Does emotional empathy 
directly lead to tribalism due to attentional limits?   These predispositions 
might be tunable too through gene therapy or epigenetic controls.

Another way is to NOT fix what is broken with all of us through, say, a very 
long FDA approval and socialization process, but to create a new community 
offshore or off planet that is designed for social sustainability given certain 
environmental limitations.

“That's a very dangerous animal, IMHO.  This is why many folks are scared of 
AI; look who's leading the pack at this technology and buying up the world's 
brain trust: Google ... one of those "enterprises" that you justifiably don't 
seem to trust.”

Trumpians are very dangerous animals, IMO.  A lot of the output of the AI/ML 
brain trust is open source, though.   I’m more comfortable with the Google 
world than the world of the fascists.  If things go further downhill -- more 
leaders like Trump and ultra-conservative governance, a divided Europe, and an 
ungovernable China -- I’d say there’s reason to start to think about whole new 
approaches.

Marcus


Re: [FRIAM] Nautilus: Investing Is More Luck Than Talent

2017-01-22 Thread Robert Wall
Marcus,

I am not sure, but you may have the wrong impression of employee-owned
worker cooperatives.  Of course, they have structure and management and
decision-making processes just like capitalist-owned companies. Even *Forbes*
thinks they are a good idea: If Apple Were A Worker Cooperative, Each
Employee Would Earn At Least $403K (December 2014).  It might be a good
project for you to research this more. Check out Mondragon in Spain, for
example.  Britain's Labour party, under Jeremy Corbyn's leadership, is
floating a plan to allow employees to have first-refusal rights to
purchase companies that want to sell or move offshore.

There are all kinds of cooperative institutions, with perhaps the best
example in this country being the idea, at least, of the public bank, like
the Bank of North Dakota, which may be the only one (though I'm not sure).
Cooperatives are much more prevalent outside of this country.  There may be
an insidious reason for this, however. Nonetheless, after you do a little
research, I am sure that you will see that employee-owned
cooperatives would fit your concept of good "enterprises."  They are
inherently community motivated and supported and are not the kind of
enterprises that you will see move offshore or park their cash there to
avoid US taxation [note: co-ops actually pay more taxes than
capitalist-owned corporations]. Give them a second look ...

> I give that names like worrying, self-reflection, doubt, analysis, and
> reading.   I believe it is practiced in a widespread way by the type 1
> thinkers that Pamela mentioned.


You might have to remind me what Type 1 Thinking is all about. I found
this--Type 1 and Type 2 thinking--but that isn't what I was trying to
explain.  If you mean a sort of Closed-Loop Reflect-Analyze-Act type thing,
then yes; that could be something akin to Hebbian learning.  The most
important part is that the process--a self-administered psychodynamic
one--is implemented by the individual and not, say, a psychologist or
priest.  I don't get the *worry* or *doubt* part, though, unless you mean
that the objective is to diminish those sensations and grow confidence.
The other important part is mindfulness ... being consciously aware as much
as possible.  So much of our awake time is lived on "automatic."  It is
very difficult to break out of this.

But, in this thread, I wasn't so much interested in this process at the
*individual* level except to use it as a tangible example to define--for
Steve--what it *might* mean at the level of society, that being the
underlying exploratory thrust of this thread. How can Hebbian learning be
applied at the level of society? At the moment, it's a rhetorical question.
I mean, what are the synapses of a society? All I can think of is the level
of a Golden-Rule kind of *morality* manifest in its so-called zeitgeist.
But if the society is basically amoral, then those hypothetical "synapses"
are weak.
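
[To make the analogy concrete, here is a minimal sketch, in Python, of the
classic Hebbian rule delta_w = eta * x * y, read as "ties strengthen between
members who act on a shared norm at the same time."  The agents, the norm,
the learning rate, and the decay term are all illustrative assumptions, not
anything established in this thread.]

import numpy as np

# Toy reading of "social synapses": w[i, j] is the tie strength between
# agents i and j.  The Hebbian rule strengthens a tie whenever both agents
# act on the shared norm in the same round.  All values are assumed.
rng = np.random.default_rng(0)
n_agents = 5
eta = 0.1                            # learning rate (assumed)
decay = 0.99                         # unused ties weaken slowly (assumed)
w = np.zeros((n_agents, n_agents))   # the "social synapses"

for round_ in range(100):
    # x[i] = 1.0 if agent i acted according to the shared norm this round
    x = (rng.random(n_agents) < 0.6).astype(float)
    w += eta * np.outer(x, x)        # Hebbian update: co-active ties grow
    np.fill_diagonal(w, 0.0)         # no self-ties
    w *= decay                       # ties that are not reinforced fade

print(np.round(w, 2))

[In this toy version, a society whose members rarely act on any shared norm
together never builds strong ties, which is roughly the "weak synapses" case
described above.]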

We tend more to use crime and punishment as a way of strengthening these
social synapses, but it doesn't result in a positive feedback loop to the
members of that society--many of whom will eventually figure out how to game
the system to their own advantage. People have to actually have faith in
the system, in a way that lets them see something egalitarian emerge from
it.  We don't have that in our society, and I think Eric Smith provided some
insight into why: Power--to have control over one's destiny--is as
unequally distributed as wealth, which Eric may argue is the result of an
imbalance between *access* and *constraint*. [*an interesting aside*:
employee-owned cooperatives tend to keep this kind of malignancy from
growing]. Anywho ...

From your last paragraph, if I follow, you seem to have much more hope that
we can improve society with chemicals, gene editing, quantum computing, or
with surgical implants than I do.  I don't think that I would want to live
in such a society. What will emerge, if any of this is at all possible, is
a super-smart animal with the same ratty morals and self-interest. That's a
very dangerous animal, IMHO.  This is why many folks are scared of AI; look
who's leading the pack at this technology and buying up the world's brain
trust: Google ... one of those "enterprises" that you justifiably don't
seem to trust.

Cheers


On Sat, Jan 21, 2017 at 8:35 PM, Marcus Daniels wrote:

> Robert writes:
>
> < It would be a Hebbian-oriented *mental process* by way of "habituating"
> the kind of thoughts that lead to altruism or the desired state. >
>
> I give that names like worrying, self-reflection, doubt, analysis, and
> reading.