Tell me if this is also a superrationality-type issue:

I commented to Eliezer that, during the last panel of the conference,
I looked around for Eliezer & didn't find him, and wondered if there
was a bomb in the room.  He replied something to the effect that he
has a strong commitment to ethics.

This, of course, is exactly what concerned me.  A person who is either
not very rational, or not very ethical, can be relied on to operate
within certain parameters.  A person who is committed to doing
whatever his computations direct, however, may be busy caring for
orphaned puppies one day, and then, because he revised his estimate of
some prior from 0.5 to 0.6, go out and blow up an AI conference the
next day.  (Perhaps this is part of why humans seem to have an evolved
distrust of overly smart people.)

It seems to me that there are societal inefficiencies in this
approach.  AFAIK, the Bayesian formalism doesn't consider things such
as how irreversible the effects of an action are if it turns out to be
wrong, or the advantages from cooperation if everyone biases their
actions to be more like those of others (and hence stops blowing up
everyone else's conferences).  I think that if you posited a society
of Bayesian reasoners, they would have higher total utility if they
agreed on some rules, guidelines, or values.
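A toy sketch of the irreversibility point, with made-up numbers and an
assumed penalty rule (nothing here is standard Bayesian formalism; the
penalty function is purely illustrative):

```python
# Illustrative only: plain expected utility versus a variant that
# penalizes bad outcomes of irreversible actions. All numbers and the
# penalty scheme are assumptions for the sake of the example.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def cautious_utility(outcomes, irreversibility):
    """Weight negative outcomes more heavily when they can't be undone.

    irreversibility in [0, 1]; 1.0 means fully irreversible.
    """
    return sum(p * (u if u >= 0 else u * (1 + irreversibility))
               for p, u in outcomes)

# An action that looks positive on plain expected utility...
risky = [(0.6, 10), (0.4, -10)]
print(expected_utility(risky))       # 2.0
# ...flips to negative once the downside is treated as irreversible.
print(cautious_utility(risky, 1.0))  # -2.0
```

The point is just that the standard expectation is indifferent between a
reversible and an irreversible mistake of the same nominal utility.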

Perhaps the problem with violence in the Middle East is that the
combatants are overly rational.

A Bayesian reasoner, given this idea, might work out that it is
rational to construct a society with mores and laws.  Is that an answer
to the PD superrationality problem - that the Bayesian reasoner
reasons that his utility will be maximized if everyone passes a law
that cooperation is mandatory?
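The standard Prisoner's Dilemma payoff ordering makes the arithmetic
concrete (the particular numbers below are the conventional illustrative
ones, not anything from the superrationality literature):

```python
# Two-player Prisoner's Dilemma with conventional illustrative payoffs.
# Entries are (row player, column player); "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def total_utility(a, b):
    """Sum of both players' payoffs for a pair of moves."""
    pa, pb = PAYOFFS[(a, b)]
    return pa + pb

# Individually, D dominates C, so unregulated reasoners end at (D, D).
print(total_utility("D", "D"))  # 2
# A law mandating cooperation moves everyone to (C, C).
print(total_utility("C", "C"))  # 6
```

So a society-wide rule does raise total utility; the open question in the
paragraph above is whether a Bayesian reasoner would agree to be bound by
it before knowing which side of a given dilemma he will be on.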
