On Wed, Feb 19, 2003 at 06:37:21PM -0500, Eliezer S. Yudkowsky wrote:
> Similarity in this case may be (formally) emergent, in the sense that 
> most or all plausible initial conditions for a bootstrapping 
> superintelligence - even extremely exotic conditions like the birth of a 
> Friendly AI - exhibit convergence to decision processes that are 
> correlated with each other with respect to the one-shot PD.  If you have 
> sufficient evidence that the other entity is a "superintelligence", that 
> alone may be sufficient correlation.

I'm not seeing why this is true. Can you walk through the math for
me, and make your assumptions formal and explicit? 

In the case where there is a significant probability that the other
player is running the EXACT same decision algorithm as you, I can see
how it works out:

    P(other player cooperates | I cooperate)
        >= P(other player is running the exact same decision algorithm as me)

which holds because the exact-same-algorithm case leaves no room for our
choices to diverge:

    P(other player defects AND I cooperate
        AND other player is running the exact same decision algorithm as me) = 0.

(Actually, now that I've written out the formal assumption, I think
there may be a problem with this argument. But before I work out the
full details, can you confirm that this is what you have in mind?)

So how do you derive the conclusion that this conditional probability is
large when there is only a small probability that the other player is
running the exact same algorithm as you? It must have something to do with 
this "correlation" that you talk about, but I'm not sure what it means 
in a formal sense.
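
One candidate formalization, which may or may not be what is intended
here, is to treat the two players' choices as 0/1 random variables and
read "correlation" as their Pearson correlation (the phi coefficient
for binary variables). On that reading the conditional probability is
pinned down by the two marginals and rho, as in this sketch (parameter
values illustrative; for unequal marginals, rho is constrained so the
joint probabilities stay in [0, 1]):

    import math

    def p_other_coop_given_i_coop(p_me, p_other, rho):
        # P(other cooperates | I cooperate) for two 0/1 choices with
        # marginal cooperation probabilities p_me, p_other and Pearson
        # correlation rho: P(Y=1 | X=1) = p_other + Cov(X, Y) / p_me.
        cov = rho * math.sqrt(p_me * (1 - p_me) * p_other * (1 - p_other))
        return p_other + cov / p_me

    # rho = 0 recovers independence; with equal marginals, rho = 1
    # forces the other player to mirror me exactly.
    for rho in (0.0, 0.3, 0.7, 1.0):
        print(rho, p_other_coop_given_i_coop(0.5, 0.5, rho))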
