Samantha Atkins wrote:

> Keith Elis wrote:
>
>> There is some unique point in the space of moral calculations where
>> the potential existence of billions of superintelligences outweighs
>> the current existence of one. Not knowing where this point lies, I
>> have to generate my best guess.
>
> This is like saying the potential existence of an embryo outweighs
> the actual existence of the woman whose womb contains it.

Not really. More precisely, it's like saying, 'There is some unique point
in the space of moral calculations where the potential existence of
billions of people outweighs the current existence of two, where these
two are a pregnant woman and a fetus.'

I am interested to know why you believe the potential existence of people
and superintelligences is not worth considering in a moral calculation.
Much of transhumanism seems directed toward enabling the future
existence of transhumans and posthumans. So your actions' effects on the
potential existence of such beings seem important from this perspective.

> It is a spurious argument about hypotheticals being of equal weight to
> actualities. Perhaps you would like to loan me ten million on the
> strength of my earnings as a hypothetical future superintelligence. :-)
> Once I am a superintelligence I will pay you back a million-fold.

Well, this is an interesting loan application. In my last post I assumed
a very low probability of any one human transcending to
superintelligence, and that was under the generous premise that one
superintelligence already existed. In reality, I know of no existing
superintelligences, so I would have to assign an even lower probability
to your chances of transcending into >Samantha.

In addition, this offer amounts to an application for a loan with no
amortization and a 100% balloon due at some unspecified point in the
future. For the sake of argument, let's assume it's a 33-year loan, with
the first superintelligence arriving in 2040. By your offer, the balloon
due is $10 trillion. These parameters are equivalent to an investment
paying me 52% annual compound interest on a principal of $10 million for
33 years (guaranteed by a potential superintelligence), for a final
portfolio value of $10 trillion in 2040. You could get better terms from
a loan shark. This loan sounds incredibly attractive to me, assuming
your credit score is good.
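
For anyone who wants to check that arithmetic, here is a quick
back-of-the-envelope calculation in Python. The 33-year term and the
2040 date are my assumptions from above, not anything in your offer:

principal = 10e6           # $10 million loaned in 2007
balloon = principal * 1e6  # million-fold repayment: $10 trillion
years = 2040 - 2007        # assumed 33-year term

# implied annual compound rate: balloon = principal * (1 + r) ** years
r = (balloon / principal) ** (1.0 / years) - 1
print(f"implied annual compound rate: {r:.1%}")  # roughly 52%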

However, if we assume you have a .0000001 probability of transcension by
2040 (and, as I said above, it would probably be much lower than this in
reality), and that I get exactly $0 back if you don't transcend, then
betting on you has an expected value of around $1 million in 2040 dollars.

Discounted at 3% for inflation over 33 years, that's roughly $377,000 of
expected value in 2007 dollars. I usually avoid deals where $377,000 of
expected value costs me $10 million to obtain.
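
And the same kind of sanity check on the expected-value figures, using
the numbers above:

p_transcend = 1e-7               # assumed probability of transcension by 2040
payoff = 10e12                   # $10 trillion balloon payment
ev_2040 = p_transcend * payoff   # expected value in 2040 dollars
ev_2007 = ev_2040 / 1.03 ** 33   # discounted at 3% over 33 years
print(ev_2040)  # 1000000.0, i.e. $1 million in 2040 dollars
print(ev_2007)  # about 377,000 in 2007 dollars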

I don't accept your loan application. However, I would be willing to
apply for a similar loan from you. :)

 
>> Let's assume, knowing it's possible, that the path from human to
>> superintelligence is ridiculously hard, almost indistinguishable from
>> impossible. Maybe each human has .0000001 probability of transcension.
>> With 6 billion humans, that is 600 superintelligences that will
>> eventually come to exist. Ceteris paribus, that's 600 times the
>> intelligence and capability existing currently.
>
> That is not the way it works. You can't just multiply it out like that.

I am interested in a better way to decide the question.    

Keith

