Unless, of course, that human turns out to be evil and
proceeds to use his power to create The Holocaust Part
II. Seriously, out of all the people in positions of
power, a very large number are nasty jerks who abuse
that power. I can't think of a single great world
power that has not committed atrocities. Why should we
be willing to trust in a human whose motivations are
geared primarily towards producing the largest number
of surviving grandchildren on the African savanna? If
I designed an AI that way, and proposed to give it
superintelligence, everyone would (correctly) denounce
me as an insane crackpot.

 - Tom

--- stephen white <[EMAIL PROTECTED]> wrote:

> On 31/05/2007, at 2:37 PM, Benjamin Goertzel wrote:
> > Eliezer considers my approach too risky in terms of the odds of
> > accidentally creating a nasty AI; I consider his approach to have
> > an overly high risk of delaying the advent of Friendly AI so long
> > that some other nasty danger wrecks humanity before it happens...
> 
> Whereas I think both of you are nuts and that human augmentation
> will happen long before we need to worry about AIs. :)
> 
> Whatever computers can do, they can be plugged into people's brains.
> Your "guaranteed Friendly AI" is going to be... people!
> 
> (before they're processed into Soylent Green, that is! :)
> 
> Steve.
> 
> --
>    [EMAIL PROTECTED]