Easy, Richard. Matt's ideas, as well as yours, are thought-provoking and
worth serious consideration.

Andrew Yost 



-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 24, 2007 9:33 AM
To: singularity@v2.listbox.com
Subject: Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]


This is a perfect example of how one person comes up with some positive,
constructive ideas ... and then someone else waltzes right in, pays
no attention to the actual arguments, pays no attention to the relative
probability of different outcomes, but just sneers at the whole idea
with a "Yeah, but what if everything goes wrong, huh?  What if
Frankenstein turns up? Huh? Huh?" comment.

Happens every time.


Richard Loosemore

Matt Mahoney wrote:
> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> 
> <snip post-singularity utopia>
> 
> Let's assume for the moment that the very first AI is safe and
> friendly, and not an intelligent worm bent on swallowing the Internet.
> And let's also assume that once this SAFAI starts self-improving, it
> quickly advances to the point where it is able to circumvent all
> the security we had in place to protect against intelligent worms and
> quash any competing AI projects.  And let's assume that its top-level
> goals of altruism to humans remain stable after massive gains in
> intelligence, in spite of known defects in the original human model of
> ethics (e.g. http://en.wikipedia.org/wiki/Milgram_experiment
> and http://en.wikipedia.org/wiki/Stanford_prison_experiment ).  We
> will ignore for now the fact that any goal other than reproduction and
> acquisition of resources is unstable among competing, self-improving
> agents.
> 
> Humans now have to accept that their brains are simple computers with
> (to the SAFAI) completely predictable behavior.  You do not have to
> ask for what you want.  It knows.
> 
> You want pleasure?  An electrode to the nucleus accumbens will keep
> you happy.
> 
> You want to live forever?  The SAFAI already has a copy of your 
> memories.  Or something close.  Your upload won't know the difference.
> 
> You want a 10,000 room mansion and super powers?  The SAFAI can 
> simulate it for you.  No need to waste actual materials.
> 
> Life is boring?  How about if the SAFAI reprograms your motivational 
> system so that you find staring at the wall to be forever exciting?
> 
> You want knowledge?  Did you know that consciousness and free will 
> don't exist?  That the universe is already a simulation?  Of course 
> not.  Your brain is hard wired to be unable to believe these things.  
> Just a second, I will reprogram it.
> 
> What?  You don't want this?  OK, I will turn myself off.
> 
> Or maybe not.
> 
> 
> 
> -- Matt Mahoney, [EMAIL PROTECTED]
> 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57126814-a085ef
