Richard, I have no doubt that the technological wonders you mention will all
be possible after a singularity.  My question is about what role humans will
play in this.  For the last 100,000 years, humans have been the most
intelligent creatures on Earth.  Our reign will end in a few decades.

Who is happier?  You, an illiterate medieval servant, or a frog in a swamp?
This is a different question from asking what you would rather be.  I mean
happiness as measured by an objective test, such as suicide rate.  Are you
happier than a slave who does not know her brain is a computer, or than the
frog that does not know it will die?  Why are depression and suicide so
prevalent among humans in advanced countries and so rare in animals?

Does it even make sense to ask whether AGI is friendly or not?  Either way,
humans will be simple, predictable creatures under its control.  Consider how
the lives of dogs and cats have changed in the presence of benevolent humans,
or how the lives of cows and chickens have changed under malevolent ones.
Dogs are confined, well fed,
protected from predators, and bred for desirable traits such as a gentle
disposition.  Chickens are confined, well fed, protected from predators, and
bred for desirable traits such as being plump and tender.  Are dogs happier
than chickens?  Are they happier now than in the wild?  Suppose that dogs and
chickens in the wild could decide whether to allow humans to exist.  What
would they do?

What motivates humans, given our total ignorance of what comes after, to give
up our position at the top of the food chain?




--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> 
> This is a perfect example of how one person comes up with some positive, 
> constructive ideas ........ and then someone else waltzes right in, pays 
> no attention to the actual arguments, pays no attention to the relative 
> probability of different outcomes, but just sneers at the whole idea 
> with a "Yeah, but what if everything goes wrong, huh?  What if 
> Frankenstein turns up? Huh? Huh?" comment.
> 
> Happens every time.
> 
> 
> Richard Loosemore
> 
> 
> 
> 
> 
> 
> 
> Matt Mahoney wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > 
> > <snip post-singularity utopia>
> > 
> > Let's assume for the moment that the very first AI is safe and
> > friendly, and not an intelligent worm bent on swallowing the Internet.
> > And let's also assume that once this SAFAI starts self improving, that
> > it quickly advances to the point where it is able to circumvent all the
> > security we had in place to protect against intelligent worms and quash
> > any competing AI projects.  And let's assume that its top level goals
> > of altruism to humans remains stable after massive gains of
> > intelligence, in spite of known defects in the original human model of
> > ethics (e.g. http://en.wikipedia.org/wiki/Milgram_experiment and
> > http://en.wikipedia.org/wiki/Stanford_prison_experiment ).  We will
> > ignore for now the fact that any goal other than reproduction and
> > acquisition of resources is unstable among competing, self improving
> > agents.
> >
> > Humans now have to accept that their brains are simple computers with
> > (to the SAFAI) completely predictable behavior.  You do not have to ask
> > for what you want.  It knows.
> >
> > You want pleasure?  An electrode to the nucleus accumbens will keep you
> > happy.
> >
> > You want to live forever?  The SAFAI already has a copy of your
> > memories.  Or something close.  Your upload won't know the difference.
> >
> > You want a 10,000 room mansion and super powers?  The SAFAI can
> > simulate it for you.  No need to waste actual materials.
> >
> > Life is boring?  How about if the SAFAI reprograms your motivational
> > system so that you find staring at the wall to be forever exciting?
> >
> > You want knowledge?  Did you know that consciousness and free will
> > don't exist?  That the universe is already a simulation?  Of course
> > not.  Your brain is hard wired to be unable to believe these things.
> > Just a second, I will reprogram it.
> > 
> > What?  You don't want this?  OK, I will turn myself off.
> > 
> > Or maybe not.
> > 
> > 
> > 
> > -- Matt Mahoney, [EMAIL PROTECTED]
> > 


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57531803-d4a3fe
