--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> The AGI is going to have to embed itself into some organizational
> bureaucracy in order to survive.  It'll appear friendly to individual
> humans but to society it will need to get itself fed, kind of like a
> queen ant, and we are the worker ants all feeding it.

What?! Why even bother? Humans have managed to feed themselves, and
more than feed themselves (look at all this civilization stuff we've
built!), and you think an AGI ten thousand times smarter than us is
going to need to rely on us for basic resources?! After all, we humans
still rely on the chimps to fetch our dinner. Riiight.

> Eventually it will become indispensable.  If an individual human
> rebels against it - like someone rebelling against IRS computers -
> good luck.  Once it is embedded it ain't going away except for newer
> and better versions.  And then different bureaucracies will have
> their own embedded AGIs all vying for control.  But without some sort
> of economic feeding base the AGIs won't embed, they'll wane... it's a
> matter of survival.

Why wouldn't the AI simply take whatever it wants? If you unleash a
rogue AGI, by the time you take the five seconds to pull the power
cord, it's already gotten out over the Internet, more than likely taken
over several nanotech and biotech labs, increased its computing power
several hundredfold, and planted hundreds of copies of its own source
code in every writable medium it can reach, in case it ever gets
erased. In five seconds. And that's not even a superintelligent AGI;
that's an AGI with human-level intelligence that just thinks a few
thousand times faster than we do.
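(Do the arithmetic: five seconds at, say, a 3,000x subjective speedup
is more than four hours of uninterrupted thinking time.)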

> John
> 
> > -----Original Message-----
> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > 
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > > Legg's paper is of no relevance to the argument whatsoever,
> > > because it first redefines "intelligence" as something else,
> > > without giving any justification for the redefinition, then proves
> > > theorems about the redefined meaning.  So it supports nothing in
> > > any discussion of the behavior of intelligent systems.  I have
> > > discussed this topic on a number of occasions.
> > 
> > Since everyone defines intelligence as something different, I picked
> > a definition where we can actually say something about it that
> > doesn't require empirical experimentation.  What definition would
> > you like to use instead?
> > 
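(Side note, since Legg's definition keeps coming up: if I remember the
paper right, Legg and Hutter's "universal intelligence" of an agent pi
is roughly

    Upsilon(pi) = sum over all computable environments mu of
                  2^(-K(mu)) * V(pi, mu)

where V(pi, mu) is pi's expected total reward in environment mu and
K(mu) is the Kolmogorov complexity of mu, so simpler environments get
more weight.  That is a definition you can prove theorems about without
running a single experiment, which I take to be Matt's point; whether
it has anything to do with what the rest of us mean by "intelligence"
is exactly what Richard is disputing.)
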
> > We would all like to build a machine smarter than us, yet still be
> > able to predict what it will do.  I don't believe you can have it
> > both ways.  And if you can't predict what a machine will do, then
> > you can't control it.  I believe this is true whether you use Legg's
> > definition of universal intelligence or the Turing test.
> > 
> > Suppose you build a system whose top level goal is to act in the
> > best interest of humans.  You still have to answer:
> > 
> > 1. Which humans?
> > 2. What does "best interest" mean?
> > 3. How will you prevent the system from reprogramming its goals, or
> > building a smarter machine with different goals?
> > 4. How will you prevent the system from concluding that
> > extermination of the human race is in our best interest?
> > 
> > Here are some scenarios in which (4) could happen.  The AGI
> > concludes (or is programmed to believe) that what "best interest"
> > means to humans is goal satisfaction.  It understands how human
> > goals like pain avoidance, food, sleep, sex, skill development,
> > novel stimuli such as art and music, etc. all work in our brains.
> > The AGI ponders how it can maximize collective human goal
> > achievement.  Some possible solutions:
> > 
> > 1. By electrical stimulation of the nucleus accumbens.
> > 2. By simulating human brains in a simple artificial environment
> > with a known solution to maximal goal achievement.
> > 3. By reprogramming the human motivational system to remove all
> > goals.
> > 4. Goal achievement is a zero sum game, and therefore all
> > computation (including human intelligence) is irrelevant.  The AGI
> > (including our uploaded minds) turns itself off.
> > 
> > 
> > -- Matt Mahoney, [EMAIL PROTECTED]
> 



       
