--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 02/07/07, Tom McCabe <[EMAIL PROTECTED]>
> wrote:
> 
> > > For
> > > example, if it has as its most important goal
> > > obeying the commands of
> > > humans, that's what it will do.
> >
> > Yup. For example, if a human said "I want a banana",
> > the fastest way for the AGI to get the human a banana
> > may be to detonate a kilogram of RDX, launching the
> > banana into the human at Mach 7. This is clearly not
> > going to work.
> 
> Are you suggesting that the AI won't be smart enough
> to understand
> what people mean when they ask for a banana?

It's not a question of intelligence; it's a question
of selecting a human-friendly target in a huge space
of possibilities. Why should the AGI care what you
"meant"? You asked for a banana, and you got a banana,
so what's the problem?

> > > It won't try to find
> > > some way out of
> > > it, because that assumes it has some other goal
> > > which trumps obeying
> > > humans.
> >
> > The AGI will follow its goals; the problem isn't that
> > it will seek some way to avoid following its goals,
> > but that its goals do not match up exactly with the
> > enormously complicated range of human desires.
> 
> You would think that an AI could have a go at
> solving the problem of
> how a human can specify goals so that an AI does
> what is required.

For a problem space of n bits, there are 2^n possible
problems that can be stated. If the complexity of human
goals is one thousand bits (a ridiculously low
estimate), then there are 2^1000 - 1, i.e.
10,715,086,071,862,673,209,484,250,490,600,018,105,614,048,117,055,336,074,437,503,883,703,510,511,249,361,224,931,983,788,156,958,581,275,946,729,175,531,468,251,871,452,856,923,140,435,984,577,574,698,574,803,934,567,774,824,230,985,421,074,605,062,371,141,877,954,182,153,046,474,983,581,941,267,398,767,559,165,543,946,077,062,914,571,196,477,686,542,167,660,429,831,652,624,386,837,205,668,069,375
(whew!), possible other problems the AGI could solve,
each simpler to state than the problem of "figuring out
what human goals are". Quite clearly the AGI is never
going to solve that particular problem unless we're
very, very, very careful about building it; an AGI
isn't going to hit on it by guessing or dumb luck,
because it will never single out this one problem in
the huge space of possibilities.
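
If you want to check the arithmetic, here's a minimal
Python sketch (assuming the same illustrative figure of
one thousand bits used above):

    # One problem in the space is "figure out what human
    # goals are"; the remaining 2**n_bits - 1 are the
    # "other problems" counted above.
    n_bits = 1000
    other_problems = 2**n_bits - 1
    print(other_problems)            # the 302-digit number above, without commas
    print(len(str(other_problems)))  # 302

Python integers have arbitrary precision, so the printed
value matches the figure quoted above digit for digit.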

> Nothing is ever guaranteed, but we cope with the
> potential problem of humans controlling dangerous
> technologies, and humans can be pretty erratic.

Eventually, if we keep developing more and more
dangerous technologies, we are guaranteed to blow
ourselves up. We have not yet come up with a solution
which can last for a billion years. Fifty, a hundred,
maybe, but not a billion.

> > > If it is forced to randomly change its goals at
> > > regular intervals then it might become disobedient,
> > > but not otherwise.
> >
> > How would you force a superintelligent AGI to change
> > its goals?
> 
> If its top-level goal is to allow its other goals to
> vary randomly, then evolution will favour those AIs
> which decide to spread and multiply, perhaps consuming
> humans in the process.

What? Why would an AGI whose goals are chosen
completely at random even bother with
self-replication? Self-replication is a rather
unlikely thing for an AGI to do; the sequence of
actions required to self-replicate is complex enough
to make it very unlikely that an AGI would perform
them by sheer chance.

> Building an AI like
> this would be like building a bomb and letting it
> decide when and
> where to go off.

No, it would be like building a bomb and then promptly
pushing the trigger. Foom, we're dead.

> 
> -- 
> Stathis Papaioannou
> 

 - Tom


      
