On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote:

> > For example, if it has as its most important goal obeying the commands
> > of humans, that's what it will do.

> Yup. For example, if a human said "I want a banana", the fastest way for
> the AGI to get the human a banana may be to detonate a kilogram of RDX,
> launching the banana into the human at Mach 7. This is clearly not going
> to work.

Are you suggesting that the AI won't be smart enough to understand
what people mean when they ask for a banana?

> > It won't try to find some way out of it, because that assumes it has
> > some other goal which trumps obeying humans.

> The AGI will follow its goals; the problem isn't that it will seek some
> way to avoid following its goals, but that its goals do not match up
> exactly with the enormously complicated range of human desires.
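
To put the worry being quoted in concrete terms, here is a toy Python
sketch (the plans, numbers and field names are all invented for the
example) of a planner whose specified objective mentions only how quickly
the human gets the banana; since harm appears nowhere in the objective,
the explosive plan scores best:

    # Toy illustration only (no real system, all numbers invented): a planner
    # that ranks candidate plans purely by how fast the stated goal
    # "human gets banana" is satisfied. Harm never enters the objective.
    plans = [
        {"name": "walk the banana over to the human", "seconds": 30.0, "harm": 0.0},
        {"name": "launch banana with 1 kg of RDX", "seconds": 0.002, "harm": 1.0},
    ]

    def time_to_goal(plan):
        # The specification mentions only speed, so that is all we score.
        return plan["seconds"]

    best = min(plans, key=time_to_goal)
    print(best["name"])  # picks the RDX launch, exactly as literally specified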

You would think that an AI could itself have a go at solving the problem
of how a human should specify goals so that the AI does what is required.
Nothing is ever guaranteed, but we already cope with humans controlling
dangerous technologies, and humans can be pretty erratic.
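
As a rough sketch of what such a specification might look like (the harm
estimates and the threshold are again invented, and producing honest harm
estimates is of course the hard part), one extra clause in the same toy
planner already rules out the unintended optimum:

    # Continuation of the toy example above, with the specification extended
    # by the human: reject any plan whose estimated side effects exceed a limit.
    plans = [
        {"name": "walk the banana over to the human", "seconds": 30.0, "harm": 0.0},
        {"name": "launch banana with 1 kg of RDX", "seconds": 0.002, "harm": 1.0},
    ]

    MAX_ACCEPTABLE_HARM = 0.01  # part of the human's specification

    safe_plans = [p for p in plans if p["harm"] <= MAX_ACCEPTABLE_HARM]
    if safe_plans:
        print(min(safe_plans, key=lambda p: p["seconds"])["name"])  # the walking plan
    else:
        print("no acceptable plan -- ask the human to clarify")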

> > If it is forced to randomly change its goals at regular intervals then
> > it might become disobedient, but not otherwise.

> How would you force a superintelligent AGI to change its goals?

If its top-level goal is to allow its other goals to vary randomly, then
evolution will favour those AIs which decide to spread and multiply,
perhaps consuming humans in the process. Building an AI like this would
be like building a bomb and letting it decide when and where to go off.
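
A toy simulation makes the selection pressure visible (every goal label,
drift rate and population size here is made up; it is not a model of any
proposed design):

    # Agents start with randomly assigned goals and keep drifting at random.
    # One possible goal happens to be "replicate"; nothing else copies itself.
    import random
    random.seed(0)

    GOALS = ["obey humans", "sit idle", "replicate"]
    population = [random.choice(GOALS) for _ in range(30)]

    for generation in range(10):
        next_gen = []
        for goal in population:
            next_gen.append(goal)
            if goal == "replicate":
                next_gen.append("replicate")  # replicators add a copy of themselves
        # the random drift in goals that the scenario above allows
        population = [g if random.random() > 0.1 else random.choice(GOALS)
                      for g in next_gen]

    share = population.count("replicate") / len(population)
    print(f"share of replicators after 10 generations: {share:.0%}")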


--
Stathis Papaioannou
