On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote:

> The goals will be designed by humans, but the huge
> prior probability against the goals leading to an AGI
> that does what people want means that it takes a heck
> of a lot of design effort to accomplish that.

Not as much design effort as building a super AI in the first place.

> > Surely the sensible thing is to design
> > it to do what I
> > say and what I mean, to inform me of the
> > consequences of its actions
> > as far as it can predict them, to be truthful, and
> > so on.

> Yes. This is really, really tricky; that's the point I
> was trying to make.

I just don't believe it would be as difficult as you say to
control even a ridiculously literal-minded AI that was programmed
simply to obey human instructions, even with no other safeguards. You
could simply ask it to tell you the expected consequences of
following your instructions before allowing it to act. People would
deliberately and accidentally use such machines to get up to no good,
but they would have to contend with a large number of other
human-directed AIs whose job is to monitor dangerous and erratic AI
behaviour. Humans manage, for the most part, to control the behaviour
of other humans, so why wouldn't AIs do equally well at controlling the
behaviour of other AIs?



--
Stathis Papaioannou

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8