--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 02/07/07, Tom McCabe <[EMAIL PROTECTED]>
> wrote:
> 
> > The goals will be designed by humans, but the huge
> > prior probability against the goals leading to an AGI
> > that does what people want means that it takes a heck
> > of a lot of design effort to accomplish that.
> 
> Not as much design effort as building a super AI in
> the first place.

I don't know enough about building an AGI to say
whether this is true or not.

> > > Surely the sensible thing is to design
> > > it to do what I say and what I mean, to inform me
> > > of the consequences of its actions as far as it can
> > > predict them, to be truthful, and so on.
> >
> > Yes. This is really, really tricky - that's the
> > point I was trying to make.
> 
> I just don't believe that it would be as difficult
> as you say to
> control even a ridiculously literal-minded AI that
> was programmed to
> simply obey human instructions, even with no other
> safeguards.

"Control" is not the issue. Whether the AGI winds up
doing nasty things is the issue.

> You
> could simply ask it to tell you the expected
> consequences of its
> following your instructions before allowing it to
> act.

There are too many possible consequences of any given
action to list concisely. The AGI may wind up babbling
on about how pieces of the banana peel were thrown
around, with the death of the human who asked for the
banana buried somewhere in the list between the
effects on OSHA and the meat inspectors.

> People would
> deliberately and accidentally use such machines to
> get up to no good,

If "no good" here means "destroying the human
species", then this is not an acceptable risk. It only
takes one existential catastrophe to put an end to the
potential of humankind forever.

> but they would have to contend with a large number
> of other
> human-directed AI's whose job is to monitor
> dangerous and erratic AI
> behaviours.

The idea is that, once you build the first AGI, it
will go through an intelligence explosion and become
powerful enough to prevent other AGIs from taking
control and destroying humanity. 

> Humans manage for the most part to
> control the behaviour
> of other humans so why wouldn't AI's do equally well
> controlling the
> behaviour of other AI's?

Because humans are all on roughly the same level -
they have roughly the same brains, instincts, and body
structure. AGIs do not. One AGI could be more
different from another than you are from a lemur.

> 
> 
> -- 
> Stathis Papaioannou
> 

 - Tom


       
