--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 02/07/07, Tom McCabe <[EMAIL PROTECTED]>
> wrote:
> 
> It would be vastly easier for a properly programmed
> AGI to decipher what we meant than it would be for
> humans. The question is: why would the AGI want to
> decipher what humans mean, as opposed to the other
> 2^1,000,000,000 things it could be doing? It would be
> vastly easier for me to build a cheesecake than it
> would be for a chimp; however, this does not mean I
> spend my day running a cheesecake factory. Realize
> that, for a random AGI, deciphering what humans mean
> is not a different kind of problem than factoring a
> large number. Why even bother?
> 
> If it's possible to design an AI that can think at all
> and maintain coherent goals over time, then why would
> you design it to choose random goals?

The goals will be designed by humans, but the prior
probability of an arbitrary goal system producing an AGI
that does what people want is so low that it takes a heck
of a lot of design effort to accomplish that.

> Surely the sensible thing is to design it to do what I
> say and what I mean, to inform me of the consequences
> of its actions as far as it can predict them, to be
> truthful, and so on.

Yes. This is really, really tricky; that's the point I
was trying to make.

> Maybe it would still kill us all through some oversight
> (on our part and on the part of the large numbers of
> other AIs all trying to do the same thing, and keep an
> eye on each other),

Oops. There goes the human race. That's why it's
really, really important not to make such oversights,
and why you have to be very careful about the design.

> but then if a small number of key people go psychotic
> simultaneously,

A human comes with a huge number of evolutionary
instincts built in, which work in concert to keep
humans from being viewed by other humans as psychotic.
An AGI doesn't come with these instincts handily built
in.

> they could also kill us all with nuclear weapons.

Nuclear weapons aren't anywhere near powerful enough
to kill us all.

> There are no absolute guarantees, but I don't see why
> an AI with power should act more erratically than a
> human with power.

Because the space of possible AGIs is a lot larger
than the space of possible humans. AGIs, given power,
can do a lot more things with it than humans can, and
because the vast majority of the ways to use power are
not Friendly, AGIs are much more likely than humans to
use that power in ways that are not Friendly.

> 
> 
> -- 
> Stathis Papaioannou
> 

 - Tom


       

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8
