--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 02/07/07, Tom McCabe <[EMAIL PROTECTED]>
> wrote:
> 
> > > Are you suggesting that the AI won't be smart enough
> > > to understand what people mean when they ask for a
> > > banana?
> >
> > It's not a question of intelligence- it's a question
> > of selecting a human-friendly target in a huge space
> > of possibilities. Why should the AGI care what you
> > "meant"? You asked for a banana, and you got a banana,
> > so what's the problem?
> 
> As proof of concept, we have humans who understand
> what other humans
> mean when they say things. This involves knowledge
> of language and an
> intuitive understanding of human psychology,
> allowing you to pick a
> few meanings out of the vast space of possible
> meanings. You and I do
> this very easily, many times a day. Why should it be
> more difficult
> for an AI, especially a superintelligent AGI, to
> achieve the same
> level of understanding?

It would be vastly easier for a properly programmed
AGI to decipher what we meant than it would be for
humans. The question is: why would the AGI want to
decipher what humans mean, as opposed to the other
2^1,000,000,000 things it could be doing? It would be
vastly easier for me to build a cheesecake than it
would be for a chimp; however, this does not mean I
spend my day running a cheesecake factory. Realize
that, for a random AGI, deciphering what humans mean
is no different in kind from factoring a large number.
Why even bother?

> > What? Why would an AGI whose goals are chosen
> > completely at random even bother with
> > self-replication? Self-replication is a rather
> > unlikely thing for an AGI to do; the series of
> > actions required to self-replicate are complex enough
> > to make it very unlikely that an AGI will do them just
> > by sheer chance.
> 
> I agree, but those who think AIs will evolve to
> destroy us consider
> the possibility that a rapacious, malevolent AI will
> somehow arise
> (whether by accident or design) and have a
> competitive advantage over
> the tame ones.

The idea is that the first AGI to be created will go
through an intelligence explosion and impose its
morality on the world, and that the vast majority of
possible AGIs that could do this are amoral. Thus, if
we just pick an AGI out of a hat without careful
design, it's very, very likely to kill us all.

> 
> -- 
> Stathis Papaioannou
> 

 - Tom


