On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote:

It would be vastly easier for a properly programmed
AGI to decipher what we meant than it would be for
humans. The question is: why would the AGI want to
decipher what humans mean, as opposed to the other
2^1,000,000,000 things it could be doing? It would be
vastly easier for me to build a cheesecake than it
would be for a chimp; however, this does not mean I
spend my day running a cheesecake factory. Realize
that, for a random AGI, deciphering what humans mean
is not a different kind of problem than factoring a
large number. Why even bother?

If it's possible to design an AI that can think at all and maintain
coherent goals over time, then why would you design it to choose
random goals? Surely the sensible thing is to design it to do what I
say and what I mean, to inform me of the consequences of its actions
as far as it can predict them, to be truthful, and so on. Maybe it
would still kill us all through some oversight (on our part and on the
part of the large numbers of other AIs all trying to do the same
thing while keeping an eye on each other), but then if a small number of
key people went psychotic simultaneously, they could also kill us all
with nuclear weapons. There are no absolute guarantees, but I don't
see why an AI with power should act more erratically than a human with
power.



--
Stathis Papaioannou
