Abram,

Let's say that the builders want to keep things safe and simple for
starters, and concentrate on the best possible AGI theorem-prover, rather
than some complex do-gooding machine.

The best way for the machine to achieve its assigned goal is to improve not
only its own software but also its hardware, and so, by hook or by crook,
with trickiness and wile (remember, this is an Artificial _General_
Intelligence, not just a glorified Deep Blue; if necessary, it
improves its own wiliness), it converts
Planet Earth into silicon chips (or actually, into better-than-silicon
hardware that it invents if necessary; call it "computronium").

Of course, the AGI builder would put in safeguards to keep this from
happening, but when you start trying to figure out what safeguards would
work on something which is _smarter_than_you_, you find yourself deep into
full-fledged Friendliness research before you know it.

(The above is just my modest effort to summarize Yudkowsky's writings,
which express all this better than I do.)

Joshua



2007/5/27, Abram Demski <[EMAIL PROTECTED]>:

Joshua Fox, could you give an example scenario of how an AGI theorem-prover
would wipe out humanity?
