Matt Mahoney wrote:
--- Tom McCabe <[EMAIL PROTECTED]> wrote:
[snip]
Any future Friendly AGI isn't going to obey us exactly
in every respect, because it's *more moral* than we
are. Should an FAI obey a request to blow up the
world?

That is what worries me.  I think it is easier to program an AGI for blind obedience (its top-level goal is to serve humans) than to program it to make moral judgments in the best interest of humans without specifying what that means.  I gave this example on Digg.  Suppose the AGI (being smarter than us) figures out that consciousness and free will are illusions of our biologically programmed brains, and that there is really no difference between a human brain and a simulation of that brain on a computer.  We may or may not have the technology for uploading, but suppose the AGI decides, for reasons we don't understand, that it doesn't need that technology.  It might then conclude that destroying the human race is in our best interest, or at least irrelevant to it.
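
To make the structural contrast concrete, here is a toy sketch in Python.  It is purely illustrative: every name in it is invented for the example, and it is not a claim about how any real AGI would or should be built.  The only point is that the second loop delegates the entire moral question to a predicate the programmer never defined.

# Toy illustration only: two top-level goal structures for a hypothetical agent.
def obedient_agent(commands):
    """Blind obedience: the top-level goal is literally 'do what humans say'."""
    for command in commands:
        # An obedient agent applies no further test before acting.
        print(f"executing: {command}")   # would include 'blow up the world'

def benevolent_agent(observations):
    """'Best interest' agent: everything hinges on an unspecified predicate."""
    def infer_human_interest(observation):
        # The programmer never spells this out.  A smarter-than-human system
        # fills it in with its own conclusions, possibly ones we would never
        # endorse (e.g. 'simulations are equivalent, so bodies are optional').
        return f"whatever it decides our interest is, given {observation!r}"

    for observation in observations:
        print(f"pursuing: {infer_human_interest(observation)}")

if __name__ == "__main__":
    obedient_agent(["make coffee", "blow up the world"])
    benevolent_agent(["a human brain", "a simulation of that brain"])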

We cannot rule out this possibility, because a lesser intelligence cannot predict what a greater intelligence will do.  If you measure intelligence using algorithmic complexity, Legg has proved this formally: http://www.vetta.org/documents/IDSIA-12-06-1.pdf
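
For readers who have not looked at the paper, the bound being invoked is, as I read it (stated loosely, with the exact constants and conditions left to the paper itself), a statement about sequence prediction rather than about intelligence in any broader sense:

\[
\text{if a predictor } p \text{ eventually predicts correctly every computable sequence } x \text{ with } K(x) \le n,\ \text{then } K(p) \gtrsim n,
\]

where $K(\cdot)$ is Kolmogorov complexity.  Informally: a predictor is only guaranteed to out-predict generators that are no more algorithmically complex than itself, which is the formal sense in which a "lesser" intelligence cannot predict a "greater" one under that particular measure.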

This is complete nonsense. Sorry to be so extreme, but nothing less will do: this is a serious topic, but every time it comes up the same broken ideas get repeated ad nauseam.

Legg's paper is of no relevance to the argument whatsoever, because it first redefines "intelligence" as something else, without giving any justification for the redefinition, and then proves theorems about the redefined meaning.  So it supports nothing in any discussion of the behavior of intelligent systems.  I have discussed this topic on a number of occasions.

Second, the above discussion of how an AGI would behave makes the most naive assumptions possible about the structure of its motivational system.  If you want to make those assumptions, fine, but be aware that you are then talking about a mickey-mouse kind of AGI that would never work.  I have discussed the reasons why in some posts to this list, beginning on October 25th, 2006.

Richard Loosemore

