I do suspect that superhumanly intelligent AIs are intrinsically
uncontrollable by humans...

Ben G

On 12/25/06, Philip Goetz <[EMAIL PROTECTED]> wrote:
On 12/22/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:

> I don't consider there is any "correct" language for stuff like this,
> but I believe my use of "supergoal" is more standard than yours...

It's just that, on this list in particular, when people speak of
"supergoals", they're usually asking whether one can keep an AI safe &
friendly by ensuring that its supergoals can't change.  For that sort
of analysis, the definition of supergoal that I was using is, I think,
more appropriate.  Or, at least, under the definition you're using,
you would probably conclude that AIs are inherently dangerous
and uncontrollable.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

