Ben said,

>When the system is smart enough, it will learn to outsmart the posited
>Control Code, and the ethics-monitor AGI....

This isn't apparent at all, given that the Control Code could be pervasively
embedded and keyed to things beyond the AGI's control.  The idea is to limit
the AGI and control its progress as we wish.  I just don't see the risk that
the AGI will suddenly become so intelligent that it is able to "jump out of
the box" in a near-supernatural fashion, as some seem to fear.

Someone once said that a cave can trap and control a man, even though the
cave is dumb rock.  We are considerably more intelligent than granite, so I
would not hesitate to believe that we can control an AGI that we create.

Of course, the details of a sophisticated "kill switch" would depend on the
architecture of the system, and be beyond the scope of this casual conversation.
But to dismiss it out of hand as conceptually ineffectual is rather
puzzling.

Kevin Copple
