--- Nathan Barna <[EMAIL PROTECTED]> wrote:
> On 9/12/06, Stephen Reed <[EMAIL PROTECTED]> wrote:
> > such.  How do you imagine a safe upload of all humanity
> > would unfold?
>
> Other good exploitive persuasion will probably come along, but it
> could be conventional that each person uploads into her infinitesimal
> polymorphic hypercomputer and then, afterward, when not being
> friendly becomes laughably impossible, externally communicates
> according to volition.
Hm.  For the sake of your point, I suppose that all humans upload
according to your scheme and no bodies remain to be harmed by the AIs.
But I wonder how safe separate AIs would be prior to their voluntary
federation.  Would it be possible that infinitesimal polymorphic
hypercomputers are in fact obscure enough to be unharmed by a
malicious AI born of the same technology?  Do you imagine a safe
exodus from Planet Earth to obscurity and a subsequent voluntary
return from the diaspora?

The point of my questions is that I believe it is ultimately simpler
and safer to devise, prior to uploading, an AI that behaves in a
friendly fashion, following human laws and ethics, even if it is
smarter than its human developers.

The US military will have to address this issue soon with upcoming
autonomous weapon systems.  Presently, the safeguards consist of
keeping a human in the control loop.  But how can the behavior of a
fully autonomous robot be controlled through its design alone?
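To make the contrast concrete, here is a minimal, purely illustrative
sketch in Python (hypothetical names, not any actual system's design or
API): the first loop blocks every irreversible action on an operator's
confirmation, while the second loop has only whatever policy its
designers managed to build in.

    # Purely illustrative sketch -- hypothetical names, not any real
    # system's design or API.  It contrasts "human in the control loop"
    # (irreversible actions wait for an operator's approval) with full
    # autonomy (the only veto is the machine's own policy).

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Action:
        description: str
        irreversible: bool

    def operator_approves(action: Action) -> bool:
        # Human in the control loop: a person must explicitly confirm.
        answer = input("Authorize '%s'? [y/N] " % action.description)
        return answer.strip().lower() == "y"

    def execute(action: Action) -> None:
        print("Executing: %s" % action.description)

    def supervised_loop(actions: List[Action]) -> None:
        # Today's safeguard: irreversible actions block on human approval.
        for action in actions:
            if action.irreversible and not operator_approves(action):
                print("Vetoed by operator: %s" % action.description)
                continue
            execute(action)

    def autonomous_loop(actions: List[Action],
                        policy: Callable[[Action], bool]) -> None:
        # Fully autonomous: the only veto is whatever `policy` was
        # designed to be.  The open question is how to make that policy
        # trustworthy by construction.
        for action in actions:
            if policy(action):
                execute(action)
            else:
                print("Declined by policy: %s" % action.description)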
-Steve  
