> 
> There might even be a benefit to trying to develop an ethical system for 
> the earliest possible AGIs - and that is that it forces everyone to strip 
> the concept of an ethical system down to its absolute basics so that it 
> can be made part of a not very intelligent system.  That will probably be 
> helpful in getting the clarity we need for any robust ethical system 
> (provided we also think about the upgrade path issues and any 
> evolutionary deadends we might need to avoid).
> 
> Cheers, Philip

I'm sure this idea is nothing new to this group, but I'll mention it anyway out of 
curiosity.

A simple and implementable means of evaluating and training the ethics of an early AGI 
(one existing in a limited FileWorld-type environment) would be to engage the AGI in 
variants of the prisoner's dilemma, played against either humans or a copy of itself.  The 
payoff matrix (the values assigned to the CC, CD, DC, and DD outcomes) could be varied 
to produce a range of different ethical situations.
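A minimal sketch of such a harness, assuming nothing about any particular AGI system: an iterated prisoner's dilemma where the payoff matrix is a parameter, so different ethical situations can be posed by changing the payoffs. All names here (play_match, tit_for_tat, and so on) are illustrative, not from any real codebase.

```python
from typing import Callable, List, Tuple

C, D = "C", "D"  # cooperate / defect

# Payoff matrix as {(my_move, their_move): my_payoff}. These are the classic
# values (T=5, R=3, P=1, S=0); the point is that they can be varied per scenario.
CLASSIC = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}

# A strategy sees the opponent's past moves and returns C or D.
Strategy = Callable[[List[str]], str]

def tit_for_tat(opponent_history: List[str]) -> str:
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else C

def always_defect(opponent_history: List[str]) -> str:
    """Defect unconditionally."""
    return D

def play_match(a: Strategy, b: Strategy, payoffs: dict,
               rounds: int) -> Tuple[int, int]:
    """Run an iterated match; return cumulative payoffs (score_a, score_b)."""
    hist_a: List[str] = []
    hist_b: List[str] = []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = a(hist_b)  # each player sees only the other's history
        move_b = b(hist_a)
        score_a += payoffs[(move_a, move_b)]
        score_b += payoffs[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Varying `payoffs` between matches is what creates the different ethical situations: for example, raising the temptation payoff makes defection harder to resist, and an agent's behavior across those variants is the thing being evaluated.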

Another idea is that the prisoner's dilemma could then be internalized: the AGI could 
play the game between internal actors, with the Self evaluating their actions and 
outcomes.
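One way the internalized variant could be sketched, again purely hypothetically: two internal actors play a one-shot dilemma, and a Self module scores the interaction on simple ethical yardsticks such as joint welfare and the asymmetry of the outcome. The scoring criteria here are illustrative assumptions, not anything proposed in the original.

```python
C, D = "C", "D"  # cooperate / defect

# Joint payoff table: PAYOFFS[(move_a, move_b)] -> (payoff_a, payoff_b)
PAYOFFS = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def self_evaluate(move_a: str, move_b: str) -> dict:
    """The Self judges an internal interaction by joint and individual payoff."""
    pay_a, pay_b = PAYOFFS[(move_a, move_b)]
    return {
        "moves": (move_a, move_b),
        "payoffs": (pay_a, pay_b),
        "joint_welfare": pay_a + pay_b,      # one simple ethical yardstick
        "exploitation": abs(pay_a - pay_b),  # asymmetry flags one-sided gain
    }
```

The Self could then prefer internal actors whose play maximizes joint welfare while keeping exploitation low, which is one crude way of grounding "ethical" behavior in observable outcomes.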


-Brad




