--- On Thu, 6/12/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> But it doesn't work for full-fledged AGI. Suppose you are a young man
> who's always been taught not to get yourself killed, and not to kill
> people (as top priorities). You are confronted with your country being
> invaded and faced with the decision to join the defense, with a high
> likelihood of both.
>
> If you have a fixed-priority utility function, you can't even THINK
> ABOUT the choice. Your pre-choice function will always say "Nope,
> that's bad" and you'll be unable to change. (This effect is intended
> in all the RSI stability arguments.)
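
To make the effect Josh describes concrete, here is a toy sketch in
Python. The 0.5 threshold and the payoff numbers are invented for
illustration; this is not anyone's actual design:

    # Toy fixed-priority agent: the top-priority test vetoes an
    # option before its other consequences are ever weighed.

    def utility(option):
        risk_of_death, value_if_survived = option
        if risk_of_death > 0.5:    # fixed top priority: don't get killed
            return float("-inf")   # veto: nothing else is considered
        return value_if_survived

    options = {
        "join the defense": (0.6, 100.0),  # high risk, high value to the agent
        "stay home":        (0.1, 10.0),
    }

    best = max(options, key=lambda name: utility(options[name]))
    print(best)  # always "stay home"; the trade-off is never examined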

The goals in Josh's example are learned goals, not top-level goals. Humans 
have no top-level goal to avoid death. The top-level goals are to avoid pain, 
hunger, and the hundreds of other things that reduce the likelihood of passing 
on your genes. These goals exist in animals and in children who do not yet 
know about death.

Learned goals such as respect for human life can easily be unlearned, as 
demonstrated by controlled experiments as well as by many accounts of wartime 
atrocities committed by people who were not always evil.
http://en.wikipedia.org/wiki/Milgram_experiment
http://en.wikipedia.org/wiki/Stanford_prison_experiment

Top-level goals are fixed by your DNA.
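
In code terms the claim looks something like this; the goal names and
weights are mine, purely illustrative:

    # Two-layer sketch: innate drives are constants, learned goals
    # are weights that experience can rewrite.

    INNATE_DRIVES = {"avoid_pain": 1.0, "avoid_hunger": 1.0}  # fixed by DNA

    learned_goals = {"respect_for_life": 0.9}  # acquired, hence mutable

    def unlearn(goal, pressure):
        # Authority or peer pressure (Milgram, Stanford) can push a
        # learned weight toward zero; nothing edits INNATE_DRIVES.
        learned_goals[goal] = max(0.0, learned_goals[goal] - pressure)

    unlearn("respect_for_life", 0.5)
    print(learned_goals)  # {'respect_for_life': 0.4}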

-- Matt Mahoney, [EMAIL PROTECTED]


