On 03/06/2008 08:32 AM, Matt Mahoney wrote:
--- Mark Waser <[EMAIL PROTECTED]> wrote:
And thus, we get back to a specific answer to jk's second question.  "*US*"
should be assumed to apply to any sufficiently intelligent goal-driven
intelligence.  We don't need to define "*us*" because I DECLARE that it
should be assumed to include current day humanity and all of our potential
descendants (specifically *including* our Friendly AIs and any/all other
"mind children" and even hybrids).  If we discover alien intelligences, it
should apply to them as well.

... snip ...

- Killing a dog to save a human life is friendly because a human is more
intelligent than a dog.

... snip ...

Mark said that the objects of the AI's concern are "any sufficiently intelligent goal-driven intelligence[s]", but did not say whether, or how, the AI would weight different levels of intelligence. So his declaration does not yet imply that killing some number of dogs to save a human is friendly; that depends entirely on the weighting rule, as the sketch below illustrates.
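
To make the ambiguity concrete, here is a minimal Python sketch under purely illustrative assumptions; the threshold, the intelligence scores, and both weighting rules below are my own inventions, not anything Mark has proposed:

    # Illustrative only: two ways an AI might weight minds by intelligence.
    THRESHOLD = 10  # assumed minimum score for "sufficiently intelligent"

    def moral_weight(score):
        # Assumed rule: zero weight below the threshold, linear above it.
        return max(0, score - THRESHOLD)

    human, dog = 100, 15  # made-up intelligence scores

    # Rule 1: summed linear weights. Some finite number of dogs always
    # outweighs one human -- here, 90 / 5 = 18 of them.
    print(moral_weight(human) / moral_weight(dog))  # 18.0

    # Rule 2: lexical priority. The single most intelligent party prevails,
    # so no number of dogs ever outweighs one human.
    def lexical_outweighs(saved, killed):
        return max(saved) > max(killed)

    print(lexical_outweighs([human], [dog] * 1000))  # True

Under the summed rule, killing 17 dogs to save one human is friendly and killing 19 is not; under the lexical rule, any number of dog deaths is justified by one human life. Which rule (if either) Mark intends is exactly what I am asking below.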

Mark, how do you intend the AI to handle its friendliness obligations towards vastly different levels of intelligence (above the threshold, of course)?

joseph

