On Feb 18, 2008 7:41 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> In other words you cannot have your cake and eat it too:  you cannot
> assume that this hypothetical AGI is (a) completely able to build its
> own understanding of the world, right up to the human level and beyond,
> while also being (b) driven by an extremely dumb motivation system that
> makes the AGI seek only a couple of simple goals.
>

Great summary, Richard. You should probably write it up. This position
that there is a very difficult problem of friendly AGI and a much
simpler problem of idiotic AGI that still somehow poses a threat is
too easily accepted.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
