On Fri, Mar 7, 2008 at 5:24 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>  The core of my thesis is that the
>  particular Friendliness that I/we are trying to reach is an "attractor" --
>  which means that if the dominant structure starts to turn unfriendly, it is
>  actually a self-correcting situation.
>

This sounds like magical thinking, sweeping the problem under the rug
with the word 'attractor'. And even if this trick somehow works, it
doesn't actually address the problem of Friendly AI. The problem with
unfriendly AI is not that it turns "selfish", but that it doesn't do
what we want from it, or can't foresee the consequences of its actions
in sufficient detail.

If you already have a system (in the lab) that is smart enough to
support your code of friendliness and not crash good old humanity
through oversight by the year 2500, you should be able to make it
produce another system that works with unfriendly humanity, doesn't
have its own agenda, and so on.

P.S. I'm just starting to fundamentally revise my attitude toward the
problem of friendliness; see my post "Understanding the problem of
friendliness" on SL4.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
