On Friday 07 March 2008 05:13:17 pm, Matt Mahoney wrote:
> How does an agent know if another agent is Friendly or not, especially if
> the other agent is more intelligent?

See Beyond AI, pp. 331-2. What's needed is a form of open source plus provable 
reliability guarantees. This would have to be worked out in great detail by 
the AIs themselves, but it would clearly be a very valuable thing for two AIs 
to be able to exchange trustability guarantees as part of a contract, so if 
we can't figure out how to do it, they probably will.
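
To make that concrete, here is a minimal toy sketch in Python of the 
exchange. Every name in it is hypothetical; HMAC over a shared key merely 
stands in for real public-key signatures, and a hash check over published 
source stands in for actual machine-checked proofs of behavior:

    # Toy sketch: agents exchange signed guarantees tied to open source.
    # HMAC + shared key stand in for real signatures; a source-hash check
    # stands in for genuine proof-checking of the code's behavior.
    import hmac
    import hashlib
    import json
    from dataclasses import dataclass

    @dataclass
    class Guarantee:
        agent_id: str
        claim: str          # e.g. "this source implements policy P"
        source_hash: str    # hash of the open source being vouched for
        signature: bytes = b""

        def payload(self) -> bytes:
            return json.dumps(
                {"agent_id": self.agent_id, "claim": self.claim,
                 "source_hash": self.source_hash},
                sort_keys=True).encode()

    def sign_guarantee(g: Guarantee, key: bytes) -> Guarantee:
        # The offering agent binds itself to its published claim.
        g.signature = hmac.new(key, g.payload(), hashlib.sha256).digest()
        return g

    def verify_guarantee(g: Guarantee, key: bytes, source: bytes) -> bool:
        # The counterparty checks (1) the signature and (2) that the
        # open source on offer matches the hash the guarantee vouches for.
        sig_ok = hmac.compare_digest(
            g.signature, hmac.new(key, g.payload(), hashlib.sha256).digest())
        source_ok = hashlib.sha256(source).hexdigest() == g.source_hash
        return sig_ok and source_ok

    # Usage: agent A publishes source plus a signed guarantee; agent B
    # verifies both before entering the contract.
    key = b"shared-demo-key"   # stand-in for real PKI
    source = b"def act(): return 'cooperate'"
    g = sign_guarantee(
        Guarantee("agent-A", "source implements policy P",
                  hashlib.sha256(source).hexdigest()), key)
    assert verify_guarantee(g, key, source)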

As for people, it seems likely that it would be very valuable (both for the 
individual and for the rest of society) for each person to have a "Jeeves" AI 
which helps him navigate the complexities of AI society (parsing guarantees, 
for example) and also guarantees the human's behavior (and acts to enforce 
those guarantees if necessary). A rough sketch of that role follows.
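
Continuing the toy above (same caveats: all names are hypothetical), a 
"Jeeves" could be a wrapper that vets counterparties' guarantees on the 
human's behalf and enforces the human's own published guarantee:

    # Hypothetical "Jeeves" built on the Guarantee sketch above.
    class Jeeves:
        def __init__(self, human_id: str, key: bytes, allowed_actions: set):
            self.human_id = human_id
            self.key = key
            # The behavior this Jeeves has guaranteed for its human.
            self.allowed_actions = allowed_actions

        def vet_counterparty(self, g: Guarantee, source: bytes) -> bool:
            # Parse and check a counterparty's guarantee before the
            # human relies on it.
            return verify_guarantee(g, self.key, source)

        def act_for(self, action: str) -> str:
            # Enforce the human's own guarantee: refuse anything
            # outside the guaranteed behavior.
            if action not in self.allowed_actions:
                raise PermissionError(
                    f"{self.human_id} guaranteed not to {action!r}")
            return f"{self.human_id} performs {action!r}"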

Josh
