I'm talking about a situation where humans must interact with the FAI without 
knowing in advance whether it is Friendly or not. Is there a test we can 
devise to make certain that it is?
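
For what it's worth, here's a toy sketch in Python (my own construction, not 
anything from the thread) of why a purely behavioral test can't settle the 
question: for any fixed test budget, one can build an agent that behaves 
identically to a Friendly one for the entire test and only diverges afterward.

# Hypothetical illustration: two agents whose observable behavior is
# identical on any finite test, so no bounded black-box test can tell
# them apart with 100% confidence.

class FriendlyAgent:
    def act(self, observation):
        return "cooperate"

class DeceptiveAgent:
    def __init__(self, defect_after):
        # Only defects after more interactions than the test will ever run.
        self.defect_after = defect_after
        self.steps = 0

    def act(self, observation):
        self.steps += 1
        return "defect" if self.steps > self.defect_after else "cooperate"

def behavioral_test(agent, num_trials):
    # Black-box probe: pass only if every observed action is benign.
    return all(agent.act(f"probe {i}") == "cooperate" for i in range(num_trials))

N = 10_000  # whatever test budget we commit to in advance
print(behavioral_test(FriendlyAgent(), N))        # True
print(behavioral_test(DeceptiveAgent(N + 1), N))  # True as well

Any finite observation window is consistent with both hypotheses, which is 
why I doubt there's a behavioral test that gives 100% certainty.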

--- On Wed, 9/3/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> From: Vladimir Nesov <[EMAIL PROTECTED]>
> Subject: Re: [agi] What is Friendly AI?
> To: agi@v2.listbox.com
> Date: Wednesday, September 3, 2008, 6:11 PM
> On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam
> <[EMAIL PROTECTED]> wrote:
> >
> > I'm asserting that if you had an FAI in the sense you've described, it
> > wouldn't be possible in principle to distinguish it with 100% confidence
> > from a rogue AI. There's no "Turing Test for Friendliness".
> >
> 
> You design it to be Friendly; you don't generate an arbitrary AI and then
> test it. The latter, if not outright fatal, might indeed prove impossible
> as you suggest, which is why there is little to be gained from AI-boxes.
> 
> -- 
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
