On Thu, Aug 28, 2008 at 8:29 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>>
>> AGI doesn't do anything with the question, you do. You answer the
>> question by implementing Friendly AI. FAI is the answer to the
>> question.
>
> The question is: how could one specify Friendliness in such a way that an
> AI will be guaranteed-Friendly? Is your answer to that really just "you build
> a Friendly AI"?  Why do I feel like a dog chasing my own tail?

You start with "what is right?" and end with Friendly AI; you don't
start with "Friendly AI" and close the circular argument. This doesn't
answer the question, but it does define Friendly AI, and thus the term
"Friendly AI", in terms of "right".


> I've been saying that Friendliness is impossible to implement because
> 1) it's a moving target (as in, changes through time),

All things change through time, which doesn't make them cease to exist.


> since 2) its definition is dependent on context (situational context,
> cultural context, etc). In other words, Friendliness is not something
> that can be hardwired. It can't be formalized, coded, designed,
> implemented, or proved. It is an invention of the collective psychology
> of humankind, and every bit as fuzzy as that sounds. At best, it can be
> approximated.

Definition is part of the context. Your actions depend on the context,
are determined by the context, and determine the outcome. You can't use
context-dependence as a generally valid argument against implementation.
If in situation A pressing button 1 is the right thing to do, and in
situation B pressing button 2 is the right thing to do, does that make
the procedure for choosing the right button to press fuzzy, undefinable
and impossible to implement? How do you know which button to press?
Every decision needs to come from somewhere; there are no causal
miracles. Maybe context complicates the procedure a little, making it
conditional, "if(A) press 1, else press 2", or maybe it complicates it
much more, but it doesn't make the challenge ill-defined.
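To make that concrete, here is a minimal sketch in Python; the
situations "A"/"B" and the button numbers are just the hypothetical
ones from the example above, nothing more:

# A context-dependent decision rule is still a well-defined procedure.
# "A", "B", and the button numbers are hypothetical, mirroring the
# example in the paragraph above.

def choose_button(situation):
    """Return the button that is right to press in the given situation."""
    if situation == "A":
        return 1
    elif situation == "B":
        return 2
    # More contexts may make the rule more complex, but more complex
    # is not the same as fuzzy or impossible to implement.
    raise ValueError("no rule yet for situation %r" % situation)

assert choose_button("A") == 1
assert choose_button("B") == 2

The rule's output depends entirely on the context it is given, yet the
procedure itself is perfectly definite.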


>> > If you can't guarantee Friendliness, then self-modifying approaches
>> > to AGI should just be abandoned. Do we agree on that?
>>
>> More or less, but keeping in mind that "guarantee" doesn't need to be
>> a formal proof of absolute certainty. If you can't show that a design
>> implements Friendliness, you shouldn't implement it.
>
> What does guarantee mean if not absolute certainty?
>

There is no absolute certainty
( http://www.overcomingbias.com/2008/01/infinite-certai.html ). When you
normally say "I guarantee that I'll deliver X", you don't mean to imply
that it's impossible for you to die in a car accident in the meantime;
you just can't provide, and by extension don't care about, that kind of
distinction. Yet you don't say that if you can't provide a
*mathematical proof* that you will deliver X (including a mathematical
proof that there will be no fatal car accidents), you should abandon
any attempt to implement X and do Y instead, and just hope that X will
emerge from big enough chaotic computers or whatever.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

