On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/26/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>
>> I'm quite convinced that what I would want, for example, is not
>> circular. And I find it rather improbable that many of you other
>> humans would end up in a loop either. So CEV is not circular either,
>> since it is about finding out / extrapolating what humans would want.
>
> If you need a friendly AGI to find out what humans would want, then you
> cannot create a friendly AGI, because you would need a friendly AI to
> create a friendly AI = circular.

I don't know what you mean by "friendly AI" here, but it obviously
isn't what is meant by this term in the CEV model, since in CEV you
*don't* need whatever-the-dynamic-converges-to when you set the
dynamic in motion.

Before you set the dynamic in motion, you just need an AI that is

(1) smart enough to do the intellectual work required (this is
independent of friendliness; it's just the ability to study humans, etc.), and
(2) constrained such that it doesn't do anything very nasty, like kill humans.


What you get once the dynamic has run its course is whatever
convergent answers were obtained on the topic of what humans would
want. You do not need these answers to set the dynamic in motion. The
part we already know is that we don't want the AGI to kill us!

If by "friendly AGI" you refer to a system that has these answers,
then you *don't* need friendly AGI to start the process.

If by "friendly AGI" you refer to those "pieces of friendliness" that
you need to set the dynamic in motion, which are that the AGI must not
kill us, then you *don't* need the answers the dynamic produces in
order to have "friendly AGI".


It is a challenge to create a very smart AGI that we can be very sure
doesn't kill us or do other nasty things. This part of the Friendly AI
problem needs to be solved prior to trying to set the convergent
dynamic in motion, just as is stated on the CEV page. Look at the
beginning of the page: it enumerates one-two-three *separate*
challenges that have to be solved:

http://www.singinst.org/upload/CEV.html

Number 2 is what the convergent dynamic finds out, and is *not* needed
at any prior point. Number 3 is the difficult part of Friendly AI that
is needed prior to setting the dynamic in motion. CEV has nothing to
do with solving number 3.


(I'm optimistic that this is sufficient to clear up all of the
misunderstanding, so I'm not explicitly commenting on the other parts
of your message here.)

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei
