On 10/26/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>
> What you get once the dynamic has run its course, is whatever
> convergent answers were obtained on the topic of what humans would
> want. You do not need these answers to set the dynamic in motion. We
> already know the part that we don't want the AGI to kill us!


My one-sentence summary of CEV is: "What would a better me/humanity want?"
Is that in line with your understanding? For an AI to model a 'better'
me/humanity, it would have to know what 'good' is - a definition of good.
That is the circularity I criticize: 'good' defined as 'what a better
me/humanity would want' constitutes the end result of the CEV dynamic,
not a starting value.

> If by "friendly AGI" you refer to a system that has these answers,
> then you *don't* need friendly AGI to start the process.


By friendly AI I mean an AI that does good.

> If by "friendly AGI" you refer to those "pieces of friendliness" that
> you need to set the dynamic in motion, which are that the AGI must not
> kill us, then you *don't* need the answers the dynamic produces in
> order to have "friendly AGI".


In summary, one would need to define 'good' first in order to set the CEV
dynamic in motion; otherwise the AI would not be able to model a better
me/humanity.

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar
