On 10/26/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> You misunderstand. For CEV to be circular, it would be required that
> when extrapolating the wishes of humans, one would end up in a loop.
The reason why I believe CEV to be circular is that it does not define
friendliness prior to having an AGI.

> I'm quite convinced that what I would want, for example, is not
> circular. And I find it rather improbable that many of you other
> humans would end up in a loop either. So CEV is not circular either,
> since it is about finding out / extrapolating what humans would want.

If you need a friendly AGI to find out what humans would want, then you
cannot create a friendly AGI, because you would need a friendly AI to
create a friendly AI = circular.

> > See the following core sentence for example:
> >
> > "...if we knew more, thought faster, were more the people we wished we were,
> > had grown up farther together; where the extrapolation converges rather than
> > diverges, where our wishes cohere rather than interfere; extrapolated as we
> > wish that extrapolated, interpreted as we wish that interpreted..."
> >
> > Simplified: "If we were better people we were better people."
>
> This "simplification" you wrote does not have much anything to do with
> that fragment of a sentence that you claim to be simplifying here.

I don't see why not. Let's take it step by step:

If we knew more = if we were better people
If we thought faster = if we were better people
If we were more the people we wished we were = if we were better people
If we had grown up farther together = if we were better people

And so on... What am I missing?

> > not adding value as key concepts such as 'friendliness', 'good',
> > 'better' and 'benevolence' remain undefined.
>
> Friendliness is defined in the CEV model, by stating that the CEV
> dynamic will be the initial content for it (whereas these other words
> you mention are not independent key concepts in the CEV model). That
> initial dynamic will then extrapolate / find out what humans would
> want, thereby converging on a stable definition of Friendliness.

I see that, but how would an AGI know what is 'better' or 'what we
wanted if' in the absence of us telling it?

> What humans think is 'good' is what the CEV dynamic would find out. We
> don't have to define it when we set the dynamic in motion.

"The CEV dynamic would find out what humans think is good" is something
other than saying "The CEV dynamic would find out what humans would
think is good if they were better people". The first one is possible but
not friendly in the CEV sense, and the second one is circular.

I have now put up my paper condensing my thoughts on friendliness on
www.jame5.com - it would be great to hear your thoughts on it.

Kind regards,

Stefan

--
Stefan Pernar

3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA

Mobile: +86 1391 009 1931
Skype: Stefan.Pernar