On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>
> On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> > On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> >
> >> An AI implementing CEV doesn't question whether the thing that
> >> humans express they ultimately want is 'good' or not. If it is what
> >> the humans really want, then it is done. There is no thinking about
> >> whether it is really 'good' (except the thinking done by the humans
> >> answering the questions, and the possible simulations/modifications/
> >> whatever of those humans -- and they indeed think that it *is* 'good').
> >
> > If that is how CEV is meant to work, then I object to CEV and reject it.


> If you really had figured out a smart answer to this question, don't
> you think the vastly smarter and more knowledgeable humans of the
> future would agree with you (they would check out what is already
> written on the subject)? And so CEV would automatically converge on
> whatever it is that you have figured out...

This would require 'goodness' to emerge outside of the CEV dynamic, not
as a result thereof. I agree with you.

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar
