On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>
>> An AI implementing CEV doesn't question whether the thing that humans
>> express that they ultimately want is 'good' or not. If it is what the
>> humans really want, then it is done. No thinking about whether it is
>> really 'good' (except the thinking done by the humans answering the
>> questions, and the possible simulations/modifications/whatever of
>> those humans -- and they indeed think that it *is* 'good').
>
> If that is how CEV is meant to work, then I object to CEV and reject it.
No need to ask vastly smarter and more knowledgeable humans "what is
'good'?", since you have already figured out a be-all-end-all answer, huh?

If you really had figured out a smart answer to this question, don't you
think the vastly smarter and more knowledgeable humans of the future would
agree with you (they would check out what is already written on the
subject)? And so CEV would automatically converge on whatever it is that
you have figured out...

--
Aleksei Riikonen - http://www.iki.fi/aleksei

-----
This list is sponsored by AGIRI: http://www.agiri.org/email