On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/26/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>
>> Present questions to humans, construct models of the answers received.
>> Nothing infeasible about this.
>
> Yes - that would be feasible for an advanced AI, but I don't think that is
> how CEV is envisioned.
Well, it is. (Except that the "asking questions" part could conceivably be replaced by more direct study of human brains and what they want, rather than only studying what humans articulate.)

> In addition it would not guarantee friendliness as, say, the views of a
> religious fundamentalist would be considered just as well as mine or that
> of the Dalai Lama. How would these very different views be reconciled?

I don't remember much of what the CEV page said about this (it's a long page, and it's been a long time since I read it, but it said a lot on this topic). Obviously, though, you could for example have people with irreconcilably different views live in different locations so they don't bother each other. (We could even have several simulated, initially identical copies of the Earth, so people could pick between those.)

Also, once all the fundamentalists etc. knew and understood a lot more (e.g. about the psychology of religion and the various fallacies they engage in), and had billions of years to think things over, I do think almost all of them would become a lot more sensible...

--
Aleksei Riikonen - http://www.iki.fi/aleksei