On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
>
> My one-sentence summary of CEV is: "What would a better me/humanity want?"
> Is that in line with your understanding? For an AI to model a 'better'
> me/humanity it would have to know what 'good' is - a definition of good -
> and that is the circularity I criticize: 'good' being equal to 'what a
> better me/humanity would want' constitutes the end result of the CEV
> dynamic - not a starting value.

The AI can just ask all the billions of humans what they think is
'good', and see what the various answers have in common, and to what
degree.
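To make that aggregation step concrete, here is a rough Python
sketch. The toy survey data and the crude string normalisation are
hypothetical stand-ins of my own; a real AI would of course do
something far richer than counting matching strings:

from collections import Counter

def aggregate_answers(answers):
    # Normalise free-text answers crudely, then report each distinct
    # answer with the fraction of respondents who gave it.
    normalised = [a.strip().lower() for a in answers]
    counts = Counter(normalised)
    total = len(normalised)
    return [(answer, n / total) for answer, n in counts.most_common()]

# Hypothetical toy survey:
survey = ["Happiness", "happiness", "reducing suffering",
          "happiness", "freedom"]
for answer, share in aggregate_answers(survey):
    print(f"{share:4.0%}  {answer}")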

The AI would perhaps run a gazillion simulations of humans in
different situations being asked "what is 'good'?", or whatever. And
it could apply to its simulated human brains the various changes
specified in the CEV model, and find out what humans would answer if
they knew more, had more time to think, were more the people they
wished to be (yes, the AI can ask people what they would like to
change about themselves), etc.

No definition cast in stone is needed. Just presenting questions to
humans and to modified humans - preferably simulated ones, so the
process is a lot faster - is enough.
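To illustrate the shape of that loop, here is a toy Python sketch. All
the functions and the numeric "value weights" below are hypothetical
placeholders of my own, not anything specified by CEV itself; a real
system would modify simulated brains, not a dictionary of numbers:

def modify(person):
    # One extrapolation step: drift the person's current values a
    # little toward the values of the person they wish to be.
    values = {v: 0.9 * w + 0.1 * person["ideal"].get(v, 0.0)
              for v, w in person["values"].items()}
    return {"values": values, "ideal": person["ideal"]}

def ask(person):
    # Crude proxy for asking "what is 'good'?": the top-weighted value.
    return max(person["values"], key=person["values"].get)

def extrapolate(person, rounds=50):
    # Iterate the modification enough times for the answer to settle;
    # a real system would need a much subtler convergence criterion.
    for _ in range(rounds):
        person = modify(person)
    return ask(person)

person = {"values": {"comfort": 0.6, "fairness": 0.4},
          "ideal":  {"comfort": 0.2, "fairness": 0.8}}
print(ask(person))           # unextrapolated answer: 'comfort'
print(extrapolate(person))   # extrapolated answer: 'fairness'

The point of the sketch is only that the modified, simulated person
can give a different answer than the original would, without 'good'
ever having been defined in advance.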

> In summary, one would need to define 'good' first in order to set the CEV
> dynamic in motion; otherwise the AI would not be able to model a better
> me/humanity.

Present questions to humans, construct models of the answers received.
Nothing infeasible about this.

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei
