On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>
>> So is there actually anything in CEV that you object to?
>>
>> If we use your terminology, in the CEV model 'goodness' *does* emerge
>> "outside" of the dynamic, since 'goodness' is found in the answers the
>> humans give.
>
> Oh sure - all my previous objections: circularity, 'goodness' not inherent
> in CEV still stand.

This is getting ridiculous. As repeatedly stated in this discussion,
there is nothing circular about a sequence of steps of the following
sort:

(1) A superintelligent AI is created, designed so that it doesn't
start killing humans or doing other nasty things.
(2) The one thing the AI *does* do is start asking humans what they
want to be done. As part of this asking process, humans (and/or
simulated copies of them) are made smarter and more knowledgeable, and
are changed in whatever other ways (if any) they themselves want to be
changed.
(3) Eventually the process of the humans (and/or their copies) getting
significantly smarter, more knowledgeable etc. stops, and the answers
the humans give to questions such as "what should be done?" and "what
is 'good'?" become stable.
(4) Then the AI simply does what the humans want done. Happy ending,
no circular loop.
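To put the same point another way: steps (2)-(3) describe a fixed-point iteration, not a circular definition, because the process terminates once the answers stop changing. Here is a purely illustrative toy sketch (every name in it is hypothetical, and "extrapolation" is stood in for by a trivial function):

```python
# Toy illustration: the loop in steps (2)-(3) is a fixed-point
# iteration. It terminates when the humans' answers stop changing,
# at which point step (4) can act on the stabilized answers.

def extrapolate(answers):
    """Hypothetical stand-in for 'make the humans smarter and more
    knowledgeable, then ask again'. Here it just rounds each value,
    so repeated application quickly reaches a fixed point."""
    return {question: round(value, 2) for question, value in answers.items()}

def query_until_stable(answers, max_rounds=100):
    """Repeat the asking process until the answers become stable (step 3)."""
    for _ in range(max_rounds):
        updated = extrapolate(answers)
        if updated == answers:  # answers are stable: no circularity, just a fixed point
            return answers
        answers = updated
    raise RuntimeError("answers did not stabilize")

# Step (4): the AI acts only on the stabilized answers.
volition = query_until_stable({"what should be done?": 0.123456})
```

The design point is that nothing in the loop refers to its own output while it is still running; 'goodness' is read off only after convergence.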

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei

-----
This list is sponsored by AGIRI: http://www.agiri.org/email