Aleksei Riikonen wrote:
On 10/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
As I said before, it is not that evaluation of the CEV is somehow
impossible; it is the idea that *doing* *so* is the solution to the
friendliness problem.

No one has presented such an idea; you are unable to shake off your
misunderstandings of what the CEV page says. In that page's
formulation, the difficult part of the problem is located in part 3 of
the enumeration presented at the beginning of the page, and CEV has
nothing to do with solving that (it may even be impossible to solve
with Eliezer's preferred approach, but that is a completely separate
discussion from CEV).

CEV is just about asking: "if we had a safe, very intelligent
AI/VPOP, what *exactly* would we want to do with it?" CEV is no more a
circular answer to this question than, e.g., "Let's just make the AI so
that it obeys every word of [lead programmer]."


Then I stand corrected and withdraw my comment about CEV's use for friendliness. My mistake.


Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email