Re: [singularity] Re: CEV

2007-10-27 Thread Benjamin Goertzel
> In other words: if we ever get to a point where the model advocated by
> Stefan Pernar could be implemented, we are at a point where
> implementing CEV is also possible!

This is not necessarily true ... IMO this statement involves an excessive confidence regarding the relative capabilities ...

[singularity] Re: CEV

2007-10-27 Thread Aleksei Riikonen
On 10/27/07, Samantha Atkins <[EMAIL PROTECTED]> wrote:
> On Oct 27, 2007, at 1:55 AM, Aleksei Riikonen wrote:
>> You seem to have a need to personally give a final answer to "What
>> is 'good'?" -- an answer to what moral rules the universe should be
>> governed by. If you think that your answer ...

Re: [singularity] Re: CEV

2007-10-27 Thread Benjamin Goertzel
Samantha, I tend to agree with you that CEV is not a currently directly useful train of thought... But there is the possibility that -- like many other not-necessarily-realistic thought experiments -- it stimulates thinking in different directions that a stricter adherence-to-realism might not.

Re: [singularity] Re: CEV

2007-10-27 Thread Samantha Atkins
On Oct 27, 2007, at 1:55 AM, Aleksei Riikonen wrote:
> On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
>> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>>> You seem to have a need to personally give a final answer to "What
>>> is 'good'?" -- an answer to what moral rules the universe should be ...

[singularity] Re: CEV

2007-10-27 Thread Aleksei Riikonen
On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>> You seem to have a need to personally give a final answer to "What is
>> 'good'?" -- an answer to what moral rules the universe should be
>> governed by. If you think that your answer ...

Re: [singularity] Re: CEV

2007-10-27 Thread Stefan Pernar
On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> You seem to have a need to personally give a final answer to "What is
> 'good'?" -- an answer to what moral rules the universe should be
> governed by. If you think that your answer is better than what the
> "surveying" process that CEV ...

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>> This is getting ridiculous. As repeatedly stated in this discussion,
>> there is nothing circular about a sequence of steps of the following
>> sort:
>>
>> (1) A superintelligent AI ...

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
>> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>>> So is there actually anything in CEV that you object to?
>>
>> Oh sure - all my previous objections: circularity ...

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>> So is there actually anything in CEV that you object to?
>>
>> If we use your terminology, in the CEV model 'goodness' *does* emerge
>> "outside" of the dynamic, since 'goodness' is found in the answers the
>> humans give. ...

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> So is there actually anything in CEV that you object to?
>
> If we use your terminology, in the CEV model 'goodness' *does* emerge
> "outside" of the dynamic, since 'goodness' is found in the answers the
> humans give.

Oh sure - all my previous objections: circularity ...

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>> An AI implementing CEV doesn't question whether the thing that humans
>> express they ultimately want is 'good' or not. If it is what the
>> humans really want, then it is done ...

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
>> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>>> An AI implementing CEV doesn't question whether the thing that humans
>>> express they ultimately want is 'good' or not ...

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>> An AI implementing CEV doesn't question whether the thing that humans
>> express they ultimately want is 'good' or not. If it is what the
>> humans really want, then it is done. ...

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> No. For me to think that "what I would want to be" is 'good', I do not
> have to think that I am 'good' right now.
>
> An AI implementing CEV doesn't question whether the thing that humans
> express they ultimately want is 'good' or not. ...

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/26/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
>> X0 = me
>> Y0 = what X0 thinks is good for the world
>> X1 = what X0 wants to be
>> Y1 = what X1 would think is good for the world
>> X2 = what X1 would want to be
>> Y2 = what X2 would think is good for the world
>> ...
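
[Editor's note: the X/Y sequence above is, at bottom, an iterated map that is hoped to converge. A minimal Python sketch of that iteration follows; the `improve` and `ideal_for_world` callables and the fixed-point stopping test are illustrative assumptions standing in for judgments only a real extrapolation process could make, not anything specified by the CEV proposal itself.]

    def extrapolate(x0, improve, ideal_for_world, max_steps=100):
        # Iterate Goertzel's sequence: X_{n+1} = what X_n wants to be,
        # Y_n = what X_n would think is good for the world.
        # Stop at a fixed point (improving X changes nothing) or after
        # max_steps. Both callables are hypothetical stand-ins.
        x = x0
        for _ in range(max_steps):
            x_next = improve(x)
            if x_next == x:                  # converged: further steps change nothing
                return x, ideal_for_world(x)
            x = x_next
        return x, ideal_for_world(x)         # budget exhausted; return last iterate

    # Toy usage: a "self" is a number, improvement steps it toward 10,
    # and the endorsed "good" is a simple function of the current self.
    final_self, final_good = extrapolate(
        x0=0,
        improve=lambda x: min(x + 1, 10),
        ideal_for_world=lambda x: "the world as judged by self %d" % x,
    )
    # final_self == 10; final_good == "the world as judged by self 10"

[Whether the real sequence converges at all is exactly the point under dispute in this thread.]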

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/26/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>> Present questions to humans, construct models of the answers received.
>> Nothing infeasible about this.
>
> Yes - that would be feasible for an advanced AI but I don't think that ...
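
[Editor's note: for concreteness, a minimal sketch of the "present questions, construct models of the answers" step Riikonen describes, assuming a hypothetical `ask(human, question)` callable that returns a reply; the per-question tally is a deliberately crude stand-in for the much richer models an advanced AI would build.]

    from collections import Counter

    def survey(humans, questions, ask):
        # Present each question to each human and keep a simple model of
        # the answers received: a per-question tally of the replies.
        model = {q: Counter() for q in questions}
        for human in humans:
            for q in questions:
                model[q][ask(human, q)] += 1
        return model

    # Toy usage with canned replies standing in for real humans:
    replies = {"alice": "less suffering", "bob": "less suffering"}
    model = survey(
        humans=["alice", "bob"],
        questions=["What do you ultimately want?"],
        ask=lambda human, question: replies[human],
    )
    # model["What do you ultimately want?"] == Counter({"less suffering": 2})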

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Aleksei Riikonen wrote:
> On 10/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> As I said before, it is not that evaluation of the CEV is somehow
>> impossible, it is the idea that *doing* *so* is the solution to the
>> friendliness problem.
>
> No one has presented such an idea, you are unable to shake ...

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> As I said before, it is not that evaluation of the CEV is somehow
> impossible, it is the idea that *doing* *so* is the solution to the
> friendliness problem.

No one has presented such an idea, you are unable to shake off your
misunderstanding ...

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Benjamin Goertzel wrote:
>> So a VPOP is defined to be a safe AGI. And its purpose is to solve the
>> problem of building the first safe AGI...
>
> No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a
> goal of carrying out a certain kind of extrapolation ...
>
> What you are doubting, ...

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Stathis Papaioannou wrote:
> On 26/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> If you build an AGI, and it sets out to discover the convergent desires
>> (the CEV) of all humanity, it will be doing this because it has the goal
>> of using this CEV as the basis for the "friendly" motivations that will
>> henceforth guide ...

Re: [singularity] Re: CEV

2007-10-26 Thread Benjamin Goertzel
> So a VPOP is defined to be a safe AGI. And its purpose is to solve the
> problem of building the first safe AGI...

No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a
goal of carrying out a certain kind of extrapolation ...

What you are doubting, perhaps, is that it is possible ...

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Benjamin Goertzel wrote:
> Loosemore wrote:
>> But WHY would it be collecting the CEV of humanity in the first phase of
>> the operation? What would motivate it to do such a thing? What exactly
>> is it in the AGI's design that makes it feel compelled to be friendly
>> enough toward humanity ...

Re: [singularity] Re: CEV

2007-10-26 Thread Stathis Papaioannou
On 26/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> If you build an AGI, and it sets out to discover the convergent desires
> (the CEV) of all humanity, it will be doing this because it has the goal
> of using this CEV as the basis for the "friendly" motivations that will
> henceforth guide ...

Re: [singularity] Re: CEV

2007-10-26 Thread Benjamin Goertzel
Loosemore wrote:
> But WHY would it be collecting the CEV of humanity in the first phase of
> the operation? What would motivate it to do such a thing? What exactly
> is it in the AGI's design that makes it feel compelled to be friendly
> enough toward humanity that it would set out to assess the ...

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Stefan Pernar wrote:
> On 10/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Stefan can correct me if I am wrong here, but I think that both yourself
>> and Aleksei have misunderstood the sense in which he is pointing to a
>> circularity.
>>
>> If you build ...

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Stefan can correct me if I am wrong here, but I think that both yourself
> and Aleksei have misunderstood the sense in which he is pointing to a
> circularity.
>
> If you build an AGI, and it sets out to discover the convergent desires ...

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/26/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
>> My one sentence summary of CEV is: "What would a better me/humanity
>> want?" Is that in line with your understanding?
>
> No... I'm not sure I fully grok ...

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Benjamin Goertzel wrote:
> On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
>> My one sentence summary of CEV is: "What would a better me/humanity
>> want?" Is that in line with your understanding?
>
> No... I'm not sure I fully grok Eliezer's ...

Re: [singularity] Re: CEV

2007-10-26 Thread BillK
On 10/26/07, Benjamin Goertzel wrote:
> My understanding is that it's more like this (taking some liberties):
>
> X0 = me
> Y0 = what X0 thinks is good for the world
> X1 = what X0 wants to be
> Y1 = what X1 would think is good for the world
> X2 = what X1 would want to be
> Y2 = what X2 would think is good for the world
> ...

Re: [singularity] Re: CEV

2007-10-26 Thread Benjamin Goertzel
On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> My one sentence summary of CEV is: "What would a better me/humanity
> want?" Is that in line with your understanding?

No... I'm not sure I fully grok Eliezer's intentions/ideas, but I will
summarize here the current idea I have ...

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/26/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
>> In summary one would need to define good first in order to set the CEV
>> dynamic in motion, otherwise the AI would not be able to model a better
>> me/humanity.
>
> Present questions ...

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> My one sentence summary of CEV is: "What would a better me/humanity want?"
> Is that in line with your understanding? For an AI to model a 'better'
> me/humanity it would have to know what 'good' is - a definition of good -
> and that is the ...

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/26/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> What you get once the dynamic has run its course is whatever convergent
> answers were obtained on the topic of what humans would want. You do not
> need these answers to set the dynamic in motion. We already know the part
> that we don't ...

[singularity] Re: CEV

2007-10-25 Thread Aleksei Riikonen
On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/26/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>> I'm quite convinced that what I would want, for example, is not
>> circular. And I find it rather improbable that many of you other
>> humans would end up in a loop either. So CEV is ...

Re: [singularity] Re: CEV

2007-10-25 Thread Stefan Pernar
On 10/26/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
> You misunderstand. For CEV to be circular, it would be required that
> when extrapolating the wishes of humans, one would end up in a loop.

The reason I believe CEV to be circular is that it does not define
friendliness prior to ...

[singularity] Re: CEV

2007-10-25 Thread Aleksei Riikonen
On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> CEV is a concept that avoids answering the question of what friendliness
> is by letting an advanced AI figure out what good might be. Doing so makes
> endowing an AI implementation with friendliness not feasible. CEV is
> circular.

You misunderstand. ...