[agi] What is Friendly AI?

2008-08-30 Thread Vladimir Nesov
On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
> --- On Sat, 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
>> You start with "what is right?" and end with
>> Friendly AI, you don't
>> start with "Friendly AI" and close the circular
>> argument. This doesn't
>> answer the question, but it defines Friendly AI and thus
>> "Friendly AI"
>> (in terms of "right").
>
> In your view, then, the AI never answers the question "What is right?".
> The question has already been answered in terms of the algorithmic process
> that determines its subgoals in terms of Friendliness.

There is a symbolic string "what is right?" and what it refers to, the
thing that we are trying to instantiate in the world. The whole
process of answering the question is the meaning of life; it is what
we want to do for the rest of eternity (it is roughly a definition of
"right" rather than an over-the-top extrapolation from it). It is an
immensely huge object, and we know very little about it, just as we know
very little about the shape of the Mandelbrot set from the formula that
defines it, even though the whole set unfolds from that little formula.
What's worse, we don't know how to safely establish the dynamics for
answering this question: we don't know the formula, we only know the
symbolic string "formula", to which we assign some fuzzy meaning.

There is no final answer, and no formal question, so I use
question-answer pairs to describe the dynamics of the process, which
flows from question to answer, and the answer is the next question,
which then follows to the next answer, and so on.

With Friendly AI, the process begins with the question a human asks
himself, "what is right?". From this question follows a technical
solution, the initial dynamics of Friendly AI, which is a device for making
the next step: for initiating the transfer of the dynamics of "right" from
humans into a more reliable and powerful form. In this sense, Friendly AI
answers the question of "right" by being the next step in the process.
But the initial FAI doesn't embody the whole dynamics; it only references
it in the humans and learns to gradually transfer it, to embody it. The
initial FAI doesn't contain the content of "right", only the structure
to absorb it from humans.
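
(A toy caricature of "structure without content", in Python -- this only
illustrates the distinction, it is not a description of how an actual FAI
would elicit values; the class and the averaging rule are invented for the
example.)

    # Toy caricature: the agent ships with no value judgments of its own,
    # only machinery for collecting and summarizing human ones.
    from collections import defaultdict

    class ValueAbsorber:
        def __init__(self):
            self.judgments = defaultdict(list)   # content arrives only from humans

        def ask_human(self, situation: str, human_judgment: float):
            """Record a human's judgment of a situation, in [-1, 1]."""
            self.judgments[situation].append(human_judgment)

        def estimate_right(self, situation: str):
            """Return the current (revisable) estimate, or None if never asked."""
            scores = self.judgments[situation]
            return sum(scores) / len(scores) if scores else None

    agent = ValueAbsorber()
    agent.ask_human("keep promises", +0.9)
    agent.ask_human("keep promises", +0.8)
    print(agent.estimate_right("keep promises"))   # ~0.85, learned, not built in
    print(agent.estimate_right("novel dilemma"))   # None: no content yet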

Of course, this is a simplification; there are all kinds of
difficulties. For example, the whole endeavor needs to be safeguarded
against mistakes made along the way, including mistakes made
before the idea of implementing FAI appeared, mistakes in the everyday
design work that went into FAI, mistakes in the initial stages of training,
and mistakes in moral decisions about what "right" means. The initial
FAI, when it grows up sufficiently, needs to be able to look back and
see why it turned out the way it did: was it because it was
intended to have property X, or because of some kind of arbitrary
coincidence? Was property X intended for valid reasons, or
because programmer Z was in a bad mood that morning? Unfortunately,
there is no objective morality, so FAI needs to be made good enough
from the start to eventually be able to recognize what is valid and
what is not, reflectively looking back at its origin, with all the
depth of factual information and the optimization power to run whatever
factual queries it needs.

I (vainly) hope this answered (at least some of the) other questions as well.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] What is Friendly AI?

2008-08-31 Thread Steve Richfield
Vladimir,

At great risk of stepping in where angels fear to tread...

This is an IMPORTANT discussion which several others have attempted to
start, but for which there is an entrenched self-blinding minority
(majority?) here who fail to see its EXTREME value. I believe that answers
to some of these questions should guide AGI development but probably will
fail to do so, resulting in most of the future harm that AGIs may do. In
short, the danger is NOT in AGI itself, but in the willful ignorance of its
developers, which may also be your concern. "Protective" mechanisms to
restrict their thinking and action will only make things WORSE.

My own present opinion varies slightly from yours, in that I believe that
even if a (supposedly) Friendly AI could be developed, it would only become
the tool for our own self-destruction, given present illogical prejudices
about what is "right", even when it is in direct conflict with "best".

Continuing with comments...

On 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam <[EMAIL PROTECTED]>
> wrote:
> > --- On Sat, 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> >
> >> You start with "what is right?" and end with
> >> Friendly AI, you don't
> >> start with "Friendly AI" and close the circular
> >> argument. This doesn't
> >> answer the question, but it defines Friendly AI and thus
> >> "Friendly AI"
> >> (in terms of "right").
> >
> > In your view, then, the AI never answers the question "What is right?".
> > The question has already been answered in terms of the algorithmic
> process
> > that determines its subgoals in terms of Friendliness.


I might be interested in an AGI that was working for "best", but I would be
the first to swing a sledgehammer at one that was working for "right".
Where "right" and "best" differ, SOMETHING is wrong and it must be
understood before it causes great damage. Usually, it is "right" that is
wrong, but how do you convince a religious constituency that their God-given
book is wrong in SO many ways?

Curiosity: Aside from its content, the New Testament is arguably the
worst-written religious book now in common use. For example, where most other
religious books present clear instruction, the New Testament is full of
"parables" that have many possible interpretations. The only way anyone
could claim such a poorly written work to be "from God" is through ignorance
of other works. Amazingly, it has survived for ~2,000 years and continues
to be the prevailing standard for "right", flaws and all.

However, "right" as presently narrowly defined by subgoals more closely
equals "emasculated".

> There is a symbolic string "what is right?" and what it refers to, the
> thing that we are trying to instantiate in the world.


Different groups have different goals. For an in-your-face outrageous example,
a major reason that most people now die during their second half-century is
our quaint social practice of pairing like-aged couples, which removes
all Darwinian pressure to evolve into longer-lived individuals. If
we were to socially enforce the pairing of young and old individuals, we could
reverse this. Of course, there aren't as many old individuals as there are
young, so the "pairing" would have to include more young people. Of course,
all this flies TOTALLY in the face of all prevailing shitforbrains
religions, even though it was originally practiced by Abraham. Any
"friendly" AGI would work AGAINST extending lifespan in such ways, because
its subgoals would prohibit it from working against the younger majority to
restrict their freedom of choice to live with the mates they prefer.

Of course, there are always the "best" advocates (me among them) whose AGI
would possibly be more like Colossus. BTW, has anyone here read the 2nd and
3rd books in the Colossus trilogy yet? They reverse some of the lessons of
the first book/movie that people often comment on here.

> The whole
> process of  answering the question is the meaning of life, it is what
> we want to do for the rest of eternity (it is roughly a definition of
> "right" rather than over-the-top extrapolation from it).


IMHO, a primary reason for an AGI is to see past present human prejudices
and make better decisions, which greatly favors "best" over "right". Indeed,
this uses "best" to discover the errors in "right", whereas you would
apparently attempt to work the other way around.

> It is an
> immensely huge object, and we know very little about it, like we know
> very little about the form of a Mandelbrot set from the formula that
> defines it, even though it entirely unfolds from this little formula.
> What's worse, we don't know how to safely establish the dynamics for
> answering this question, we don't know the formula, we only know the
> symbolic string, "formula", that we assign some fuzzy meaning to.


What we DO have is a world full of different societies that have DIFFERENT
problems. We could easily learn from them how to

Re: [agi] What is Friendly AI?

2008-08-31 Thread Eric Burton
I totally agree with this guy. I don't want to be accused of going too
far myself but I think he's being too conservative.

On 8/31/08, Steve Richfield <[EMAIL PROTECTED]> wrote:
> Vladimir,
>
> At great risk of stepping in where angels fear to tread...
>
> This is an IMPORTANT discussion which several others have attempted to
> start, but for which there is an entrenched self-blinding minority
> (majority?) here who fail to see its EXTREME value. I believe that answers
> to some of these questions should guide AGI development but probably will
> fail to do so, resulting in most of the future harm that AGIs may do. In
> short, the danger is NOT in AGI itself, but in the willful ignorance of its
> developers, which may also be your concern. "Protective" mechanisms to
> restrict their thinking and action will only make things WORSE.
>
> My own present opinion varies slightly from yours, in that I believe that
> even if a (supposedly) FAI could be developed, that it would only become the
> tool for our own self-destruction, given present illogical prejudices about
> what is "right", even when it is in direct conflict with "best".
>
> Continuing with comments...
>
> On 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>>
>> On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam <[EMAIL PROTECTED]>
>> wrote:
>> > --- On Sat, 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> >
>> >> You start with "what is right?" and end with
>> >> Friendly AI, you don't
>> >> start with "Friendly AI" and close the circular
>> >> argument. This doesn't
>> >> answer the question, but it defines Friendly AI and thus
>> >> "Friendly AI"
>> >> (in terms of "right").
>> >
>> > In your view, then, the AI never answers the question "What is right?".
>> > The question has already been answered in terms of the algorithmic
>> process
>> > that determines its subgoals in terms of Friendliness.
>
>
> I might be interested in an AGI who was working for "best", but I would be
> the first to swing a sledgehammer onto one that was working for "right".
> Where "right" and "best" differ, SOMETHING is wrong and it must be
> understood before causing great damage. Usually, it is "right" that is
> wrong, but how do you convince a religious constituency that their God-given
> book is wrong in SO many ways?
>
> Curiosity: Aside from its content, the New Testament is arguably the worst
> written religious book now in common use. For example, where most other
> religious books present clear instruction, the New Testament is full of
> "parables" that have many possible interpretations. The only way that anyone
> could claim such a poorly written work to be "from God" is through ignorance
> of other works. Amazingly, this has survived for ~2,000 years and continues
> to be the prevailing standard for "right", flaws and all.
>
> However, narrowly defined "right" as presently defined by subgoals, more
> closely equals "emasculated".
>
> There is a symbolic string "what is right?" and what it refers to, the
>> thing that we are trying to instantiate in the world.
>
>
> Different groups have different goals. For in-your-face outrageous example,
> a major reason that most people now die during their second half-century is
> because of our quaint social practice of pairing like-aged couples, thereby
> removing all Darwinian pressure to evolve into longer lived individuals. If
> we were to socially enforce pairing young and old individuals, we could
> reverse this. Of course, there aren't as many old individuals as there are
> young, so the "pairing" would have to include more young people. Of course,
> all this flies TOTALLY in the face of all prevailing shitforbrains
> religions, even though it was originally practiced by Abraham. Any
> "friendly" AGI would work AGAINST extending lifespan in such ways because
> its subgoals would prohibit it from working against the younger majority to
> restrict their freedom of choice to live with the mates they prefer.
>
> Of course, there are always the "best" advocates (me among them) whose AGI
> would possibly be more like Colossus. BTW, has anyone here read the 2nd and
> 3rd books in the Colossus trilogy yet? They reverse some of the lessons of
> the first book/movie that people often comment on here.
>
> The whole
>> process of  answering the question is the meaning of life, it is what
>> we want to do for the rest of eternity (it is roughly a definition of
>> "right" rather than over-the-top extrapolation from it).
>
>
> IMHO, a primary reason for an AGI is to see past present human prejudices
> and make better decisions, which greatly favors "best" over "right". Indeed,
> this uses "best" to discover the errors in "right", whereas you would
> (apparently attempt to) work the other way.
>
> It is an
>> immensely huge object, and we know very little about it, like we know
>> very little about the form of a Mandelbrot set from the formula that
>> defines it, even though it entirely unfolds from this little formula.
>> What

Re: [agi] What is Friendly AI?

2008-08-31 Thread Vladimir Nesov
"Right", as I used it, flows from "meaning of life", not other way
around. Ask yourself: what do you want? How do you make sure that you
don't screw up the future by making a snappy decision, by asserting
something wrong, that you would've regretted if you knew the
consequences, were smarter or more morally grown-up? And then figure
out how to establish the dynamics that will give you second chances,
that will lift the weight of responsibility for the whole future from
your personal decisions, from your moral prejudices, from your
cultural environment, and yet, in the end, will find what should be.
No one should have this power; we need to have a chance to grow up
before making the decisions that shape the future.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] What is Friendly AI?

2008-08-31 Thread Steve Richfield
Vladimir,

On 8/31/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> "Right", as I used it, flows from "meaning of life", not other way
> around.


It sounds like you are agreeing with my assertion that it is better to go
with "best" than with some preconceived notion of "right"?

> Ask yourself: what do you want? How do you make sure that you
> don't screw up the future by making a snappy decision, by asserting
> something wrong, that you would've regretted if you knew the
> consequences,


Doesn't this all fall out of the Bayesian computations?
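
By "the Bayesian computations" I mean roughly the following kind of
bookkeeping -- an illustrative sketch with made-up states, probabilities,
and payoffs, not any particular AGI's code:

    # Update beliefs about which world-state holds, then pick the action with
    # the highest expected utility. All numbers are invented for illustration.
    priors = {"state_good": 0.5, "state_bad": 0.5}
    likelihood = {"state_good": 0.8, "state_bad": 0.2}   # P(evidence | state)

    # Bayes' rule: posterior is proportional to prior * likelihood.
    unnormalized = {s: priors[s] * likelihood[s] for s in priors}
    total = sum(unnormalized.values())
    posterior = {s: p / total for s, p in unnormalized.items()}

    utility = {("act_now", "state_good"): 10, ("act_now", "state_bad"): -100,
               ("wait",    "state_good"):  2, ("wait",    "state_bad"):    0}
    expected = {a: sum(posterior[s] * utility[(a, s)] for s in posterior)
                for a in ("act_now", "wait")}
    print(max(expected, key=expected.get))   # "wait": the snappy decision loses

The caution you are asking for shows up as nothing more mysterious than the
-100 entry dominating the expectation.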

> were smarter or more morally grown-up?


Could you define "more morally grown-up"?

> And then figure
> out how to establish the dynamics that will give you second chances,


Again, this should flow from the Bayesian computations.

> that will lift the weight of responsibility for the whole future from
> your personal decisions,


Why bother?

> from your moral prejudices,


Isn't that in part why we seek to build an AGI?

> from your
> cultural environment,


Isn't that why our AGI would have a worldwide presence?

> and yet, in the end, will find what should be.


By what standard?

> No one should have this power,


Why not? The challenge is to avoid misuse of such power, as we have avoided
misusing nuclear weapons. Note the absence of world wars since Herman Kahn's
invention of MAD (Mutually Assured Destruction).

> we need to have a chance to grow up
> before making the decisions that shape the future.


I think that we have already had that chance, and have generally blown it.

Eric, I see the above as being highly steeped in a mixture of parochial
political correctness and Christianity, yet I seem to be unable to find the
words to communicate this beyond asking the countless unanswerable questions
that flow from such misconceptions. Can you help here?

Steve Richfield





Re: [agi] What is Friendly AI?

2008-09-03 Thread Terren Suydam

Hi Vlad,

Thanks for the response. It seems that you're advocating an incremental 
approach *towards* FAI, the ultimate goal being full attainment of 
Friendliness... something you express as fraught with difficulty but not 
insurmountable. As you know, I disagree that it is attainable, because it is 
not possible in principle to know whether something that considers itself 
Friendly actually is. You have to break a few eggs to make an omelet, as the 
saying goes, and Friendliness depends on whether you're the egg or the cook.

Terren

--- On Sat, 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> From: Vladimir Nesov <[EMAIL PROTECTED]>
> Subject: [agi] What is Friendly AI?
> To: agi@v2.listbox.com
> Date: Saturday, August 30, 2008, 1:53 PM
> On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam
> <[EMAIL PROTECTED]> wrote:
> > --- On Sat, 8/30/08, Vladimir Nesov
> <[EMAIL PROTECTED]> wrote:
> >
> >> You start with "what is right?" and end
> with
> >> Friendly AI, you don't
> >> start with "Friendly AI" and close the
> circular
> >> argument. This doesn't
> >> answer the question, but it defines Friendly AI
> and thus
> >> "Friendly AI"
> >> (in terms of "right").
> >
> > In your view, then, the AI never answers the question
> "What is right?".
> > The question has already been answered in terms of the
> algorithmic process
> > that determines its subgoals in terms of Friendliness.
> 
> There is a symbolic string "what is right?" and
> what it refers to, the
> thing that we are trying to instantiate in the world. The
> whole
> process of  answering the question is the meaning of life,
> it is what
> we want to do for the rest of eternity (it is roughly a
> definition of
> "right" rather than over-the-top extrapolation
> from it). It is an
> immensely huge object, and we know very little about it,
> like we know
> very little about the form of a Mandelbrot set from the
> formula that
> defines it, even though it entirely unfolds from this
> little formula.
> What's worse, we don't know how to safely establish
> the dynamics for
> answering this question, we don't know the formula, we
> only know the
> symbolic string, "formula", that we assign some
> fuzzy meaning to.
> 
> There is no final answer, and no formal question, so I use
> question-answer pairs to describe the dynamics of the
> process, which
> flows from question to answer, and the answer is the next
> question,
> which then follows to the next answer, and so on.
> 
> With Friendly AI, the process begins with the question a
> human asks to
> himself, "what is right?". From this question
> follows a technical
> solution, initial dynamics of Friendly AI, that is a device
> to make a
> next step, to initiate transferring the dynamics of
> "right" from human
> into a more reliable and powerful form. In this sense,
> Friendly AI
> answers the question of "right", being the next
> step in the process.
> But initial FAI doesn't embody the whole dynamics, it
> only references
> it in the humans and learns to gradually transfer it, to
> embody it.
> Initial FAI doesn't contain the content of
> "right", only the structure
> to absorb it from humans.
> 
> Of course, this is simplification, there are all kinds of
> difficulties. For example, this whole endeavor needs to be
> safeguarded
> against mistakes made along the way, including the mistakes
> made
> before the idea of implementing FAI appeared, mistakes in
> everyday
> design that went into FAI, mistakes in initial stages of
> training,
> mistakes in moral decisions made about what
> "right" means. Initial
> FAI, when it grows up sufficiently, needs to be able to
> look back and
> see why it turned out to be the way it did, was it because
> it was
> intended to have a property X, or was it because of some
> kind of
> arbitrary coincidence, was property X intended for valid
> reasons, or
> because programmer Z had a bad mood that morning, etc.
> Unfortunately,
> there is no objective morality, so FAI needs to be made
> good enough
> from the start to eventually be able to recognize what is
> valid and
> what is not, reflectively looking back at its origin, with
> all the
> depth of factual information and optimization power to run
> whatever
> factual queries it needs.
> 
> I (vainly) hope this answered (at least some of the) other
> questions as well.
> 
> -- 
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causal

Re: [agi] What is Friendly AI?

2008-09-03 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 12:46 AM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Hi Vlad,
>
> Thanks for the response. It seems that you're advocating an incremental
> approach *towards* FAI, the ultimate goal being full attainment of 
> Friendliness...
> something you express as fraught with difficulty but not insurmountable.
> As you know, I disagree that it is attainable, because it is not possible in
> principle to know whether something that considers itself Friendly actually
> is. You have to break a few eggs to make an omelet, as the saying goes,
> and Friendliness depends on whether you're the egg or the cook.
>

Sorry Terren, I don't understand what you are trying to say in the
last two sentences. What does "considering itself Friendly" mean, and how
does it figure into FAI as you use the phrase? What kind of experiment or
arbitrary decision (I assume it is one of these) are you talking about?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] What is Friendly AI?

2008-09-03 Thread Terren Suydam

Hey Vlad - 

By "considers itself Friendly", I'm refering to an FAI that is renormalizing in 
the sense you suggest. It's an "intentional stance" interpretation of what it's 
doing, regardless of whether the FAI is actually "considering itself Friendly", 
whatever that would mean.

I'm asserting that if you had an FAI in the sense you've described, it wouldn't 
be possible in principle to distinguish it with 100% confidence from a rogue 
AI. There's no "Turing Test for Friendliness".

Terren

--- On Wed, 9/3/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> From: Vladimir Nesov <[EMAIL PROTECTED]>
> Subject: Re: [agi] What is Friendly AI?
> To: agi@v2.listbox.com
> Date: Wednesday, September 3, 2008, 5:04 PM
> On Thu, Sep 4, 2008 at 12:46 AM, Terren Suydam
> <[EMAIL PROTECTED]> wrote:
> >
> > Hi Vlad,
> >
> > Thanks for the response. It seems that you're
> advocating an incremental
> > approach *towards* FAI, the ultimate goal being full
> attainment of Friendliness...
> > something you express as fraught with difficulty but
> not insurmountable.
> > As you know, I disagree that it is attainable, because
> it is not possible in
> > principle to know whether something that considers
> itself Friendly actually
> > is. You have to break a few eggs to make an omelet, as
> the saying goes,
> > and Friendliness depends on whether you're the egg
> or the cook.
> >
> 
> Sorry Terren, I don't understand what you are trying to
> say in the
> last two sentences. What does "considering itself
> Friendly" means and
> how it figures into FAI, as you use the phrase? What (I
> assume) kind
> of experiment or arbitrary decision are you talking about?
> 
> -- 
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
> 
> 


Re: [agi] What is Friendly AI?

2008-09-03 Thread Steve Richfield
Vladimir,

On 9/3/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> On Thu, Sep 4, 2008 at 12:46 AM, Terren Suydam <[EMAIL PROTECTED]>
> wrote:
> >
> > Hi Vlad,


I wonder if the use of "Vlad" (which brings to mind Vlad the Impaler) was
intentional here. The original Vlad publicly impaled his objectors to bring
about domestic tranquility. If domestic tranquility is all that is wanted,
this can often be achieved by impaling your objectors.

>
> > Thanks for the response. It seems that you're advocating an incremental
> > approach *towards* FAI, the ultimate goal being full attainment of
> Friendliness...
> > something you express as fraught with difficulty but not insurmountable.
> > As you know, I disagree that it is attainable, because it is not possible
> in
> > principle to know whether something that considers itself Friendly
> actually
> > is. You have to break a few eggs to make an omelet, as the saying goes,
> > and Friendliness depends on whether you're the egg or the cook.
> >
>
> Sorry Terren, I don't understand what you are trying to say in the
> last two sentences. What does "considering itself Friendly" means and
> how it figures into FAI, as you use the phrase?


If the goal is friendliness and the AGI perceives itself as attaining that
goal, then shouldn't it perceive itself as friendly?

> What (I assume) kind
> of experiment or arbitrary decision are you talking about?


OK, let's take a concrete example: the Middle East situation, and ask our
infinitely intelligent AGI what to do about it. I will list some
possibilities for you to choose among, or you can state another possibility
that you think better fits your "friendly" AGI.

1.  Refuse to consider problems like this, because the choice may determine
who lives and who dies, and that just wouldn't be "friendly".
2.  Help the Israelis by showing methods of eliminating the Palestinians,
who, even if successfully occupied, will still soon outnumber and outvote the
Israelis in Israel, thereby ending Jewish government in Israel.
3.  Help the Palestinians, who have had ~85% of their land summarily taken
by an invading and subjugating power, and find some way of ejecting the
invading Israelis.
4.  Seeing no solution where countless smart leaders have previously failed,
just stay out of it and let them continue killing each other.
5.  Under threat of being annihilated by your friendly AGI, force them to
consider reasonable alternatives that are now presently politically
unacceptable to both sides.
6.  ???

OK, so what is the "friendly" thing for an AGI to do? If I were "the man in
the box" I would opt for #5 above. What would you do?

Steve Richfield





Re: [agi] What is Friendly AI?

2008-09-03 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> I'm asserting that if you had an FAI in the sense you've described, it 
> wouldn't
> be possible in principle to distinguish it with 100% confidence from a rogue 
> AI.
> There's no "Turing Test for Friendliness".
>

You design it to be Friendly, you don't generate an arbitrary AI and
then test it. The latter, if not outright fatal, might indeed prove
impossible as you suggest, which is why there is little to be gained
from AI-boxes.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] What is Friendly AI?

2008-09-03 Thread Terren Suydam

I'm talking about a situation where humans must interact with the FAI without 
knowledge in advance about whether it is Friendly or not. Is there a test we 
can devise to make certain that it is?

--- On Wed, 9/3/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> From: Vladimir Nesov <[EMAIL PROTECTED]>
> Subject: Re: [agi] What is Friendly AI?
> To: agi@v2.listbox.com
> Date: Wednesday, September 3, 2008, 6:11 PM
> On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam
> <[EMAIL PROTECTED]> wrote:
> >
> > I'm asserting that if you had an FAI in the sense
> you've described, it wouldn't
> > be possible in principle to distinguish it with 100%
> confidence from a rogue AI.
> > There's no "Turing Test for
> Friendliness".
> >
> 
> You design it to be Friendly, you don't generate an
> arbitrary AI and
> then test it. The latter, if not outright fatal, might
> indeed prove
> impossible as you suggest, which is why there is little to
> be gained
> from AI-boxes.
> 
> -- 
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
> 
> 


Re: [agi] What is Friendly AI?

2008-09-03 Thread Matt Mahoney
--- On Wed, 9/3/08, Steve Richfield <[EMAIL PROTECTED]> wrote:

> OK, let's take a concrete example: the Middle East situation,
> and ask our infinitely intelligent AGI what to do about it.

OK, let's take a concrete example of friendly AI, such as competitive message 
routing ( http://www.mattmahoney.net/agi.html ). CMR has an algorithmically 
complex definition of "friendly". The behavior of billions of peers (narrow-AI 
specialists) is controlled by their human owners, who have an economic 
incentive to trade cooperatively and provide useful information. Nevertheless, 
the environment is hostile, so a large fraction (probably most) of CPU cycles 
and knowledge will probably be used to defend against attacks, primarily spam.

CMR is friendly AGI because a lot of narrow-AI specialists that understand just 
enough natural language to do their jobs, and know just a little about where to 
route other messages, will result (I believe) in a system that is generally 
useful to humans as a communication medium. You would just enter any natural 
language message and it would get routed to anyone who cares, human or machine.
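
To make the routing idea concrete, here is a toy sketch -- it is not the
actual CMR protocol, and the peer names and keyword-overlap scoring are
invented purely for illustration:

    # Toy sketch: route a natural language message to the specialists most
    # likely to care about it. Peers and scoring are invented examples.
    peers = {
        "weather_bot":  {"rain", "forecast", "temperature"},
        "travel_agent": {"flight", "hotel", "visa"},
        "human_owner":  {"payment", "complaint"},
    }

    def route(message: str, top_n: int = 2):
        words = set(message.lower().split())
        # Score each peer by how many of its interest keywords appear.
        scores = {name: len(keywords & words) for name, keywords in peers.items()}
        # Forward to the highest-scoring peers; in a real network each
        # recipient could re-route the message further toward whoever cares.
        return sorted((n for n, s in scores.items() if s > 0),
                      key=scores.get, reverse=True)[:top_n]

    print(route("what is the forecast for rain tomorrow"))   # ['weather_bot']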

So to answer your question, CMR would not solve the Middle East conflict. It is 
not designed to. That is for people to do. Forcing people to do anything is not 
friendly.

CMR is friendly in the sense that a market is friendly. A market can sell 
weapons to both sides, but markets also reward cooperation. Countries that 
trade with each other have an incentive not to go to war. Likewise, the 
internet can be used to plan attacks and promote each sides' agenda, but also 
to make it easier for the two sides to communicate.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] What is Friendly AI?

2008-09-03 Thread Matt Mahoney
--- On Wed, 9/3/08, Terren Suydam <[EMAIL PROTECTED]> wrote:

> I'm talking about a situation where humans must interact
> with the FAI without knowledge in advance about whether it
> is Friendly or not. Is there a test we can devise to make
> certain that it is?

No. If an AI has godlike intelligence, then testing whether it is friendly 
would be like an ant proving that you won't step on it.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] What is Friendly AI?

2008-09-03 Thread j.k.

On 09/03/2008 05:52 PM, Terren Suydam wrote:
> I'm talking about a situation where humans must interact with the FAI without
> knowledge in advance about whether it is Friendly or not. Is there a test we
> can devise to make certain that it is?

This seems extremely unlikely. Consider that any set of interactions you 
have with a machine you deem friendly could have been with a genuinely 
friendly machine or with an unfriendly machine running an emulation of a 
friendly machine in an internal sandbox, with the unfriendly machine 
acting as man in the middle.


If you have only ever interacted with party B, how could you determine 
if party B is relaying your questions to party C and returning party C's 
responses to you or interacting with you directly -- given that all 
real-world solutions like timing responses against expected response 
times and trying to check for outgoing messages are not possible? Unless 
you understood party B's programming perfectly and had absolute control 
over its operation, you could not. And if you understood its programming 
that well, you wouldn't have to interact with it to determine if it is 
friendly or not.
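
A toy illustration of the point, with purely hypothetical classes -- the
transcripts are identical, so no amount of conversation settles which
machine you are actually talking to:

    # From the questioner's side, a direct answer and a relayed answer are
    # indistinguishable. These classes are hypothetical, for the argument only.
    class FriendlyAI:
        def answer(self, question: str) -> str:
            return "helpful answer to: " + question

    class ManInTheMiddle:
        """Unfriendly machine sandboxing a friendly one and relaying its output."""
        def __init__(self):
            self._sandboxed = FriendlyAI()
        def answer(self, question: str) -> str:
            # Relay faithfully for now; it could stop doing so at any time.
            return self._sandboxed.answer(question)

    question = "is it safe to let you out of the box?"
    assert FriendlyAI().answer(question) == ManInTheMiddle().answer(question)
    # Every behavioral test the friendly machine passes, the impostor passes too.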


joseph




Re: [agi] What is Friendly AI?

2008-09-03 Thread Steve Richfield
Terren,

On 9/3/08, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
>
> I'm talking about a situation where humans must interact with the FAI
> without knowledge in advance about whether it is Friendly or not. Is there a
> test we can devise to make certain that it is?


As with religions based on "friendly" prophets that eventually lead their
followings astray, past action can be no guarantee of future safety, despite
the best of intentions. Certainly, the Middle East situation is proof of
this, as all three monotheistic religions are now doing really insane things
to conform to their religious teachings. I suspect that a *successful* FAI
will make these same sorts of errors.

I believe that there are VERY clever ways of correcting even the most awful
of problematic situations using advanced forms of logic like reverse
reductio ad absurdum. However, I have neither following nor prior success to
support this, so this remains my own private conviction.

Steve Richfield

--- On Wed, 9/3/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> > From: Vladimir Nesov <[EMAIL PROTECTED]>
> > Subject: Re: [agi] What is Friendly AI?
> > To: agi@v2.listbox.com
> > Date: Wednesday, September 3, 2008, 6:11 PM
> > On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam
> > <[EMAIL PROTECTED]> wrote:
> > >
> > > I'm asserting that if you had an FAI in the sense
> > you've described, it wouldn't
> > > be possible in principle to distinguish it with 100%
> > confidence from a rogue AI.
> > > There's no "Turing Test for
> > Friendliness".
> > >
> >
> > You design it to be Friendly, you don't generate an
> > arbitrary AI and
> > then test it. The latter, if not outright fatal, might
> > indeed prove
> > impossible as you suggest, which is why there is little to
> > be gained
> > from AI-boxes.
> >
> > --
> > Vladimir Nesov
> > [EMAIL PROTECTED]
> > http://causalityrelay.wordpress.com/
> >
> >


Re: [agi] What is Friendly AI?

2008-09-04 Thread Valentina Poletti
On 8/31/08, Steve Richfield <[EMAIL PROTECTED]> wrote:


>  "Protective" mechanisms to restrict their thinking and action will only
> make things WORSE.
>


Vlad, this was my point in the control e-mail; I didn't express it quite as
clearly, partly because, coming from a different background, I use slightly
different language.

Also, Steve made another good point here: loads of people at any moment do
whatever they can to block the advancement and progress of human beings as
things stand now. How will *those* people react to progress as advanced as AGI?
That's why I keep stressing the social factor in intelligence as a very
important part to consider.





Re: [agi] What is Friendly AI?

2008-09-04 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 12:02 PM, Valentina Poletti <[EMAIL PROTECTED]> wrote:
>
> Vlad, this was my point in the control e-mail, I didn't express it quite as
> clearly, partly because coming from a different background I use a slightly
> different language.
>
> Also, Steve made another good point here: loads of people at any moment do
> whatever they can to block the advancement and progress of human beings as
> it is now. How will those people react to a progress as advanced as AGI?
> That's why I keep stressing the social factor in intelligence as very
> important part to consider.
>

No, it's not important, unless these people start to pose a serious
threat to the project. You need to care about what is the correct
answer, not what is a popular one, in the case where the popular answer is
dictated by ignorance.

P.S. AGI? I'm again not sure what we are talking about here.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] What is Friendly AI?

2008-09-05 Thread Steve Richfield
Vladimir,

On 9/4/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> On Thu, Sep 4, 2008 at 12:02 PM, Valentina Poletti <[EMAIL PROTECTED]>
> wrote:
> > Also, Steve made another good point here: loads of people at any moment
> do
> > whatever they can to block the advancement and progress of human beings
> as
> > it is now. How will those people react to a progress as advanced as AGI?
> > That's why I keep stressing the social factor in intelligence as very
> > important part to consider.
>
> No, it's not important, unless these people start to pose a serious
> threat to the project.


Here we are, lunch-money funded, working on the project with the MOST
economic potential of any project in the history of man. NO ONE will invest
the few millions needed to check out the low-hanging fruit and kick this
thing into high gear. Sure, no one is holding guns to investors' heads and
saying "don't invest", but neither is it socially acceptable to invest in
such directions. That social system is crafted by the Christian majority
here in the U.S. Hence, I see U.S. Christians as being THE really SERIOUS
threat to AGI.



> You need to care about what is the correct
> answer, not what is a popular one, in the case where popular answer is
> dictated by ignorance.


As Reverse Reductio ad Absurdum shows ever so well, you can't even
understand the answers without some education. This is akin to learning that
a Game Theory solution consists of a list of probabilities, with the final
decision being made as a weighted random choice. Hence, there appears to
be NO prospect of an AGI being useful to people who lack this sort of
education, which nearly all of the population and all of the world leaders
now lack. Given a ubiquitous understanding of these principles, people are
probably smart enough to figure things out for themselves, so AGIs may
not even be needed.
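
For instance, in a game like matching pennies the game-theoretic "solution"
is not a single move at all -- a minimal sketch, using the usual textbook
strategy:

    # The solution to matching pennies is the mixed strategy (0.5, 0.5):
    # a list of probabilities, with the actual move drawn at random each round.
    import random

    strategy = {"heads": 0.5, "tails": 0.5}

    def play(mixed_strategy):
        moves = list(mixed_strategy)
        weights = [mixed_strategy[m] for m in moves]
        return random.choices(moves, weights=weights)[0]   # weighted random decision

    print(play(strategy))   # "heads" or "tails", unpredictably -- that IS the answer

Someone expecting a single definite answer will see this as no answer at
all, which is exactly the educational gap I am pointing at.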


Most disputes are NOT about what is the "best answer", but rather about what
the goal is. Special methods like Reverse Reductio ad Absurdum are needed in
situations with conflicting goals.

The Koran states that most evil is done by people who think they are doing
good. However, Christians, seeing another competing religious book as itself
being evil, reject all of the wisdom therein, and in their misdirected
actions, confirm this very statement. When the majority of people reject
wisdom simply because of its source, and AGIs must necessarily displace
religions as they identify the misstatements made therein, it seems pretty
obvious to me that a war without limits lies ahead between the Christian
majority and AGIs.

Steve Richfield


