Re: [agi] popularizing & injecting sense of urgency

2007-10-30 Thread Jiri Jelinek
> Because AI will save the world or destroy it?

Because it can significantly help us accomplish our goals -
whatever those happen to be at the moment. Destroying the Earth might be
in our best interest at some point in the future, but not now I guess :).
Of course it depends on who will control the AGI, but powerful tools that
could be used to destroy our planet have existed for some time now and we
are still here, so hopefully things will go well. And I don't think that
those who are clever enough to develop a powerful AGI are stupid enough
not to implement equally powerful safety features to keep the actions
the AGI could take on its own compatible with the goal system of those
in charge. Hopefully, "those in charge" will read the manual in case the
system is not intuitive enough in this respect ;-).. Sure, accidents
happen, but generally it's IMO better to have powerful tools than not to
have them. If we are too stupid to live, then we don't deserve to live..
IMO fair enough.. Let's give it a shot :-)

Regards,
Jiri Jelinek

On Oct 30, 2007 6:09 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>
> > I'll probably include a reference to the: Risks to civilization,
> > humans and planet Earth
> >
> http://en.wikipedia.org/wiki/Risks_to_civilization%2C_humans_and_planet_Earth
>
> Because AI will save the world or destroy it?
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>



Re: [agi] popularizing & injecting sense of urgency

2007-10-31 Thread Bob Mottram
From a promotional perspective these ideas seem quite weak.  To most
people, AI saving the world or destroying it just sounds crackpot (a
cartoon caricature of technology), whereas "helping us to accomplish
our goals" is too vague.






Re: [agi] popularizing & injecting sense of urgency

2007-10-31 Thread Jiri Jelinek
> From a promotional perspective these ideas seem quite weak.

It was an addition to other complex and relatively near-future issues,
e.g. the longevity- and demographics-related problems mentioned by
Minsky in his "emergency" presentation.
What are your suggestions?

> AI saving the world .. sounds crackpot

Because it's associated with many crap-filled AI stories, but there is
IMO nothing unrealistic about the general idea of AGI eventually
saving mankind from threats we would not be able to deal with
effectively without it. Just as you cannot outrun a car, you will not be
a better problem solver than a well-designed AGI. Many lives could be
saved even today. People are dying every day because leaders
don't make decisions as good as they could, given the data they
have (or could get). We just don't see through our data very well, and
the right tools can make a huge difference.

Regards,
Jiri Jelinek




Re: [agi] popularizing & injecting sense of urgency

2007-10-31 Thread Matt Mahoney
AGI does not need promoting.  AGI could potentially replace all human labor,
currently valued at US $66 trillion per year worldwide.  Google has gone from
nothing to the fifth biggest company in the U.S. in 10 years by solving just a
little bit of the AI problem better than its competitors.

We should be more concerned about the risks of AGI.  When humans can make
machines smarter than themselves, then so can those machines.  The result will
be an intelligence explosion.  http://mindstalk.net/vinge/vinge-sing.html

The problem is that humans cannot predict -- and therefore cannot control --
machines that are vastly smarter.  The SIAI ( http://www.singinst.org/ ) has
tried to address these risks, so far without success.  This really is a
fundamental problem, proved in a more formal sense by Shane Legg (
http://www.vetta.org/documents/IDSIA-12-06-1.pdf ).  Recursive self
improvement is a probabilistic, evolutionary process that favors rapid
reproduction and acquisition of computing resources (aka intelligence),
regardless of its initial goals.  Each successive generation gets smarter,
faster, and less dependent on human cooperation.

Whether this is good or bad is a philosophical question we can't answer.  It
is what it is.  The brain is a computer, programmed through evolution with
goals that maximize fitness but limit our capacity for rational introspection.
 Could your consciousness exist in a machine with different goals or different
memories?  Do you become the godlike intelligence that replaces the human
race?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] popularizing & injecting sense of urgency

2007-11-02 Thread Richard Loosemore



This is the worst possible summary of the situation, because instead of 
dealing with each issue as if there were many possibilities, it pretends 
that there is only one possible outcome to each issue.


In this respect it is as bad as (or worse than) all the science fiction 
nonsense that has distorted AI since before AI even existed.


Example 1:  "...humans cannot predict -- and therefore cannot control -- 
machines that are vastly smarter."  According to some interpretations of 
how AI systems will be built, this is simply not true at all.  If AI 
systems are built with motivation systems that are stable, then we could 
predict that they will remain synchronized with the goals of the human 
race until the end of history.  This does not mean that we could 
"predict" them in the sense of knowing everything they would say and do 
before they do it, but it would mean that we could know what their goals 
and values were - and this would be the only important sense of the 
word "predict".


Example 2:  "This really is a fundamental problem, proved in a more 
formal sense by Shane Legg 
(http://www.vetta.org/documents/IDSIA-12-06-1.pdf)."  This paper "proves" 
nothing whatever about the issue!


Example 3:  "Recursive self improvement is a probabilistic, evolutionary 
process that favors rapid reproduction and acquisition of computing 
resources (aka intelligence), regardless of its initial goals."  This is 
a statement about the goal system of an AGI, but it is extraordinarily 
presumptuous.  I can think of many, many types of non-goal-stack 
motivational systems for which this statement is a complete falsehood. 
I have described some of those systems on this list before, but this 
paragraph simply pretends that all such motivational systems just do not 
exist.


Example 4:  "Each successive generation gets smarter, faster, and less 
dependent on human cooperation."  Absolutely not true.  If "humans" take 
advantage of the ability to enhance their own intelligence up to the 
same level as the AGI systems, the amount of "dependence" between the 
two groups will stay exactly the same, for the simple reason that there 
will not be a sensible distinction between the two groups.








Richard Loosemore



Re: [agi] popularizing & injecting sense of urgency

2007-11-02 Thread Russell Wallace
On 11/2/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> This is the worst possible summary of the situation, because instead of
> dealing with each issue as if there were many possibilities, it pretends
> that there is only one possible outcome to each issue.
>
> In this respect it is as bad as (or worse than) all the science fiction
> nonsense that has distorted AI since before AI even existed.

I agree completely.



Re: [agi] popularizing & injecting sense of urgency

2007-11-02 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Example 4:  "Each successive generation gets smarter, faster, and less 
> dependent on human cooperation."  Absolutely not true.  If "humans" take 
> advantage of the ability to enhance their own intelligence up to the 
> same level as the AGI systems, the amount of "dependence" between the 
> two groups will stay exactly the same, for the simple reason that there 
> will not be a sensible distinction between the two groups.

So your answer to my question "do you become the godlike intelligence that
replaces the human race?" is "yes"?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] popularizing & injecting sense of urgency

2007-11-03 Thread Richard Loosemore

Matt Mahoney wrote:
> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > Example 4:  "Each successive generation gets smarter, faster, and less
> > dependent on human cooperation."  Absolutely not true.  If "humans" take
> > advantage of the ability to enhance their own intelligence up to the
> > same level as the AGI systems, the amount of "dependence" between the
> > two groups will stay exactly the same, for the simple reason that there
> > will not be a sensible distinction between the two groups.
>
> So your answer to my question "do you become the godlike intelligence that
> replaces the human race?" is "yes"?


Not correct: the answer is "no" because you used the inappropriate word 
"replace" in the above sentence.



Richard Loosemore
