Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > Perhaps "worm" is the wrong word.  Unlike today's computer worms, it would
> be
> > intelligent, it would evolve, and it would not necessarily be controlled
> by or
> > serve the interests of its creator.  Whether or not it is malicious would
> > depend on the definitions of "good" and "bad", which depend on who you
> ask.  A
> > posthuman might say the question is meaningless.
> 
> So far, this just repeats the same nonsense:  your scenario is based on 
> unsupported assumptions.

OK, let me use the term "mass extinction".  The first AGI that implements RSI
is so successful that it kills off all its competition.

> The question of "knowing what we mean by 'friendly'" is not relevant, 
> because this kind of "knowing" is explicit declarative knowledge.

I can accept that an AGI can have empathy toward humans, although no two
people will agree exactly on what this means.

> > 6. RSI is deterministic.
> 
> Not correct.

This is the only point where we disagree, and my whole argument depends on it.

> The factors that make a collection of free-floating atoms (in a 
> zero-gravity environment) tend to coalesce into a sphere are not 
> "deterministic" in any relevant sense of the term.  A sphere forms 
> because a RELAXATION of all the factors involved ends up in the same 
> shape every time.
> 
> If you mean any other sense of "deterministic" then you must clarify.

I mean in the sense that if RSI were deterministic, then a parent AGI could
predict a child's behavior in any given situation.  If the parent knew as much
as the child, or had the capacity to know as much as the child could know,
then what would be the point of RSI?


> > Which part of my interpretation or my argument do you disagree with?
> 
> "Increasing intelligence requires increasing algorithmic complexity."
> 
> If its motivation system is built the way that I describe it, this is of 
> no relevance.

Instead of the fuzzy term "intelligence", let me say "amount of knowledge",
which most people would agree is correlated with intelligence.  Behavior
depends not just on goals but also on what you know.  A child AGI may have
empathy toward humans just like its parent, but may have a slightly different
idea of what it means to be human.

> "We know that a machine cannot output a description of another machine 
> with greater complexity."
> 
> When would it ever need to do such a thing?  This factoid, plucked from 
> computational theory, is not about "description" in the normal 
> scientific and engineering sense, it is about containing a complete copy 
> of the larger system inside the smaller.  I, a mere human, can 
> "describe" the sun and its dynamics quite well, even though the sun is a 
> system far larger and more complex than myself.  In particular, I can 
> give you some beyond-reasonable-doubt arguments to show that the sun 
> will retain its spherical shape for as long as it is on the Main 
> Sequence, without *ever* changing its shape to resemble Mickey Mouse. 
> Its shape is stable in exactly the same way that an AGI motivation 
> system would be stable, in spite of the fact that I cannot "describe" 
> this large system in the strict, computational sense in which some 
> systems "describe" other systems.

Your model of the sun does not include the position of every atom.  It has
less algorithmic complexity than your brain.  Why is your argument relevant?
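
(For concreteness, the complexity fact I keep appealing to is the standard
one from algorithmic information theory.  Writing K(y) for the Kolmogorov
complexity of y relative to a fixed universal machine U, and \ell(p) for
the length of a program p:

    K(y) \le \ell(p) + c_U    whenever U(p) = y,

where c_U is a constant depending only on the choice of reference machine.
A fixed, deterministic program cannot output anything whose complexity
exceeds its own length by more than that constant; any extra complexity in
the output has to come from outside input or randomness.)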



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Harshad RJ wrote:


On Feb 18, 2008 10:11 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:



You assume that the system does not go through a learning phase
(childhood) during which it acquires its knowledge by itself.  Why do
you assume this?  Because an AGI that was motivated only to seek
electricity and pheromones is going to be as curious, as active, as
knowledge seeking, as exploratory (etc etc etc) as a moth that has been
preprogrammed to go towards bright lights.  It will never learn anything 
by itself because you left out the [curiosity] motivation (and a lot
else besides!).  



I think your reply points back to the confusion between intelligence and 
motivation. "Curiosity" would be a property of intelligence and not 
motivation. After all, you need a motivation to be curious. Moreover, 
the curiosity would be guided by the kind of motivation. A benevolent 
motive would drive the curiosity to seek benevolent solutions, like say 
solar power, while a malevolent motive could drive it to seek 
destructive ones.


No confusion, really.  I do understand that "curiosity" is a difficult 
case that lies on the borderline, but what I am talking about is 
systematic exploration-behavior, or playing.  The kind of activity that 
children and curious adults engage in when they deliberately try to find 
something out *because* they feel a curiosity urge (so to speak).


What I think you are referring to is just the understanding-mechanisms 
that enable the "intelligence" part of the mind to solve problems or 
generally find things out.  Let's call this intelligence-mechanism a 
[Finding-Out] activity, whereas the type of thing children do best is 
[Curiosity], which is a motivation mode that they get into.


Then, using that terminology on your above paragraph:

"After all, you need a motivation to be curious" translates into "You 
need a motivation of some sort to engage in [Finding-Out]."  For 
example, before you try to figure out where a particular link is located 
on a web page, you need the (general) motivation that is pushing you to 
do this, as well as the (specific) goal that drives you to find that 
particular link.


"Moreover, the curiosity would be guided by the kind of motivation" 
translates into "The [Finding-Out] activity would be guided by the 
background motivation."  This is what I have just said.


"A benevolent motive would drive the curiosity to seek benevolent 
solutions, like say solar power, while a malevolent motive could drive 
it to seek destructive ones."   This translates into  "A benevolent 
motivation (and this really is a motivation, in my terminology) would 
drive the [Finding-Out] mechanisms to seek benevolent solutions, like 
say solar power, while a malevolent motivation (again, I would agree 
that this is a motivation) could drive the [Finding-Out] mechanisms to 
seek destructive ones."


What this all amounts to is that the thing I referred to as "curiosity" 
really is a motivation, because a creature that has an unstructured, 
background desire (a motivation) to find out about the world will 
acquire a lot of background knowledge and become smart.




I see motivation as a much more basic property of intelligence. It needs 
to answer "why" not "what" or "how".
 


But when we try to get an AGI to have the kind of structured behavior
necessary to learn by itself, we discover ... what?  That you cannot
have that kind of structured exploratory behavior without also having an
extremely sophisticated motivation system.


So, in the sense that I mentioned above, why do you say/imply that a 
pheromone (or neurotransmitter) based motivation is not sophisticated 
enough? And, without getting your hands messy with chemistry, how do you 
propose to "explain" your emotions to a non-human intelligence? How 
would you distinguish construction from destruction, chaos from order, or 
explain why two people being able to eat a square meal is somehow better 
than two million reading Dilbert comics?


I frankly don't know if I understand the question.

We already have creatures that seek nothing but chemical signals: 
amoebae do this.


Imagine a human baby that did nothing but try to sniff out breast milk: 
 it would never develop because it would never do any of the other 
things, like playing.  It would just sit there and try to sniff for the 
stuff it wanted.





In other words you cannot have your cake and eat it too:  you cannot
assume that this hypothetical AGI is (a) completely able to build its
own understanding of the world, right up to the human level and beyond,
while also being (b) driven by an extremely dumb motivation system that
makes the AGI seek only a couple of simple goals.


In fact, I do think a & b are together possible and they best describe 
how human brains work. Our motivation system is extremely "dumb": 
reproduction! And it is expressed with nothing more than a feedback 
loop using neurotransmitters.

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:


Matt Mahoney wrote:

On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations.  Seems hard to believe, but for various technical
reasons I think we can make a very powerful case that this is exactly
what will happen.  After that, every other AGI will be the same way
(again, there is an argument behind that).  Furthermore, there will not
be any "evolutionary" pressures going on, so we will not find that (say)
the first few million AGIs are built with perfect motivations, and then
some rogue ones start to develop.

In the context of a distributed AGI, like the one I propose at
http://www.mattmahoney.net/agi.html this scenario would require the first
AGI to take the form of a worm.

That scenario is deeply implausible - and you can only continue to 
advertise it because you ignore all of the arguments I and others have 
given, on many occasions, concerning the implausibility of that scenario.


You repeat this line of black propaganda on every occasion you can, but 
on the other hand you refuse to directly address the many, many reasons 
why that black propaganda is nonsense.


Why?


Perhaps "worm" is the wrong word.  Unlike today's computer worms, it would be
intelligent, it would evolve, and it would not necessarily be controlled by or
serve the interests of its creator.  Whether or not it is malicious would
depend on the definitions of "good" and "bad", which depend on who you ask.  A
posthuman might say the question is meaningless.


So far, this just repeats the same nonsense:  your scenario is based on 
unsupported assumptions.





If I understand your proposal, it is:
1. The first AGI to achieve recursive self improvement (RSI) will be friendly.


For a variety of converging reasons, yes.



2. "Friendly" is hard to define, but because the AGI is intelligent, it would
know what we mean and get it right.


No, not correct.  "Friendly" is not hard to define if you build the AGI 
with a full-fledged motivation system of the "diffuse" sort I have 
advocated before.  To put it in a nutshell, the AGI can be made to have 
a primary motivation that involves empathy with the human species as a 
whole, and what this does in practice is that the AGI would stay locked in 
sync with the general desires of the human race.


The question of "knowing what we mean by 'friendly'" is not relevant, 
because this kind of "knowing" is explicit declarative knowledge.




3. The goal system is robust because it is described by a very large number of
soft constraints.


Correct.  The motivation system, to be precise, depends for its 
stability on a large number of interconnections, so trying to divert it 
from its main motivation would be like unscrambling an egg.
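
Purely as an illustration of that point about stability coming from a large
number of soft constraints, here is a toy least-squares relaxation in Python
(made-up numbers; it is not a model of any actual motivation system): when a
state is pinned down by hundreds of weak, overlapping constraints, even
large perturbations relax back to essentially the same place every time.

    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_constraints = 20, 500

    # The preferred state m is not stored anywhere explicitly; it is simply
    # the joint minimum of many weak, overlapping soft constraints (rows of A).
    m = rng.standard_normal(dim)
    A = rng.standard_normal((n_constraints, dim))
    b = A @ m                       # every constraint is satisfied exactly at m

    def relax(x, steps=100, lr=0.5):
        """Nudge x a little toward satisfying all the soft constraints at once."""
        for _ in range(steps):
            x = x - lr * A.T @ (A @ x - b) / n_constraints
        return x

    for trial in range(3):
        x0 = m + 10.0 * rng.standard_normal(dim)   # a large "tampering" perturbation
        print(trial, round(float(np.linalg.norm(relax(x0) - m)), 6))

The point of the toy is only that no single constraint matters much on its
own: delete or corrupt a handful of the 500 rows and the minimum barely
moves, which is the sense in which a state defined by a large number of
interconnections is hard to divert.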




4. The AGI would not change the motivations or goals of its offspring because
it would not want to.


Exactly.  Not only would it not change them; it would take active steps 
to ensure that any other AGI would have exactly the same safeguards in 
its system that it (the mother) would have.




5. The first AGI to achieve RSI will improve its intelligence so fast that all
competing systems will be left far behind.  (Thus, a "worm").


No, not thus a worm.  It will simply be an AGI.  The concept of a 
computer worm is so far removed from this AGI that it is misleading to 
recruit the term.



6. RSI is deterministic.


Not correct.

The factors that make a collection of free-floating atoms (in a 
zero-gravity environment) tend to coalesce into a sphere are not 
"deterministic" in any relevant sense of the term.  A sphere forms 
because a RELAXATION of all the factors involved ends up in the same 
shape every time.


If you mean any other sense of "deterministic" then you must clarify.



My main point of disagreement is 6.  Increasing intelligence requires
increasing algorithmic complexity.  We know that a machine cannot output a
description of another machine with greater complexity.  Therefore
reproduction is probabilistic and experimental, and RSI is evolutionary.  Goal
reproduction can be very close but not exact.  (Although the AGI won't want to
change the goals, it will be unable to reproduce them exactly because goals
are not independent of the rest of the system).  Because RSI is very fast,
goals can change very fast.  The only stable goals in evolution are those that
improve fitness and reproduction, e.g. efficiency and acquisition of computing
resources.

Which part of my interpretation or my argument do you disagree with?


The last paragraph!  To my mind, this is a wild, free-wheeling 
non sequitur that ignores all the parameters laid down in the preceding 
paragraphs:



"Increasing intelligence requires increasing algorithmic complexity."

If its motivation system is built the way that I describe it, this is of no relevance.

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Bob Mottram wrote:

On 18/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote:

... might be true. Yes, a motivation of some form could be coded into
the system, but the paucity of expression in the level at which it is
coded, may still allow for "unintended" motivations to emerge out.



It seems that in the AGI arena much emphasis is put on designing goal
systems.  But in nature behavior is not always driven explicitly by
goals.  A lot of behavior I suspect is just drift, and understanding
this requires you to examine the dynamics of the system.  For example
if I'm talking on the phone and doodling with a pen this doesn't
necessarily imply that I explicitly have instantiated a goal of "draw
doodle".  Likewise within populations changes in the gene pool do not
necessarily mean that explicit selection forces are at work.

My supposition is that the same dynamics seen in natural systems will
also apply to AGIs, since these are all examples of complex dynamical
systems.


Oops: the above quote was attached to my name in error:  I believe 
Harshad wrote that, not I.



But regarding your observation, Bob:  I have previously advocated a 
distinction between "diffuse motivation systems" and "goal-stack 
systems".   As you say, most AI systems simply assume that what controls 
the AI is a goal stack.


I will write up this distinction on a web page shortly.
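
In the meantime, here is a deliberately cartoonish contrast in Python
(hypothetical drive names and weights, invented purely for illustration and
not a description of any real system): a goal-stack controller acts on
exactly one explicit goal at a time, while a diffuse controller lets every
drive weigh in on every choice, so behaviour shifts smoothly as the drives
wax and wane.

    def goal_stack_agent(goals, steps=5):
        """Classic goal-stack control: pop the top goal and act only on that."""
        stack, log = list(goals), []
        while stack and len(log) < steps:
            log.append("pursue: " + stack.pop())
        return log

    def diffuse_agent(drives, steps=5):
        """Diffuse motivation: no single goal owns behaviour; every drive
        biases every choice through soft, weighted contributions."""
        actions = {
            "explore":    {"curiosity": 0.9, "empathy": 0.2},
            "help_human": {"curiosity": 0.1, "empathy": 0.9},
            "idle":       {"curiosity": 0.0, "empathy": 0.0},
        }
        log = []
        for _ in range(steps):
            scores = {a: sum(drives[d] * w for d, w in ws.items())
                      for a, ws in actions.items()}
            log.append("pursue: " + max(scores, key=scores.get))
            drives["curiosity"] *= 0.8   # drives wax and wane over time
        return log

    print(goal_stack_agent(["write report", "fetch data", "boot up"]))
    print(diffuse_agent({"curiosity": 1.0, "empathy": 0.6}))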



Richard Loosemore



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Bob Mottram
On 18/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > ... might be true. Yes, a motivation of some form could be coded into
> > the system, but the paucity of expression in the level at which it is
> > coded, may still allow for "unintended" motivations to emerge out.


It seems that in the AGI arena much emphasis is put on designing goal
systems.  But in nature behavior is not always driven explicitly by
goals.  A lot of behavior I suspect is just drift, and understanding
this requires you to examine the dynamics of the system.  For example
if I'm talking on the phone and doodling with a pen this doesn't
necessarily imply that I explicitly have instantiated a goal of "draw
doodle".  Likewise within populations changes in the gene pool do not
necessarily mean that explicit selection forces are at work.

My supposition is that the same dynamics seen in natural systems will
also apply to AGIs, since these are all examples of complex dynamical
systems.
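
The gene-pool point above is easy to see in a toy simulation (a minimal
Wright-Fisher-style sketch in Python; the population size and numbers are
arbitrary): the frequency of a selectively neutral trait wanders, and often
fixes or disappears, with no selection force at work at all.

    import random

    def neutral_drift(pop_size=100, generations=200, p0=0.5, seed=1):
        """Track the frequency of a neutral allele: each generation is just
        a random sample of the previous one, with no fitness differences."""
        random.seed(seed)
        p, history = p0, [p0]
        for _ in range(generations):
            carriers = sum(1 for _ in range(pop_size) if random.random() < p)
            p = carriers / pop_size
            history.append(p)
        return history

    h = neutral_drift()
    # The frequency moves even though nothing is being selected for.
    print(h[0], h[50], h[100], h[-1])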



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Matt Mahoney

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> My argument was (at the beginning of the debate with Matt, I believe)
> >> that, for a variety of reasons, the first AGI will be built with
> >> peaceful motivations.  Seems hard to believe, but for various technical
> >> reasons I think we can make a very powerful case that this is exactly
> >> what will happen.  After that, every other AGI will be the same way
> >> (again, there is an argument behind that).  Furthermore, there will not
> >> be any "evolutionary" pressures going on, so we will not find that (say)
> >> the first few million AGIs are built with perfect motivations, and then
> >> some rogue ones start to develop.
> > 
> > In the context of a distributed AGI, like the one I propose at
> > http://www.mattmahoney.net/agi.html this scenario would require the first
> > AGI to take the form of a worm.
> 
> That scenario is deeply implausible - and you can only continue to 
> advertise it because you ignore all of the arguments I and others have 
> given, on many occasions, concerning the implausibility of that scenario.
> 
> You repeat this line of black propaganda on every occasion you can, but 
> on the other hand you refuse to directly address the many, many reasons 
> why that black propaganda is nonsense.
> 
> Why?

Perhaps "worm" is the wrong word.  Unlike today's computer worms, it would be
intelligent, it would evolve, and it would not necessarily be controlled by or
serve the interests of its creator.  Whether or not it is malicious would
depend on the definitions of "good" and "bad", which depend on who you ask.  A
posthuman might say the question is meaningless.

If I understand your proposal, it is:
1. The first AGI to achieve recursive self improvement (RSI) will be friendly.
2. "Friendly" is hard to define, but because the AGI is intelligent, it would
know what we mean and get it right.
3. The goal system is robust because it is described by a very large number of
soft constraints.
4. The AGI would not change the motivations or goals of its offspring because
it would not want to.
5. The first AGI to achieve RSI will improve its intelligence so fast that all
competing systems will be left far behind.  (Thus, a "worm").
6. RSI is deterministic.

My main point of disagreement is 6.  Increasing intelligence requires
increasing algorithmic complexity.  We know that a machine cannot output a
description of another machine with greater complexity.  Therefore
reproduction is probabilistic and experimental, and RSI is evolutionary.  Goal
reproduction can be very close but not exact.  (Although the AGI won't want to
change the goals, it will be unable to reproduce them exactly because goals
are not independent of the rest of the system).  Because RSI is very fast,
goals can change very fast.  The only stable goals in evolution are those that
improve fitness and reproduction, e.g. efficiency and acquisition of computing
resources.

Which part of my interpretation or my argument do you disagree with?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Harshad RJ
On Feb 18, 2008 10:11 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:


> You assume that the system does not go through a learning phase
> (childhood) during which it acquires its knowledge by itself.  Why do
> you assume this?  Because an AGI that was motivated only to seek
> electricity and pheromones is going to be as curious, as active, as
> knowledge seeking, as exploratory (etc etc etc) as a moth that has been
> preprogrammed to go towards bright lights.  It will never learn anything
> by itself because you left out the [curiosity] motivation (and a lot
> else besides!).
>

I think your reply points back to the confusion between intelligence and
motivation. "Curiosity" would be a property of intelligence and not
motivation. After all, you need a motivation to be curious. Moreover, the
curiosity would be guided by the kind of motivation. A benevolent motive
would drive the curiosity to seek benevolent solutions, like say solar
power, while a malevolent motive could drive it to seek destructive ones.

I see motivation as a much more basic property of intelligence. It needs to
answer "why" not "what" or "how".


> But when we try to get an AGI to have the kind of structured behavior
> necessary to learn by itself, we discover ... what?  That you cannot
> have that kind of structured exploratory behavior without also having an
> extremely sophisticated motivation system.
>

So, in the sense that I mentioned above, why do you say/imply that a
pheromone (or neurotransmitter) based motivation is not sophisticated
enough? And, without getting your hands messy with chemistry, how do you
propose to "explain" your emotions to a non-human intelligence? How would
you distinguish construction from destruction, chaos from order, or explain why
two people being able to eat a square meal is somehow better than two million
reading Dilbert comics?


In other words you cannot have your cake and eat it too:  you cannot
> assume that this hypothetical AGI is (a) completely able to build its
> own understanding of the world, right up to the human level and beyond,
> while also being (b) driven by an extremely dumb motivation system that
> makes the AGI seek only a couple of simple goals.
>

In fact, I do think a & b are together possible and they best describe how
human brains work. Our motivation system is extremely "dumb": reproduction!
And it is expressed with nothing more than a feedback loop using
neurotransmitters.
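
To make that claim concrete, here is a crude sketch of such a feedback loop
in Python (the two actions and all the numbers are invented purely for
illustration; this is obviously not a claim about real neurochemistry): a
single scalar "happiness" signal, computed from an energy level and a
social signal, is the only thing steering behaviour.

    def happiness(state):
        """A 'dumb' drive: one scalar, high only when both needs are met."""
        return min(state["energy"], state["pheromone"])

    def step(state, action):
        """Toy world: each action tops up one need; both needs decay each tick."""
        new = dict(state)
        key = "energy" if action == "seek_energy" else "pheromone"
        new[key] = min(1.0, new[key] + 0.3)
        for k in ("energy", "pheromone"):
            new[k] = max(0.0, new[k] - 0.1)
        return new

    state = {"energy": 0.2, "pheromone": 0.2}
    for t in range(8):
        # Greedy feedback: take whichever action raises the drive signal most.
        action = max(("seek_energy", "seek_approval"),
                     key=lambda a: happiness(step(state, a)))
        state = step(state, action)
        print(t, action, round(happiness(state), 2))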



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Matt Mahoney wrote:

On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations.  Seems hard to believe, but for various technical
reasons I think we can make a very powerful case that this is exactly
what will happen.  After that, every other AGI will be the same way
(again, there is an argument behind that).  Furthermore, there will not
be any "evolutionary" pressures going on, so we will not find that (say)
the first few million AGIs are built with perfect motivations, and then
some rogue ones start to develop.


In the context of a distributed AGI, like the one I propose at
http://www.mattmahoney.net/agi.html this scenario would require the first AGI
to take the form of a worm.


That scenario is deeply implausible - and you can only continue to 
advertise it because you ignore all of the arguments I and others have 
given, on many occasions, concerning the implausibility of that scenario.


You repeat this line of black propaganda on every occasion you can, but 
on the other hand you refuse to directly address the many, many reasons 
why that black propaganda is nonsense.


Why?




Richard Loosemore




It may indeed be peaceful if it depends on human
cooperation to survive and spread, as opposed to exploiting a security flaw. 
So it seems a positive outcome depends on solving the security problem.  If a
worm is smart enough to debug software and discover vulnerabilities faster
than humans can (with millions of copies working in parallel), the problem
becomes more difficult.  (And this *is* an evolutionary process).  I guess I
don't share Richard's optimism.

I suppose a safer approach would be centralized, like most of the projects of
people on this list.  But I don't see how these systems could compete with the
vastly greater resources (human and computer) already available on the
internet.  A distributed system with, say, Novamente and Google as two of its
millions of peers is certainly going to be more intelligent than either system
alone.

You may wonder why I would design a dangerous system.  First, I am not
building it.  (I am busy with other projects).  But I believe that for
practical reasons something like this will eventually be built anyway, and we
need to study the design to make it safer.




Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Vladimir Nesov
On Feb 18, 2008 7:41 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> In other words you cannot have your cake and eat it too:  you cannot
> assume that this hypothetical AGI is (a) completely able to build its
> own understanding of the world, right up to the human level and beyond,
> while also being (b) driven by an extremely dumb motivation system that
> makes the AGI seek only a couple of simple goals.
>

Great summary, Richard. You should probably write it up. This position
that there is a very difficult problem of friendly AGI and a much
simpler problem of idiotic AGI that still somehow poses a threat is
too easily accepted.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Matt Mahoney
On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> My argument was (at the beginning of the debate with Matt, I believe)
> that, for a variety of reasons, the first AGI will be built with
> peaceful motivations.  Seems hard to believe, but for various technical
> reasons I think we can make a very powerful case that this is exactly
> what will happen.  After that, every other AGI will be the same way
> (again, there is an argument behind that).  Furthermore, there will not
> be any "evolutionary" pressures going on, so we will not find that (say)
> the first few million AGIs are built with perfect motivations, and then
> some rogue ones start to develop.

In the context of a distributed AGI, like the one I propose at
http://www.mattmahoney.net/agi.html this scenario would require the first AGI
to take the form of a worm.  It may indeed be peaceful if it depends on human
cooperation to survive and spread, as opposed to exploiting a security flaw. 
So it seems a positive outcome depends on solving the security problem.  If a
worm is smart enough to debug software and discover vulnerabilities faster
than humans can (with millions of copies working in parallel), the problem
becomes more difficult.  (And this *is* an evolutionary process).  I guess I
don't share Richard's optimism.

I suppose a safer approach would be centralized, like most of the projects of
people on this list.  But I don't see how these systems could compete with the
vastly greater resources (human and computer) already available on the
internet.  A distributed system with, say, Novamente and Google as two of its
millions of peers is certainly going to be more intelligent than either system
alone.

You may wonder why I would design a dangerous system.  First, I am not
building it.  (I am busy with other projects).  But I believe that for
practical reasons something like this will eventually be built anyway, and we
need to study the design to make it safer.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Harshad RJ wrote:



On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Harshad RJ wrote:
 > I read the conversation from the start and believe that Matt's
 > argument is correct.

Did you mean to send this only to me?  It looks as though you mean it
for the list.  I will send this reply back to you personally, but let me
know if you prefer it to be copied to the AGI list.


Richard, thanks for replying. I did want to send it to the list, and 
your email address (as it turns out) was listed on the forum for 
replying to the list.






 > There is a difference between intelligence and motive which Richard
 > seems to be ignoring. A brilliant instance of intelligence could
still
 > be subservient to a malicious or ignorant motive, and I think that is
 > the crux of Matt's argument.

With respect, I was not at all ignoring this point:  this is a
misunderstanding that occurs very frequently, and I thought that I
covered it on this occasion (my apologies if I forgot to do so. I
have had to combat this point on so many previous occasions that I may
have overlooked yet another repeat).

The crucial words are "... could still be subservient to a malicious or
ignorant motive."

The implication behind these words is that, somehow, the "motive" of
this intelligence could arise after the intelligence, as a completely
independent thing over which we had no control.  We are so used to this
pattern in the human case (we can make babies, but we cannot stop the
babies from growing up to be dictators, if that is the way they happen
to go).

This implication is just plain wrong.  



I don't believe so, though your next statement..
 


If you build an artificial
intelligence, you MUST choose how it is motivated before you can even
switch it on. 



... might be true. Yes, a motivation of some form could be coded into 
the system, but the paucity of expression in the level at which it is 
coded, may still allow for "unintended" motivations to emerge out.


Say, for example, the motivation is coded in a form similar to current 
biological systems. The AGI system is motivated to keep itself happy, 
and it is happy when it has sufficient electrical energy at its disposal 
AND when the pheromones from nearby humans are all screaming "positive".


It is easy to see how this kind of motivation could cause unintended 
results. The AGI system could do dramatic things like taking over a 
nuclear power station and manufacturing its own pheromone supply from  a 
chemical plant. Or it could do more subtle things like, manipulating 
government policies to ensure that the above happens!


Even allowing for a higher level of coding for motivation, like those 
Asimov's Robot rules (#1: Thou shalt not harm any human), it is very 
easy for the system to go out of hand, since such codings are ambiguous. 
Should "stem cell research" be allowed for example? It might harm some 
embryos but help a greater number of adults. "Should prostitution be 
legalised?" It might harm the human gene pool in some vague way, or 
might even harm some specific individuals, but it also allows the 
victims themselves to earn some money and survive longer.


So, yes, motivation might be coded, but an AGI system would eventually 
need to have the *capability* to deduce its own motivation, and that 
emergent motivation could be malicious/ignorant.


I quote the rest of the message, only for the benefit of the list. 
Otherwise, my case rests here.





Stepping back for a moment, I think the problem that tends to occur in 
discussions of AGI motivation is that the technical aspects get 
overlooked when we go looking for nightmare scenarios.  What this means, 
for me, is that when I reply to a suggestion such as the one you give 
above, my response is not "That kind of AGI, and AGI behavioral problem, 
is completely unimaginable", but instead what I have to say is "That 
kind of AGI would not actually BE an AGI at all, because, for technical 
reasons, you would never be able to get such a thing to be intelligent 
in the first place".


There is a subtle difference between these two, but what I find is that 
most people mistakenly believe that I am making the first kind of 
response instead of the second.


So, to deal with your suggestion in detail.

When I say that some kind of motivation MUST be built into the system, I 
am pretty much uttering a truism:  an AGI without any kind of 
motivational system is like a swimmer with no muscles.  It has to be 
driven to do something, so no drives mean no activity.


Putting that to one side, then, what you propose is an AGI with an 
extremely simple motivational system:  seek electricity and high human 
pheromonal output.


I don't suggest that this is unimaginable (it is!), but what I suggest 
is that you implicitly assume a lot of stuff that, almost certainly, 
wi

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Harshad RJ
On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Harshad RJ wrote:
> > I read the conversation from the start and believe that Matt's
> > argument is correct.
>
> Did you mean to send this only to me?  It looks as though you mean it
> for the list.  I will send this reply back to you personally, but let me
> know if you prefer it to be copied to the AGI list.


Richard, thanks for replying. I did want to send it to the list, and your
email address (as it turns out) was listed on the forum for replying to the
list.



>
>
> > There is a difference between intelligence and motive which Richard
> > seems to be ignoring. A brilliant instance of intelligence could still
> > be subservient to a malicious or ignorant motive, and I think that is
> > the crux of Matt's argument.
>
> With respect, I was not at all ignoring this point:  this is a
> misunderstanding that occurs very frequently, and I thought that I
> covered it on this occasion (my apologies if I forgot to do so. I
> have had to combat this point on so many previous occasions that I may
> have overlooked yet another repeat).
>
> The crucial words are "... could still be subservient to a malicious or
> ignorant motive."
>
> The implication behind these words is that, somehow, the "motive" of
> this intelligence could arise after the intelligence, as a completely
> independent thing over which we had no control.  We are so used to this
> pattern in the human case (we can make babies, but we cannot stop the
> babies from growing up to be dictators, if that is the way they happen
> to go).
>
> This implication is just plain wrong.


I don't believe so, though your next statement..


> If you build an artificial
> intelligence, you MUST choose how it is motivated before you can even
> switch it on.


... might be true. Yes, a motivation of some form could be coded into the
system, but the paucity of expression in the level at which it is coded, may
still allow for "unintended" motivations to emerge out.

Say, for example, the motivation is coded in a form similar to current
biological systems. The AGI system is motivated to keep itself happy, and it
is happy when it has sufficient electrical energy at its disposal AND when
the pheromones from nearby humans are all screaming "positive".

It is easy to see how this kind of motivation could cause unintended
results. The AGI system could do dramatic things like taking over a nuclear
power station and manufacturing its own pheromone supply from  a chemical
plant. Or it could do more subtle things like, manipulating government
policies to ensure that the above happens!

Even allowing for a higher level of coding for motivation, like those
Asimov's Robot rules (#1: Thou shalt not harm any human), it is very easy
for the system to go out of hand, since such codings are ambiguous. Should
"stem cell research" be allowed for example? It might harm some embryos but
help more number of adults. "Should prostitution be legalised?" It might
harm the human gene pool in some vague way, or might even harm some specific
individuals, but it also allows the victims themselves to earn some money
and survive longer.

So, yes, motivation might be coded, but an AGI system would eventually need
to have the *capability* to deduce its own motivation, and that emergent
motivation could be malicious/ignorant.

I quote the rest of the message, only for the benefit of the list.
Otherwise, my case rests here.



>  Nature does this in our case (and nature is very
> insistent that it wants its creations to have plenty of selfishness and
> aggressiveness built into them, because selfish and aggressive species
> survive), but nature does it so quietly that we sometimes think that all
> she does is build an intelligence, then leave the motivations to grow
> however they will.  But what nature does quietly, we have to do
> explicitly.
>
> My argument was (at the beginning of the debate with Matt, I believe)
> that, for a variety of reasons, the first AGI will be built with
> peaceful motivations.  Seems hard to believe, but for various technical
> reasons I think we can make a very powerful case that this is exactly
> what will happen.  After that, every other AGI will be the same way
> (again, there is an argument behind that).  Furthermore, there will not
> be any "evolutionary" pressures going on, so we will not find that (say)
> the first few million AGIs are built with perfect motivations, and then
> some rogue ones start to develop.
>
> So, when you say that "A brilliant instance of intelligence could still
> be subservient to a malicious or ignorant motive" you are saying
> something equivalent to "Toyota could build a car with a big red button
> on the roof, and whenever anyone slapped the button a nuclear weapon
> would go off in the car's trunk."  Technically, yes, I am sure Toyota
> could find a way to do this!  But doing this kind of thing is not an
> automatic consequence (or even a remotely probably 

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-01-28 Thread Randall Randall


On Jan 28, 2008, at 12:03 PM, Richard Loosemore wrote:
Your comments below are unfounded, and all the worse for being so  
poisonously phrased.  If you read the conversation from the  
beginning you will discover why:  Matt initially suggested the idea  
that an AGI might be asked to develop a virus of maximum potential,  
for purposes of testing a security system, and that it might  
respond by inserting an entire AGI system into the virus, since  
this would give the virus its maximum potential.  The thrust of my  
reply was that this entire idea of Matt's made no sense, since the  
AGI could not be a "general" intelligence if it could not see the  
full implications of the request.


Please feel free to accuse me of gross breaches of rhetorical  
etiquette, but if you do, please make sure first that I really have  
committed the crimes.  ;-)


I notice everyone else has (probably wisely) ignored
my response anyway.

I thought I'd done well at removing the most "poisonously
phrased" parts of my email before sending, but I agree I
should have waited a few hours and revisited it before
sending, even so.  In any case, changes in meaning due to
sloppy copying of others' arguments are just SOP for most
internet arguments these days.  :(

To bring this slightly back to AGI:

The thrust of my reply was that this entire idea of Matt's made no  
sense, since the AGI could not be a "general" intelligence if it  
could not see the full implications of the request.


I'm sure you know that most humans fail to see the full
implications of *most* things.  Is it your opinion, then,
that a human is not a general intelligence?

--
Randall Randall <[EMAIL PROTECTED]>
"If I can do it in Alabama, then I'm fairly certain you
 can get away with it anywhere." -- Dresden Codak





[agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-01-28 Thread Richard Loosemore


Randall,

Your comments below are unfounded, and all the worse for being so 
poisonously phrased.  If you read the conversation from the beginning 
you will discover why:  Matt initially suggested the idea that an AGI 
might be asked to develop a virus of maximum potential, for purposes of 
testing a security system, and that it might respond by inserting an 
entire AGI system into the virus, since this would give the virus its 
maximum potential.  The thrust of my reply was that this entire idea of 
Matt's made no sense, since the AGI could not be a "general" 
intelligence if it could not see the full implications of the request.


Please feel free to accuse me of gross breaches of rhetorical etiquette, 
but if you do, please make sure first that I really have committed the 
crimes.  ;-)




Richard Loosemore







Randall Randall wrote:


I pulled in some extra context from earlier messages to
illustrate an interesting event, here.

On Jan 27, 2008, at 12:24 PM, Richard Loosemore wrote:

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

Matt Mahoney wrote:

Suppose you
ask the AGI to examine some operating system or server software to look for
security flaws.  Is it supposed to guess whether you want to fix the flaws or
write a virus?


If it has a moral code (it does) then why on earth would it have to
guess whether you want it fix the flaws or fix the virus?


If I hired you as a security analyst to find flaws in a piece of software, and
I didn't tell you what I was going to do with the information, how would you
know?


This is so silly it is actually getting quite amusing... :-)

So, you are positing a situation in which I am an AGI, and you want to 
hire me as a security analyst, and you say to me:  "Please build the 
most potent virus in the world (one with a complete AGI inside it), 
because I need it for security purposes, but I am not going to tell 
you what I will do with the thing you build."


And we are assuming that I am an AGI with at least two neurons to rub 
together?


How would I know what you were going to do with the information?

I would say "Sorry, pal, but you must think I was born yesterday.  I 
am not building such a virus for you or anyone else, because the 
dangers of building it, even as a test, are so enormous that it would 
be ridiculous.  And even if I did think it was a valid request, I 
wouldn't do such a thing for *anyone* who said 'I cannot tell you what 
I will do with the thing that you build'!"


In the context of the actual quotes, above, the following statement
is priceless.

It seems to me that you have completely lost track of the original 
issue in this conversation, so your other comments are meaningless 
with respect to that original context.


Let's look at this again:


--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

Matt Mahoney wrote:

Suppose you
ask the AGI to examine some operating system or server software to look for
security flaws.  Is it supposed to guess whether you want to fix the flaws or
write a virus?


If it has a moral code (it does) then why on earth would it have to
guess whether you want it fix the flaws or fix the virus?


Notice that in Matt's "Is it supposed to guess whether you want to fix the
flaws or write a virus?" there's no suggestion that you're asking the AGI
to write a virus, only that you're asking it for security information.
Richard then quietly changes "to" to "it", thereby changing the meaning of
the sentence to the form he prefers to argue against (however
ungrammatical), and then he manages to finish up by accusing *Matt* of
forgetting what Matt originally said on the matter.

--
Randall Randall <[EMAIL PROTECTED]>
"Someone needs to invent a Bayesball bat that exists solely for
 smacking people [...] upside the head." -- Psy-Kosh on reddit.com





