Richard,

A day or so ago I told you that if you could get me to eat my words with
regard to something involving your global-local disconnect, I would do so
with relish, because you would have taught me something valuable.

Well -- as certain parts of my email below indicate -- you have gotten me to
eat my words, at least partially.  You have provided me with either some
valuable new thoughts or a valuable reminder of some old ones (I attended a
two-day seminar on complexity in the early '90s after reading the popular
"Chaos" book).  You haven't flipped me around 180 degrees, but you have
shifted my compass somewhat.  So I suppose I should thank you.

My acknowledgement of this shift was indicated in multiple places in my
email below, in small ways (such as my statement that I had copied your long
explanation of your position to my file of valuable clippings from this
list) and in particular by the immediately following quote, with the
relevant portions capitalized.

                "ED PORTER=====> SO, NET, NET, RICHARD, RE-READING YOUR
PAPER AND READING YOUR BELOW LONG POST HAVE INCREASED MY RESPECT FOR YOUR
ARGUMENTS.  I AM SOMEWHAT MORE AFRAID OF COMPLEXITY GOTCHAS THAN I WAS TWO
DAYS AGO.  But I still am pretty confident (without anything beginning to
approach proof) such gotchas will not prevent use from making useful human
level AGI within the decade if AGI got major funding"

You may find anything less than total capitulation unsatisfying, but I think
it would improve the quality of exchange on this list if there were more
acknowledgment, after an argument, of when one has shifted one's
understanding in response to someone else's arguments, rather than always
trying to act as if one has won every aspect of every argument.

Ed Porter


-----Original Message-----
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 12:57 PM
To: agi@v2.listbox.com
Subject: RE: [agi] None of you seem to be able ...

Richard,

You will be happy to note that I have copied the text of your reply to my
"Valuable Clippings From AGI Mailing List" file.  Below are some comments.

>RICHARD LOOSEMORE=====> I now understand that you have indeed heard of
complex systems before, but I must insist that in your summary above you
have summarized what they are in such a way that completely contradicts what
they are!

A complex system such as the economy can and does have stable modes in 
which it appears to be stable.  This does not contradict the complexity 
at all.  A system is not complex because it is unstable.

ED PORTER=====> Richard, I was citing relatively stable economies as
exactly what you say they are: an example of a complex system that is
relatively stable.  So why is it that my summary "summarized what they are
in such a way that completely contradicts what they are!"?  I implied that
economies have traditionally had instabilities, such as boom and bust
cycles, and I am aware that, even with all our controls, other major
instabilities could strike, in much the same way that people can have
nervous breakdowns.

ED PORTER=====> With regard to the rest of your paper, I find it one of your
better-reasoned discussions of the problem of complexity.  I, like Ben,
agree it is a potential problem.  I said that in the email you were
responding to.  My intuition, like Ben's, tells me we will probably be able
to deal with it, but your paper is correct to point out that such intuitions
are really largely guesses.

>RICHARD LOOSEMORE=====> how can someone know how much impact the
complexity is going to have, when in the same breath they will admit that
NOBODY currently understands just how much of an impact the complexity has.

The best that anyone can do is point to other systems in which there is a
small amount of complexity and say:  "Well, these folks managed to
understand their systems without getting worried about complexity, so why
don't we assume that our problem is no worse than theirs?"  For example,
someone could point to the dynamics of planetary systems and say that there
is a small bit of complexity there, but it is a relatively small effect in
the grand scheme of things.

ED PORTER=====> A better example would be the world economy.  It's got 6
billion highly autonomous players.  It has all sorts of non-linearities and
complex connections.  Although it has fits and starts, it has surprising
stability considering everything that is thrown at it (it is not clear how
far this stability will hold into the singularity future), but still it is
an instructive example of how extremely complex things, with lots of
non-linearities, can work relatively well if there are the proper
motivations and controls.

>RICHARD LOOSEMORE=====>Problem with that line of argument is that there are
NO other examples of an engineering system with as much naked funkiness in
the interactions between the low level components.

ED PORTER=====> The key is to try to avoid and/or control funkiness in your
components.  Remember that an experiential system derives most of its
behavior by re-enacting, largely through substitutions and
probabilistic-transition-based synthesis, from past experience, with a bias
toward past experiences that have "worked" in some sense meaningful to the
machine.  This creates a tremendous bias toward desirable, rather than
funky, behaviors.
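
As a rough Python sketch of the kind of bias I mean (purely illustrative --
the episode store, scores, and function name below are made up for this
email, not my actual design):

import random

# Hypothetical episode store: (action sequence, outcome score in [0, 1]),
# where the score records how well that past behavior "worked".
episodes = [
    (["grasp", "lift", "place"], 0.9),   # worked well
    (["grasp", "drop"],          0.1),   # did not work
    (["push", "slide", "place"], 0.7),
]

def reenact(episodes):
    """Sample a past episode as the skeleton for the next behavior,
    weighted sharply toward episodes that worked (substitutions and
    probabilistic-transition-based synthesis would then modify it)."""
    weights = [score ** 2 for _, score in episodes]   # sharpen the bias
    actions, _ = random.choices(episodes, weights=weights, k=1)[0]
    return list(actions)

print(reenact(episodes))  # almost always a variant of a behavior that worked

The point is only that retrieval weighted by past success pushes generated
behavior toward the desirable end of the space, rather than letting every
funky possibility compete on equal terms.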

So, net, net, Richard, re-reading your paper and reading your long post
below have increased my respect for your arguments.  I am somewhat more
afraid of complexity gotchas than I was two days ago.  But I still am pretty
confident (without anything beginning to approach proof) that such gotchas
will not prevent us from making useful human-level AGI within the decade, if
AGI got major funding.

But I have been afraid for a long time that even the other type of
complexity (i.e., complication, which often involves some risk of
"complexity") means that it may be very difficult for us humans to keep
control of superhuman-level AGIs for very long, so I have always worried
about that sort of complexity gotcha.

But with regard to the complexity problem, it seems to me that we should
design systems with an eye to reducing their gnarliness, including planning
multiple types of control systems, and then, once we get an initial such
system up and running, try to find out what sort of complexity problems we
have.

Ed Porter



-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 11:46 AM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Ed Porter wrote:
> Jean-Paul,
> 
> Although complexity is one of the areas associated with AI where I have
> less knowledge than many on the list, I was aware of the general
> distinction you are making.
> 
> What I was pointing out in my email to Richard Loosemore was that the
> definitions in his paper "Complex Systems, Artificial Intelligence and
> Theoretical Psychology," for "irreducible computability" and
> "global-local interconnect" themselves are not totally clear about this
> distinction.  As a result, when Richard says that those two issues are an
> unavoidable part of AGI design that must be much more deeply understood
> before AGI can advance, then by the looser definitions, which would cover
> the types of complexity involved in large matrix calculations and the
> design of a massive supercomputer, of course those issues would arise in
> AGI design, but it's no big deal because we have a long history of
> dealing with them.
> 
> But in my email to Richard I said I was assuming he was not using these
> looser definitions of those words, because if he were, they would not
> present the unexpected difficulties of the type he has been predicting.
> I said I thought he was dealing more with the potentially unruly type of
> complexity I assume you were talking about.
> 
> I am aware of that type of complexity being a potential problem, but I
> have designed my system to hopefully control it.  A modern-day
> well-functioning economy is complex (people at the Santa Fe Institute
> often cite economies as examples of complex systems), but it is often
> amazingly unchaotic considering how loosely it is organized, how many
> individual entities it has in it, and how many transitions it is
> constantly undergoing.  Usually, unless something bangs on it hard (such
> as having the price of a major commodity all of a sudden triple), it has
> a fair amount of stability, while constantly creating new winners and
> losers (which is a productive form of mini-chaos).  Of course, in the
> absence of regulation it is naturally prone to boom and bust cycles.

Ed,

I now understand that you have indeed heard of complex systems before, 
but I must insist that in your summary above you have summarized what 
they are in such a way that completely contradicts what they are!

A complex system such as the economy can and does have stable modes in 
which it appears to be stable.  This does not contradict the complexity 
at all.  A system is not complex because it is unstable.

I am struggling here, Ed.  I want to go on to explain exactly what I 
mean (and what complex systems theorists mean) but I cannot see a way to 
do it without writing half a book this afternoon.

Okay, let me try this.

Imagine that we got a bunch of computers and connected them with a 
network that allowed each one to talk to (say) the ten nearest machines.

Imagine that each one is running a very simple program:  it keeps a 
handful of local parameters (U, V, W, X, Y) and it updates the values of 
its own parameters according to what the neighboring machines are doing 
with their parameters.

How does it do the updating?  Well, imagine some really messy and 
bizarre algorithm that involves looking at the neighbors' values, then 
using them to cross reference each other, and introduce delays and 
gradients and stuff.

On the face of it, you might think that the result will be that the U V 
W X Y values just show a random sequence of fluctuations.

Well, we know two things about such a system.

1) Experience tells us that even though some systems like that are just 
random mush, there are some (a noticeably large number in fact) that 
have overall behavior that shows 'regularities'.  For example, much to 
our surprise we might see waves in the U values.  And every time two 
waves hit each other, a vortex is created for exactly 20 minutes, then 
it stops.  I am making this up, but that is the kind of thing that could 
happen.

2) The algorithm is so messy that we cannot do any math to analyse and 
predict the behavior of the system.  All we can do is say that we have 
absolutely no techniques that will allow us to make mathematical progress 
on the problem today, and we do not know if at ANY time in future history 
there will be a mathematics that will cope with this system.

What this means is that the waves and vortices we observed cannot be 
"explained" in the normal way.  We see them happening, but we do not 
know why they do.  The bizarre algorithm is the "low level mechanism" 
and the waves and vortices are the "high level behavior", and when I say 
there is a "Global-Local Disconnect" in this system, all I mean is that 
we are completely stuck when it comes to explaining the high level in 
terms of the low level.

Believe me, it is childishly easy to write down equations/algorithms for 
a system like this that are so profoundly intractable that no 
mathematician would even think of touching them.  You have to trust me 
on this.  Call your local Math department at Harvard or somewhere, and 
check with them if you like.

As soon as the equations involve funky little dependencies such as

"Pick two neighbors at random, then pick two parameters at random from 
each of these, and for the next day try to make one of my parameters 
(chosen at random, again) follow the average of those two as they were 
exactly 20 minutes ago, EXCEPT when neighbors 5 and 7 both show the same 
value of the V parameter, in which case drop this algorithm for the rest 
of the day and instead follow the substitute algorithm B...."

As soon as dependencies like that enter the equations, any prospect of a 
clean mathematical analysis evaporates.  Now, this set of computers would 
be a wicked example of a complex system, even while the biggest 
supercomputer in the world, following a nice, well-behaved algorithm, 
would not be complex at all.
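
To make the thought experiment concrete, here is a toy Python sketch of
such a network (the update rule below only loosely paraphrases the funky
dependency quoted above, and all of the numbers are invented for
illustration, not prescribed by anything said here):

import random

N_NODES = 100          # nodes arranged in a ring
PARAMS = "UVWXY"       # five local parameters per node
DELAY = 20             # "as they were exactly 20 minutes ago", in steps
STEPS = 200

# history[t][node] -> dict of parameter values at time step t
history = [[{p: random.random() for p in PARAMS} for _ in range(N_NODES)]]

def neighbours(i):
    """Indices of the ten nearest nodes on the ring (five on each side)."""
    return [(i + d) % N_NODES for d in range(-5, 6) if d != 0]

for t in range(1, STEPS):
    prev = history[-1]
    delayed = history[max(0, t - DELAY)]
    new = []
    for i in range(N_NODES):
        state = dict(prev[i])
        nbrs = neighbours(i)
        # "Funky" rule: pick two neighbours and one parameter from each at
        # random, and drag one of my own parameters (also chosen at random)
        # toward the average of those two values as they were DELAY steps
        # ago ...
        a, b = random.sample(nbrs, 2)
        pa, pb = random.choice(PARAMS), random.choice(PARAMS)
        target = 0.5 * (delayed[a][pa] + delayed[b][pb])
        mine = random.choice(PARAMS)
        # ... EXCEPT when two particular neighbours agree on V, in which
        # case drop that rule and follow a substitute rule (here: decay).
        if abs(prev[nbrs[5]]["V"] - prev[nbrs[7]]["V"]) < 0.01:
            state[mine] *= 0.5
        else:
            state[mine] += 0.1 * (target - state[mine])
        new.append(state)
    history.append(new)

# Look for global regularities: a coarse summary of the U field over time.
for t in range(0, STEPS, 40):
    mean_u = sum(node["U"] for node in history[t]) / N_NODES
    print(f"step {t:3d}  mean U = {mean_u:.3f}")

Run it for a while and watch the coarse statistics: whether you get random
mush or wave-like regularities, nothing in the local rule itself tells you
in advance, and that is exactly the point.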

The summary of this is as follows:  there are some systems in which the 
interactions of the components are such that we must effectively declare 
that NO THEORY exists that would enable us to predict certain global 
regularities observed in these systems.

In the real world, systems are mixtures of complex and not-complex, of 
course, so don't think that I am saying that cognitive systems are 
completely complex:  I do not say that.

But when a system involves a *significant* amount of complexity, what do 
we do?

More to the point, what do we do when we have good reasons for 
suspecting that some CRUCIAL aspects of a system are going to introduce 
complexity, but we cannot be sure how much of an impact that complexity 
is going to have on the system?

THIS IS THE MOST IMPORTANT THING ABOUT MY ARGUMENT (which is why I am 
putting it so loudly ;-)).  We cannot know exactly how much of an impact 
complexity will have, because we have no way to measure the "amount" of 
complexity, nor any way to say how much impact we get from a given 
amount of complexity!  So we are in the dark.

Do we suspect that complexity is involved in intelligence?  I could 
present lots of reasoning here, but instead I will resort to quoting 
your favorite AGI researcher.  Ben Goertzel, in a message just a short 
while ago, said

"There is no doubt that complexity, in the sense typically used in
dynamical-systems-theory, presents a major issue for AGI systems"

Can I take it as understood that this is accepted, and move on?

So, yes, there is evidence that complexity is involved.

My argument is that when you examine the way that complexity has an 
effect on systems, you find that it can have very quiet, subtle effects 
that do not jump right out at you and say "HERE I AM!", but they just 
lurk in the background and make it quietly impossible for you to get the 
system up above a certain level of functioning.  To be more specific: 
when you really allow the symbol-building mechanisms, and the learning 
mechanisms, and the inference-control mechanisms to do their thing in a 
full scale system, the effects of tiny bits of complexity in the 
underlying design CAN have a huge impact.  One particular design choice, 
for example, could mean the difference between a system that works and one 
that looks like it ought to work but, when you set it running autonomously, 
gradually drifts into imbecility without there being any clear reason.

Now I want to qualify that last paragraph in a very important way:  I 
cannot say anything as strong as "complexity WILL have bad effects", I 
can only say that "complexity has the potential to have these effects".

This means that we have a situation in which there is a DANGER that 
complexity will stop us from being able to diagnose why our systems are 
not working, but we cannot quantify or analyse  that danger:  it is a 
great big unknown.

But the fact that it is such an unknown has one big consequence:  if 
someone says "I have a gut instinct that complexity will not turn out 
to be a problem," then that statement is based on nothing but guesswork.  
If the intuition were based on something that the person knows, then that 
means the person "knows" something that is enabling them to predict the 
behavior of a complex system, or to understand the amount of complexity 
involved.  That would be bizarre:  how can someone know how much impact 
the complexity is going to have, when in the same breath they will admit 
that NOBODY currently understands just how much of an impact the 
complexity has?

The best that anyone can do is point to other systems in which there is 
a small amount of complexity and say:  "Well, these folks managed to 
understand their systems without getting worried about complexity, so 
why don't we assume that our problem is no worse than theirs?"  For 
example, someone could point to the dynamics of planetary systems and 
say that there is a small bit of complexity there, but it is a 
relatively small effect in the grand scheme of things.

The problem with that line of argument is that there are NO other examples 
of an engineering system with as much naked funkiness in the 
interactions between the low-level components.  It is as simple as that. 
Planets simply don't cut it:  the funky interactions there are 
ridiculously small compared with what we know exists in intelligence.

Try to think of some other example where we have tried to build a system 
that behaves in a certain overall way, but we started out by using 
components that interacted in a completely funky way, and we succeeded 
in getting the thing working in the way we set out to.  In all the 
history of engineering there has never been such a thing.

Conclusion:  there is a danger that the complexity that even Ben agrees 
must be present in AGI systems will have a significant impact on our 
efforts to build them.  But the only response to this danger at the 
moment is the bare statement made by people like Ben that "I do not 
think that the danger is significant".  No reason given, no explicit 
attack on any component of the argument I have given, only a statement 
of intuition, even though I have argued that intuition cannot in 
principle be a trustworthy guide here.

I see this as a head-in-the-sand response.

There, I wasted too much time on this again.  The only virtue of writing 
such a long post is that no one will read all of it, so there won't be 
many replies and I can therefore get back to real work.



Richard Loosemore


> So the system would need regulation.
> 
> Most of my system operates on a message-passing system with little
> concern for synchronization, it does not require low latencies, and most
> of its units operate under fairly similar code.  But hopefully, when you
> get it all working together, it will be fairly dynamic, though that
> dynamism will be under multiple controls.
> 
> I think we are going to have to get such systems up and running to find
> out just how hard or easy they will be to control, which I acknowledged
> in my email to Richard.  I think that once we do, we will be in a much
> better position to think about what is needed to control them.  I
> believe such control will be one of the major intellectual challenges to
> getting AGI to function at a human level.  This issue is not only
> preventing runaway conditions, it is optimizing the intelligence of the
> inferencing, which I think will be even more important and difficult.
> (There are all sorts of damping mechanisms and selective biasing
> mechanisms that should be able to prevent many types of chaotic
> behaviors.)  But I am quite confident that with multiple teams working
> on it, these control problems could be largely overcome in several
> years, with the systems themselves doing most of the learning.
> 
> Even a little OpenCog AGI on a PC could be an interesting first
> indication of the extent to which complexity will present control
> problems.  As I said, if you had 3G of RAM for representation, that
> should allow about 50 million atoms.  Over time you would probably end
> up with at least hundreds of thousands of complex patterns, and it would
> be interesting to see how easy it would be to properly control them, and
> get them to work together as a properly functioning thought economy in
> whatever small interactive world they developed their self-organizing
> pattern base in.  Of course, on such a PC-based system you would only,
> on average, be able to do about 10 million pattern-to-pattern
> activations a second, so you would be talking about a fairly trivial
> system, but with say 100K patterns it would be a good first indication
> of how easy or hard AGI systems will be to control.
> 
> Ed Porter
