*I just want to jump in here and say I appreciate the content of this post, as 
opposed to many of the posts of late, which were just name-calling and 
bickering... hope to see more content instead.*

Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ed Porter wrote:
> Jean-Paul,
> 
> Although complexity is one of the areas associated with AI where I have less
> knowledge than many on the list, I was aware of the general distinction you
> are making.  
> 
> What I was pointing out in my email to Richard Loosemore was that the
> definitions in his paper "Complex Systems, Artificial Intelligence and
> Theoretical Psychology" for "irreducible computability" and "global-local
> interconnect" are themselves not totally clear about this distinction.  As a
> result, when Richard says that those two issues are an unavoidable part
> of AGI design that must be much more deeply understood before AGI can
> advance, then by the looser definitions, which would cover the types of
> complexity involved in large matrix calculations and the design of a massive
> supercomputer, of course those issues would arise in AGI design, but it's no
> big deal because we have a long history of dealing with them.
> 
> But in my email to Richard I said I was assuming he was not using these
> looser definitions of the words, because if he were, they would not present
> the unexpected difficulties of the type he has been predicting.  I said I
> thought he was dealing more with the potentially unruly type of complexity
> that I assume you were talking about.
> 
> I am aware of that type of complexity being a potential problem, but I have
> designed my system to hopefully control it.  A modern-day, well-functioning
> economy is complex (people at the Santa Fe Institute often cite economies as
> examples of complex systems), but it is often amazingly unchaotic
> considering how loosely it is organized, how many individual entities it
> has in it, and how many transitions it is constantly undergoing.  Usually,
> unless something bangs on it hard (such as the price of a major
> commodity suddenly tripling), it has a fair amount of stability, while
> constantly creating new winners and losers (which is a productive form of
> mini-chaos).  Of course, in the absence of regulation it is naturally prone
> to boom and bust cycles.

Ed,

I now understand that you have indeed heard of complex systems before, 
but I must insist that your summary above characterizes them in a way 
that completely contradicts what they actually are!

A complex system such as the economy can and does have modes in which it 
appears to be stable.  This does not contradict the complexity at all.  A 
system is not complex because it is unstable.

I am struggling here, Ed.  I want to go on to explain exactly what I 
mean (and what complex systems theorists mean) but I cannot see a way to 
do it without writing half a book this afternoon.

Okay, let me try this.

Imagine that we got a bunch of computers and connected them with a 
network that allowed each one to talk to (say) the ten nearest machines.

Imagine that each one is running a very simple program:  it keeps a 
handful of local parameters (U, V, W, X, Y) and it updates the values of 
its own parameters according to what the neighboring machines are doing 
with their parameters.

How does it do the updating?  Well, imagine some really messy and 
bizarre algorithm that involves looking at the neighbors' values, then 
using them to cross-reference each other, and introducing delays and 
gradients and stuff.
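To make this concrete, here is a toy sketch of the kind of setup I have in 
mind (every specific below, the ring topology, the neighborhood size, and 
the particular messy rule, is just made up for illustration; any similarly 
funky rule would do):

```python
# Toy sketch of the networked-machines example.  The ring topology, the
# neighborhood size, and the particular messy rule are all invented for
# illustration; nothing here is meant as a serious model.
import random

N_MACHINES = 100
PARAMS = ["U", "V", "W", "X", "Y"]

# Each machine holds a handful of local parameters, initialized randomly.
state = [{p: random.random() for p in PARAMS} for _ in range(N_MACHINES)]

def neighbors(i, k=10):
    """Indices of the k nearest machines on a simple ring network."""
    return [(i + d) % N_MACHINES for d in range(-k // 2, k // 2 + 1) if d != 0]

def messy_update(i, snapshot):
    """A deliberately messy local rule: cross-reference two random neighbors'
    values and drag one of my own parameters toward their average."""
    me = dict(snapshot[i])
    a, b = random.sample(neighbors(i), 2)
    pa, pb = random.choice(PARAMS), random.choice(PARAMS)
    target = (snapshot[a][pa] + snapshot[b][pb]) / 2.0
    mine = random.choice(PARAMS)
    me[mine] += 0.5 * (target - me[mine])   # nudge toward the cross-referenced value
    return me

for step in range(1000):
    snapshot = [dict(s) for s in state]     # every machine reads the same snapshot
    state = [messy_update(i, snapshot) for i in range(N_MACHINES)]
    if step % 100 == 0:
        mean_u = sum(s["U"] for s in state) / N_MACHINES
        print(f"step {step:4d}: mean U = {mean_u:.3f}")   # watch for global regularities
```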

On the face of it, you might think that the result will be that the U V 
W X Y values just show a random sequence of fluctuations.

Well, we know two things about such a system.

1) Experience tells us that even though some systems like that are just 
random mush, there are some (a noticeably large number in fact) that 
have overall behavior that shows 'regularities'.  For example, much to 
our surprise we might see waves in the U values.  And every time two 
waves hit each other, a vortex is created for exactly 20 minutes, then 
it stops.  I am making this up, but that is the kind of thing that could 
happen.

2) The algorithm is so messy that we cannot do any math to analyse and 
predict the behavior of the system.  All we can say is that we have 
absolutely no techniques today that would let us make mathematical 
progress on the problem, and we do not know whether at ANY time in future 
history there will be a mathematics that can cope with this system.

What this means is that the waves and vortices we observed cannot be 
"explained" in the normal way.  We see them happening, but we do not 
know why they do.  The bizarre algorithm is the "low level mechanism" 
and the waves and vortices are the "high level behavior", and when I say 
there is a "Global-Local Disconnect" in this system, all I mean is that 
we are completely stuck when it comes to explaining the high level in 
terms of the low level.

Believe me, it is childishly easy to write down equations/algorithms for 
a system like this that are so profoundly intractable that no 
mathematician would even think of touching them.  You have to trust me 
on this.  Call your local Math department at Harvard or somewhere, and 
check with them if you like.

As soon as the equations involve funky little dependencies such as the 
following, any hope of mathematical analysis goes out the window:

"Pick two neighbors at random, then pick two parameters at random from 
each of these, and for the next day try to make one of my parameters 
(chosen at random, again) follow the average of those two as they were 
exactly 20 minutes ago, EXCEPT when neighbors 5 and 7 both show the same 
value of the V parameter, in which case drop this algorithm for the rest 
of the day and instead follow the substitute algorithm B...."

Now, this set of computers would be a wicked example of a complex 
system, even while the biggest supercomputer in the world, following a 
nice, well-behaved algorithm, would not be complex at all.
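Written out as code, a rule of that flavor might look roughly like this (all 
of the specifics below, the 20-step history, the special-cased neighbors 5 
and 7, the substitute algorithm B, are invented for illustration, and the 
day-long persistence is left out to keep the sketch short):

```python
# Sketch of one "funky" local dependency of the kind quoted above.  All the
# specifics (the 20-step history, neighbors 5 and 7, algorithm B) are made up,
# and the "rest of the day" persistence is omitted for brevity.
import random

PARAMS = ["U", "V", "W", "X", "Y"]

def funky_update(me, neighbor_history, algorithm_b):
    """neighbor_history[t][j][p] is parameter p of neighbor j, t steps ago."""
    now, twenty_steps_ago = neighbor_history[0], neighbor_history[20]

    # Exception clause: if neighbors 5 and 7 currently show the same V value,
    # drop this rule and hand control to the substitute algorithm B instead.
    if now[5]["V"] == now[7]["V"]:
        return algorithm_b(me, neighbor_history)

    # Otherwise: pick two neighbors and one parameter from each at random, and
    # make one of my own parameters (also chosen at random) follow the average
    # of those two values as they stood exactly twenty steps ago.
    a, b = random.sample(range(len(now)), 2)
    pa, pb = random.choice(PARAMS), random.choice(PARAMS)
    me[random.choice(PARAMS)] = (twenty_steps_ago[a][pa] + twenty_steps_ago[b][pb]) / 2.0
    return me
```

The rule is trivial to execute; what nobody can do is go from it to a 
prediction of what a whole population of machines obeying it will do.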

The summary of this is as follows:  there are some systems in which the 
interactions of the components are such that we must effectively declare 
that NO THEORY exists that would enable us to predict certain global 
regularities observed in these systems.

In the real world, systems are mixtures of complex and not-complex, of 
course, so don't think that I am saying that cognitive systems are 
completely complex:  I do not say that.

But when a system involves a *significant* amount of complexity, what do 
we do?

More to the point, what do we do when we have good reasons for 
suspecting that some CRUCIAL aspects of a system are going to introduce 
complexity, but we cannot be sure how much of an impact that complexity 
is going to have on the system?

THIS IS THE MOST IMPORTANT THING ABOUT MY ARGUMENT (which is why I am 
putting it so loudly ;-)).  We cannot know exactly how much of an impact 
complexity will have, because we have no way to measure the "amount" of 
complexity, nor any way to say how much impact we get from a given 
amount of complexity!  So we are in the dark.

Do we suspect that complexity is involved in intelligence?  I could 
present lots of reasoning here, but instead I will resort to quoting 
your favorite AGI researcher.  Ben Goertzel, in a message just a short 
while ago, said

"There is no doubt that complexity, in the sense typically used in
dynamical-systems-theory, presents a major issue for AGI systems"

Can I take it as understood that this is accepted, and move on?

So, yes, there is evidence that complexity is involved.

My argument is that when you examine the way that complexity has an 
effect on systems, you find that it can have very quiet, subtle effects 
that do not jump right out at you and say "HERE I AM!", but they just 
lurk in the background and make it quietly impossible for you to get the 
system up above a certain level of functioning.  To be more specific: 
when you really allow the symbol-building mechanisms, the learning 
mechanisms, and the inference-control mechanisms to do their thing in a 
full-scale system, the effects of tiny bits of complexity in the 
underlying design CAN have a huge impact.  One particular design choice, 
for example, could mean the difference between a system that works and 
one that looks like it ought to work but, when set running autonomously, 
gradually drifts into imbecility for no clear reason.

Now I want to qualify that last paragraph in a very important way:  I 
cannot say anything as strong as "complexity WILL have bad effects"; I 
can only say that "complexity has the potential to have these effects".

This means that we have a situation in which there is a DANGER that 
complexity will stop us from being able to diagnose why our systems are 
not working, but we cannot quantify or analyse  that danger:  it is a 
great big unknown.

But the fact that it is such an unknown has one big consequence:  if 
someone says "I have a gut instinct that complexity will not turn 
out to be a problem", then that statement is based on nothing but 
guesswork.  If the intuition were based on something that the person 
knows, then that means the person "knows" something that enables 
them to predict the behavior of a complex system, or to gauge the 
amount of complexity involved.  That would be bizarre:  how can someone 
know how much impact the complexity is going to have, when in the 
same breath they will admit that NOBODY currently understands just how 
much of an impact the complexity has?

The best that anyone can do is point to other systems in which there is 
a small amount of complexity and say:  "Well, these folks managed to 
understand their systems without getting worried about complexity, so 
why don't we assume that our problem is no worse than theirs?"  For 
example, someone could point to the dynamics of planetary systems and 
say that there is a small bit of complexity there, but it is a 
relatively small effect in the grand scheme of things.

The problem with that line of argument is that there are NO other examples 
of an engineered system with as much naked funkiness in the 
interactions between the low-level components.  It is as simple as that. 
Planets simply don't cut it:  the funky interactions there are 
ridiculously small compared with what we know exists in intelligence.

Try to think of some other example where we have tried to build a system 
that behaves in a certain overall way, but we started out by using 
components that interacted in a completely funky way, and we succeeded 
in getting the thing working in the way we set out to.  In all the 
history of engineering there has never been such a thing.

Conclusion:  there is a danger that the complexity that even Ben agrees 
must be present in AGI systems will have a significant impact on our 
efforts to build them.  But the only response to this danger at the 
moment is the bare statement made by people like Ben that "I do not 
think that the danger is significant".  No reason given, no explicit 
attack on any component of the argument I have given, only a statement 
of intuition, even though I have argued that intuition cannot in 
principle be a trustworthy guide here.

I see this as a head-in-the-sand response.

There, I wasted too much time on this again.  The only virtue of writing 
such a long post is that no one will read all of it, so there won't be 
many replies and I can therefore get back to real work.



Richard Loosemore


> So the system would need regulation.
> 
> Most of my system operates on a message-passing architecture with little
> concern for synchronization; it does not require low latencies, and most of
> its units operate under fairly similar code.  But hopefully when you get it
> all working together it will be fairly dynamic, and that dynamism will be
> under multiple controls.
> 
> I think we are going to have to get such systems up and running to find out
> just how hard or easy they will be to control, which I acknowledged in my
> email to Richard.  I think that once we do, we will be in a much better
> position to think about what is needed to control them.  I believe such
> control will be one of the major intellectual challenges in getting AGI to
> function at a human level.  The issue is not only preventing runaway
> conditions; it is optimizing the intelligence of the inferencing, which I
> think will be even more important and difficult.  (There are all sorts of
> damping mechanisms and selective biasing mechanisms that should be able to
> prevent many types of chaotic behaviors.)  But I am quite confident that,
> with multiple teams working on it, these control problems could be largely
> overcome in several years, with the systems themselves doing most of the
> learning.
> 
> Even a little OpenCog AGI on a PC could be an interesting first indication of
> the extent to which complexity will present control problems.  As I said, if
> you had 3 GB of RAM for representation, that should allow about 50 million
> atoms.  Over time you would probably end up with at least hundreds of
> thousands of complex patterns, and it would be interesting to see how easy it
> would be to properly control them, and to get them to work together as a
> properly functioning thought economy in whatever small interactive world
> they developed their self-organizing pattern base in.  Of course, on such a
> PC-based system you would only, on average, be able to do about 10 million
> pattern-to-pattern activations a second, so you would be talking about a
> fairly trivial system, but with say 100K patterns, it would be a good first
> indication of how easy or hard AGI systems will be to control.
> 
> Ed Porter
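For a rough sense of the figures in the quoted message above (all of them 
ballpark estimates from Ed's message, not measurements), the implied ratios 
work out roughly as follows:

```python
# Rough arithmetic on the figures quoted above (all of them ballpark
# estimates from Ed's message, not measured values).
ram_bytes      = 3 * 1024**3     # 3 GB of RAM set aside for representation
atoms          = 50_000_000      # "about 50 million atoms"
activations_s  = 10_000_000      # "about 10 million pattern-to-pattern activations a second"
patterns       = 100_000         # "say 100K patterns"

print(f"implied memory per atom: ~{ram_bytes / atoms:.0f} bytes")               # ~64 bytes
print(f"implied activations per pattern: ~{activations_s / patterns:.0f}/sec")  # ~100/sec
```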





_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
       

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=74580886-b5b577
