Richard,

I read your core definitions of "computationally irreducible" and
"global-local disconnect", and by themselves they really don't distinguish
very well between "complicated" and "complex".

But I did assume from your paper and other writings that you meant
"complex", although your core definitions are not very clear about the
distinction.

Ed Porter

-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 10:31 AM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Ed Porter wrote:
> Richard, 
> 
>       I quickly reviewed your paper, and you will be happy to note that I
> had underlined and highlighted it, so such skimming was more valuable than
> it otherwise would have been.
> 
>       With regard to "COMPUTATIONAL IRREDUCIBILITY", I guess a lot depends
> on definition. 
> 
>       Yes, my vision of a human AGI would be a very complex machine.  Yes,
> a lot of its outputs could only be made with human-level reasonableness
> after a very large amount of computation.  I know of no shortcuts around
> the need to do such complex computation.  So it arguably falls into what
> you say Wolfram calls "computational irreducibility."
> 
>       But the same could be said for any of many types of computations,
> such as large matrix equations or Google's map-reduces, which are
> routinely performed on supercomputers.
> 
>       So if that is how you define irreducibility, it's not that big a
> deal.  It just means you have to do a lot of computing to get an answer,
> which I have assumed all along for AGI.  (Remember, I am the one pushing
> for breaking the small-hardware mindset.)  But it doesn't mean we don't
> know how to do such computing, or that we have to do a lot more complexity
> research, of the type suggested in your paper, before we can successfully
> design AGIs.
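Wolfram's notion of irreducibility can be made concrete with a toy system.
The sketch below (an editor's illustration, not from either paper) simulates
elementary cellular automaton Rule 30, whose center column has no known
shortcut: to get the bit at step t you apparently must run all t steps.

```python
# Elementary cellular automaton Rule 30: a textbook example of apparent
# computational irreducibility.  Knowing the cell values at step t seems
# to require actually simulating all t steps -- no shortcut is known.
def rule30_step(cells):
    """Apply Rule 30 to one row of 0/1 cells, padding the edges with 0."""
    padded = [0] + cells + [0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def center_column(steps):
    """Bits of the center column after `steps` generations from a single 1."""
    row = [0] * steps + [1] + [0] * steps   # wide enough for the light cone
    bits = [row[steps]]
    for _ in range(steps):
        row = rule30_step(row)
        bits.append(row[steps])
    return bits

print(center_column(7))  # [1, 1, 0, 1, 1, 1, 0, 0]
```

The contrast with a large matrix computation is that here no amount of
cleverness is known to compress the simulation, whereas matrix problems
have well-understood structure and fast algorithms.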
> 
>       With regard to "GLOBAL-LOCAL DISCONNECT", again it depends on what
> you mean.
> 
>       You define it as
> 
>               "The GLD merely signifies that it might be difficult or
> impossible to derive analytic explanations of global regularities that we
> observe in the system, given only a knowledge of the local rules that
> drive the system."
> 
>       I don't know what this means.  Even the Game of Life referred to in
> your paper can be analytically explained.  It is just that some of the
> things that happen are rather complex and would take a lot of computing to
> analyze.  So does the global-local disconnect apply to anything where an
> explanation requires a lot of analysis?  If that is the case, then any
> large computation, of the type which mankind does and designs every day,
> would have a global-local disconnect.
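The Game of Life point is easy to check directly: the local rules fit in a
few lines of code, while global structures such as gliders have to be
discovered by running them.  A minimal sketch of the standard rules (an
editor's illustration, not code from the thread):

```python
from collections import Counter

# The *local* rules of Conway's Game of Life fit in a few lines; the
# debated question is how much of the *global* behavior (gliders,
# oscillators, ...) can be predicted from these rules analytically.
def life_step(live):
    """One generation of Life.  `live` is a set of (x, y) cell coordinates."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 generations the pattern reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```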
> 
>       If that is the case, the global-local disconnect is no big deal.  We
> deal with it every day.

Forgive me, but I am going to have to interrupt at this point.

Ed, what is going on here is that my paper is about "complex systems", 
but you are taking that phrase to mean something like "complicated 
systems" rather than the real meaning.  The real meaning is very much 
not "complicated systems"; it has to do with a particular class of 
systems that are labelled "complex" BECAUSE they show overall behavior 
that appears to be disconnected from the mechanisms out of which the 
systems are made up.

The problem is that "complex systems" has a specific technical meaning. 
If you look at the footnote in my paper (I think it is on page one), 
you will find that the very first time I use the word "complex" I make 
sure that my audience does not take it the wrong way, by explaining 
that it does not refer to a "complicated system".

Everything you are saying here in this post is missing the point, so 
could I request that you do some digging around to figure out what 
complex systems are, and then make a second attempt?  I am sorry:  I do 
not have the time to write a long introductory essay on complex systems 
right now.

Without this understanding, the whole of my paper will seem like 
gobbledegook.  I am afraid this is the result of skimming through the 
paper.  I am sure you would have noticed the problem if you had gone 
more slowly.



Richard Loosemore.


>       I don't know exactly what you mean by "regularities" in the above
> definition, but I think you mean something equivalent to patterns or
> meaningful generalizations.  In many types of computing commonly done, you
> don't know what the regularities will be without tremendous computing.
> For example, in principal component analysis you often don't know what the
> major dimensions of a distribution will be until you do a tremendous
> amount of computation.  Does that mean there is a GLD in that problem?  If
> so, it doesn't seem to be a big deal.  PCA is done all the time, as are
> all sorts of other complex matrix computations.
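The PCA example can be made concrete.  In the sketch below (synthetic data,
chosen by the editor purely for illustration), the dominant direction
(1, 1)/sqrt(2) is built into the sample, yet recovering it still requires
computing the covariance matrix and its eigenvectors over the whole data set:

```python
import numpy as np

# The "major dimensions" of a distribution are not visible in any single
# data point; they emerge only from the covariance structure of the set.
rng = np.random.default_rng(0)

# Synthetic data: dominant variance deliberately placed along (1, 1)/sqrt(2).
t = rng.normal(0.0, 3.0, size=1000)            # large spread along (1, 1)
noise = rng.normal(0.0, 0.3, size=(1000, 2))   # small isotropic noise
data = np.outer(t, [1.0, 1.0]) / np.sqrt(2) + noise

# Principal components = eigenvectors of the covariance matrix.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
top = eigvecs[:, np.argmax(eigvals)]           # direction of greatest variance

print(np.round(np.abs(top), 2))  # approximately [0.71 0.71]
```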
> 
>       But you have implied multiple times that you think the global-local
> disconnect is a big, big deal.  You have implied multiple times that it
> presents a major problem for developing AGI.  If I interpret your prior
> statements, taken in conjunction with your paper, correctly, I am guessing
> your major thrust is that it will be very difficult to design AGIs where
> the desired behavior is to be the result of many causal relations between
> a vast number of active elements, because in such a system the causality
> is so non-linear and complex that we cannot currently properly think and
> design in terms of it.
> 
>       Although this proposition is not obviously true on its face, it is
> arguably also not obviously false on its face.
> 
>       Although it is easy to design systems whose behavior would be
> sufficiently chaotic that such design would be impossible, it seems likely
> that it is also possible to design complex systems in which the behavior
> is not so chaotic or unpredictable.  Take the internet.  Something like
> 10^8 computers talk to each other, and in general it works as designed.
> Take IBM's supercomputer BlueGene L: 64K dual-core processors, each with
> at least 256 MBytes, all capable of receiving and passing messages at 4GHz
> on each of over 3 dimensions, and capable of performing 100's of trillions
> of FLOP/sec.  Such a system probably contains at least 10^14 non-linear,
> separately functional elements, and yet it works as designed.  If there is
> a global-local disconnect in the BlueGene L, which there could be
> depending on your definition, it is not a problem for most of the
> computation it does.
> 
>       So why are we to believe, as your paper seems to suggest, that we
> have to do some scan of complexity space before we can design AGI systems?
> 
>       In the AGI I am thinking of, one would be able to predict many of
> the behaviors of the machine, at least at a general level, from local
> rules, because the system has been designed to produce certain types of
> results in certain types of situations.  Of course, because the system is
> large, the inferencing from each of the many local rules would require a
> hell of a lot of computing, so much computing that a human could not, in a
> human lifetime, understand everything the machine was doing in even a
> relatively short period of its operation.
> 
>       But because the system is a machine whose behavior is largely
> dominated by sensed experience, and by what behaviors and representations
> have proven themselves to be useful in that experience, and because the
> system has control mechanisms, such as markets and currency-control
> mechanisms, for modulating the general level of activity and
> discriminating against unproductive behaviors and parameter settings, the
> chance of more than a small, and often beneficial, amount of chaotic
> behavior is greatly reduced.  (But until we actually start running such
> systems we will not know for sure.)
> 
>       It seems to me (perhaps mistakenly) you have been saying that the
> global-local disconnect is some great dark chasm which has to be
> extensively explored before we humans can dare begin to seek to design
> complex AGIs.
> 
>       I have seen no evidence for that.  It seems to me that chaotic
> behavior is, to a lesser degree, like combinatorial explosion.  It is a
> problem we should always keep in mind, which limits some of the things we
> can do, but which in general we know how to avoid.  More knowledge about
> it might be helpful, but it is not clear at this point how much of it is
> needed, and, if it were needed, which particular aspects would be needed.
> 
>       Your paper says 
> 
>               "We sometimes talk of the basic units of knowledge --
> concepts or symbols -- as if they have little or no internal structure,
> and as if they exist at the base level of description of our system.  This
> could be wrong: we could be looking at the equivalent of the second level
> in the Life automaton, therefore seeing nothing more than an approximation
> of how the real system works."
> 
>       I don't see why the atoms (nodes and links) of an AGI cannot be
> represented as relatively straightforward digital representations, such as
> a struct or object class.  A more complex NL-level concept (such as
> "Iraq", to use Ben's common example) might involve hundreds of thousands
> or millions of such nodes and links, but it seems to me there are ways to
> deal with such complexity in a relatively orderly, relatively
> computationally efficient (by that I mean scalable, but not
> computationally cheap) manner.
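The kind of representation being described, nodes and links as plain
structs, can be sketched as follows (the type and field names here are the
editor's illustrative assumptions, not Novamente's actual types):

```python
from dataclasses import dataclass

# A minimal sketch of representing AGI "atoms" -- nodes and links -- as
# plain digital structures.  Names are illustrative assumptions only.
@dataclass
class Node:
    name: str
    activation: float = 0.0

@dataclass
class Link:
    kind: str            # e.g. "generalization" or "composition"
    source: Node
    target: Node
    weight: float = 1.0

# A complex NL-level concept like "Iraq" would then be not one atom but a
# large subgraph of such nodes and links.
iraq = Node("Iraq")
country = Node("country")
links = [Link("generalization", iraq, country, weight=0.9)]
print(links[0].kind, links[0].weight)  # generalization 0.9
```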
> 
>       My approach would not involve anything as self-defeating as using a
> representation that has such a convoluted non-linear temporal causality as
> that in the Game of Life, as your quotation suggests.  I have designed my
> system to largely avoid the unruliness of complexity whenever possible.
> 
>       Take Hecht-Neilsen's confabulation.  It uses millions of inferences
> for each of the multiple words and phrases it selects when it generates an
> NL sentence.  But unless his papers are dishonest, it does them in an
> overall manner that is amazingly orderly, despite the underlying
> complexity.
> 
>       Would such computation be "irreducibly complex"?  Very arguably, by
> the Wolfram definition, it would be.  Would there be a "global-local
> disconnect"?  It depends on the definition.  The conceptual model of how
> the system works is relatively simple, but the actual
> inference-by-inference computation would be very difficult for a human to
> follow at a detailed level.  But what is clear is that such a system was
> built without having to first research the global-local disconnect in any
> great depth, as you have suggested is necessary.
> 
>       Similarly, although the computation in a Novamente-type AGI
> architecture would be much more complex than in Hecht-Neilsen's
> confabulation, it would share certain important similarities.  And
> although the complexity issues in appropriately controlling the
> inferencing of a human-level Novamente-type machine will be challenging,
> it is far from clear that such a design will require substantial advances
> in the understanding of the global-local disconnect.
> 
>       I am confident that valuable (though far less than human-level)
> computation can be done in a Novamente-type system with relatively simple
> control mechanisms.  So I think it is worth designing such Novamente-type
> systems and saving the fine-tuning of the inference control system until
> we have systems to test such control systems on.  And I think it is best
> to save whatever study of complexity may be needed to get such control
> systems to operate relatively optimally, in a dynamic manner, until we
> actually have initial versions of such control systems up and running, so
> that we have a better idea about what complexity issues we are really
> dealing with.
> 
>       I think this makes much more sense than spending a lot of time now
> exploring the, it would seem to me, extremely large space of possible
> global-local disconnects.
> 
> Ed Porter
> 
> -----Original Message-----
> From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, December 05, 2007 10:41 AM
> To: agi@v2.listbox.com
> Subject: Re: [agi] None of you seem to be able ...
> 
> Ed Porter wrote:
>> RICHARD LOOSEMORE====> There is a high prima facie *risk* that
>> intelligence involves a significant amount of irreducibility (some of
>> the most crucial characteristics of a complete intelligence would, in
>> any other system, cause the behavior to show a global-local disconnect),
>>
>>
>> ED PORTER=====> Richard, "prima facie" means obvious on its face.  The
>> above statement and those that followed it below may be obvious to you,
>> but it is not obvious to a lot of us, and at least I have not seen
>> (perhaps because of my own ignorance, but perhaps not) any evidence that
>> it is obvious.  Apparently Ben also does not find your position to be
>> obvious, and Ben is no dummy.
>>
>> Richard, did you ever just consider that it might be "turtles all the
>> way down", and by that I mean experiential patterns, such as those that
>> could be represented by Novamente atoms (nodes and links) in a gen/comp
>> hierarchy "all the way down".  In such a system each level is quite
>> naturally derived from the levels below it by learning from experience.
>> There is a lot of dynamic activity, but much of it is quite orderly,
>> like that in Hecht-Neilsen's Confabulation.  There is no reason why
>> there has to be a "GLOBAL-LOCAL DISCONNECT" of the type you envision,
>> i.e., one that is totally impossible to architect in terms of until one
>> totally explores global-local disconnect space (just think how large an
>> exploration space that might be).
>>
>> So if you have prima facie evidence to support your claim (other than
>> your paper, which I read, and which does not meet that standard
> 
> Ed,
> 
> Could you please summarize for me what your understanding is of my claim 
> for the "prima facie" evidence (that I gave in that paper), and then, if 
> you would, please explain where you believe the claim goes wrong.
> 
> With that level of specificity, we can discuss it.
> 
> Many thanks,
> 
> 
> 
> Richard Loosemore
> 
> 
> 
> ), then present it.  If
>> you make me eat my words you will have taught me something sufficiently
>> valuable that I will relish the experience.
> 
> 
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
> 
