I criticised your original remarks because they demonstrated a complete lack of understanding of what complex systems actually are. You said things about complex systems that were, quite frankly, ridiculous: Turing-machine equivalence, for example, has nothing to do with this.

In your lengthier criticism, below, you go on to make many more statements that are confused, and you omit key pieces of the puzzle that I went to great lengths to explain in my paper. In short, you misrepresent what I said and what others have said, and you show signs that you did not read the paper, but merely skimmed it.

I will deal with your points one at a time.


J Storrs Hall, PhD wrote:
On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote:
J Storrs Hall, PhD wrote:
I find your argument quotidian and lacking in depth. ...

What you said above was pure, unalloyed bullshit: an exquisite cocktail of complete technical ignorance, patronizing insults and breathtaking arrogance. ...

I find this argument lacking in depth, as well.

Actually, much of your paper is right. What I said was that I've heard it all before (that's what "quotidian" means) and others have taken it farther than you have.

You write (proceedings p. 161) "The term 'complex system' is used to describe PRECISELY those cases where the global behavior of the system shows interesting regularities, and is not completely random, but where the nature of the interaction of the components is such that we would normally expect the consequences of those interactions to be beyond the reach of analytic solutions." (emphasis added)

But of course even a 6th-degree polynomial is beyond the reach of an analytic solution, as is the 3-body problem in Newtonian mechanics. And indeed the orbit of Pluto has been shown to be chaotic. But we can still predict with great confidence when something as finicky as a solar eclipse will occur thousands of years from now. So being beyond analytic solution does not mean unpredictable in many, indeed most, practical cases.

There are different degrees of complexity in systems: there is no black and white distinction between "pure complex systems" on the one hand and "non-complex" systems on the other.

I made this point in a number of ways in my paper, most especially by talking about the "degree of complexity" to be expected in intelligent systems, and whether or not they have a "significant amount" of complexity. At no point do I try to claim, or imply, that a system that possesses ANY degree of complexity is automatically banged over into the same category as the most extreme complex systems. In fact, I explicitly deny it:

    "One of the main arguments advanced in this paper is
     that complexity can be present in AI systems in a subtle way.
     This is in contrast to the widespread notion that the opposite
     is true: that those advocating the idea that intelligence involves
     complexity are trying to assert that intelligent behavior should
     be a floridly emergent property of systems in which there is no
     relationship whatsoever between the system components and the
     overall behavior.
     While there may be some who advocate such an extreme-emergence
     agenda, that is certainly not what is proposed here. It is
     simply not true, in general, that complexity needs to make
     itself felt in a dramatic way. Specifically, what is claimed
     here is that complexity can be quiet and unobtrusive, while
     at the same time having a significant impact on the overall
     behavior of an intelligent system."


In your criticism, you misrepresent my argument as a claim that IF any system has the smallest amount of complexity in its makeup, THEN it should be as totally unpredictable as the most extreme form of complex system. I will show, below, how you make this misrepresentation again and again.

First, you talk about 6th-degree polynomials. These are not "systems" in any meaningful sense of the word; they are functions. The fact that their roots cannot, in general, be written down analytically is a statement about algebra, not about the dynamics of interacting components, so this is just a red herring.

Second, you mention the 3-body problem in Newtonian mechanics. Although I did not use it as such in the paper, this is my poster child of a partially complex system. I often cite the case of planetary system dynamics as an example of a real physical system that is PARTIALLY complex, because it is mostly governed by regular dynamics (which lets us predict solar eclipses precisely), but also has various minor aspects that are complex, such as Pluto's orbit, braiding effects in planetary rings, and so on.

This fits my definition of complexity (which you quote above) perfectly: there do exist "interesting regularities" in the global behavior of orbiting bodies (e.g. the presence of ring systems, and the presence of braiding effects in those ring systems) that appear to be beyond the reach of analytic explanation.

But you cite this as an example of something that contradicts my argument: it could only contradict it if I were arguing that the smallest amount of complexity should give rise to completely unpredictable behavior in the whole system. This is a distortion of my argument.
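
Since you bring the 3-body problem up, let me make the distinction concrete with a small sketch of my own (nothing here comes from the paper; the units, masses and initial conditions are invented purely for illustration). Prediction in such a system proceeds by stepping the local interaction rule forward in time, not by consulting a closed-form solution, and you probe how sensitive -- how "complex" -- a given configuration is by, for example, nudging the initial conditions and watching what happens:

    # Illustrative sketch only: a minimal Newtonian N-body integrator.
    # Prediction here is simulation of the local rule, not an analytic formula.
    import numpy as np

    G = 1.0  # gravitational constant in arbitrary units

    def accelerations(pos, masses):
        """Pairwise Newtonian gravitational accelerations on each body."""
        acc = np.zeros_like(pos)
        for i in range(len(masses)):
            for j in range(len(masses)):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
        return acc

    def simulate(pos, vel, masses, dt=0.001, steps=20000):
        """Leapfrog (kick-drift-kick) integration of the system."""
        pos, vel = pos.copy(), vel.copy()
        acc = accelerations(pos, masses)
        for _ in range(steps):
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos, masses)
            vel += 0.5 * dt * acc
        return pos

    # A heavy "sun" and two light "planets": mostly regular dynamics.
    masses = np.array([1.0, 1e-3, 1e-4])
    pos = np.array([[0.0, 0.0], [1.0, 0.0], [1.5, 0.0]])
    vel = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 0.8]])

    baseline = simulate(pos, vel, masses)
    nudged = simulate(pos + 1e-9, vel, masses)  # tiny perturbation of the start
    print("final divergence:", np.abs(baseline - nudged).max())

In a near-Keplerian configuration like this one the divergence typically stays tiny over a modest run; push the bodies into a strongly interacting or resonant configuration and it grows rapidly. That is exactly the "partially complex" picture I am describing: mostly regular, with complex corners.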

You then go on to make an unjustified generalization: "So being beyond analytic solution does not mean unpredictable in many, indeed most, practical cases."

Once again, I never said that a small dose of complexity would mean that the entire system is unpredictable.

We've spent five centuries learning how to characterize, find the regularities in, and make predictions about, systems that are, in your precise definition, complex. We call this science. It is not about analytic solutions, though those are always nice, but about testable hypotheses in whatever form stated. Nowadays, these often come in the form of computational models.

This is the same distortion of my argument: those five centuries of science have been overwhelmingly devoted to systems in which the degree of complexity is small. My "precise definition" of complexity covers all of these cases: I would expect all of them to show residual evidence of unpredictability in their global behavior. And they do.

So once again my definition of complexity is perfectly consistent -- so long as it is not distorted to mean "all systems with any degree of complexity must show massive amounts of global unpredictability".

You then talk about the "Global-Local Disconnect" as if that were some gulf unbridgeable in principle the instant we find a system is complex. But that contradicts the fact that science works -- we can understand a world of bouncing molecules and sticky atoms in terms of pressure and flammability. Science has produced a large number of levels of explanation, many of them causally related, and will continue doing so. But there is not (and never will be) any overall closed-form analytic solution.

Once again, the same distortion. You accuse me of claiming that there is "some gulf unbridgeable in principle the instant we find a system is complex" when in fact I do nothing of the sort. I deliberately and clearly reject that idea.


The physical world is, in your and Wolfram's words, "computationally irreducible". But computational irreducibility is a johnny-come-lately retread of a very key idea, Gödel incompleteness, that forms the basis of much of 20th-century mathematics, including computer science. It is PROVABLE that any system that is computationally universal cannot be predicted, in general, except by simulating its computation. This was known well before Wolfram came along. He didn't say diddley-squat that was new in ANKoS.

This shows that you paid no attention to the actual argument in either Wolfram's book or my paper: Gödel incompleteness is a completely separate concept, having only a family resemblance to the concept that I (and Wolfram) have discussed.

Gödel incompleteness is about *provability* of theorems within formal mathematical systems. My argument is about finding SCIENTIFIC EXPLANATIONS of empirically observable phenomena.

Nobody in their right mind would claim that there is an exact equivalence between the question of whether a scientific explanation exists for a particular system, and the question of whether a theorem can be proved within a formal system. Scientists do not "prove" their explanations about things in the world .... heck, do I really have to labor this point any more? This kind of confusion is too ridiculous for words.

This is the reason why your original remarks deserved to be called 'bullshit': this kind of confusion would be forgivable in an undergraduate essay, and would have been forgivable in our debate here, except that it was used as a weapon in a contemptuous, sweeping dismissal of my argument.


So, any system that is computationally universal, i.e. Turing-complete, i.e. capable of modelling a Universal Turing machine, or a Post production system, or the lambda calculus, or partial recursive functions, is PROVABLY immune to analytic solution. And yet, guess what? we have computer *science*, which has found many regularities and predictabilities, much as physics has found things like Lagrange points that are stable solutions to special cases of the 3-body problem.

Completely irrelevant.

I talked about scientific explanations, and whether systems could be explained by showing how the global behavior results from the particular local mechanisms that drive the system.

According to my actual argument (and not the distortion of it that you are pushing), computer systems CAN easily be made to have predictable global behavior because you can write a program with no complexity in it, and then, BOOM! the overall behavior is a nice, mechanism-like consequence of the code in the program.

Or, you can program the computer to have tangled interactions down at the local level.... and then (well, what a surprise?!) the global behavior becomes complex.

How does the fact of Turing-completeness relate to these two facts? It doesn't. Your entire paragraph above has diddly-squat to do with either of them.
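
If you want that contrast spelled out, here is a toy example of my own (not from the paper, and deliberately trivial). The first program has no tangled interactions, so its global behavior is a transparent, mechanism-like consequence of the code. The second -- Wolfram's elementary cellular automaton rule 30 -- has a local rule that is just as simple, but the interactions are tangled, and in practice the only way to know the global pattern is to run it:

    # Toy contrast, invented for illustration.
    def running_total(values):
        """No complexity: the global behavior is, by inspection, the sum."""
        total = 0
        for v in values:
            total += v
        return total

    def rule30(width=64, steps=32):
        """Tangled local interactions: each cell depends on three neighbors."""
        row = [0] * width
        row[width // 2] = 1                  # single seed cell
        history = [row]
        for _ in range(steps):
            row = [row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
                   for i in range(width)]    # the rule 30 update
            history.append(row)
        return history

    print(running_total(range(10)))          # predictable without running: 45
    for row in rule30(steps=20):             # no comparable shortcut here
        print("".join("#" if c else "." for c in row))

Both programs run on the same universal computer, so Turing-completeness is orthogonal to the distinction I am drawing: what matters is whether the local interactions are tangled, not what the machine could in principle compute.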



One common poster child of "complex systems" has been the fractal beauty of the Mandelbrot set, seemingly generating endless complexity from a simple formula. Well duh -- it's a recursive function.

So what?

I find it very odd that you spend more than a page on Conway's Life, talking about ways to characterize it beyond the "generative capacity" -- and yet you never mention that Life is Turing-complete. It certainly isn't obvious; it was an open question for a couple of decades, I think; but it has been shown able to model a Universal Turing machine. Once that was proven, there were suddenly a LOT of things known about Life that weren't known before.

More complete irrelevance.

Failing to understand the actual point of the paper I wrote, you continue to amuse yourself by talking about the Turing-completeness of Conway's Life.
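
Since we are on the subject of Life, here is its entire local mechanism, sketched in a few lines of my own (grid size and starting pattern are arbitrary). The point I actually spent that page on is that nothing in this rule tells you, by inspection, that gliders, guns or oscillators will exist; those global regularities have to be discovered empirically, by running and studying the system, and the Turing-completeness result neither predicts nor explains them:

    # A minimal sketch of Conway's Life on a wrapped grid (mine, not the paper's).
    def life_step(grid):
        """One generation; grid is a list of lists of 0/1 with wraparound."""
        rows, cols = len(grid), len(grid[0])
        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                neighbors = sum(
                    grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)
                )
                # Live cell survives with 2 or 3 neighbors; dead cell is born with 3.
                new[r][c] = 1 if neighbors == 3 or (grid[r][c] and neighbors == 2) else 0
        return new

    # A glider: a global regularity you discover by running, not by reading the rule.
    grid = [[0] * 10 for _ in range(10)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r][c] = 1
    for _ in range(8):
        grid = life_step(grid)
    print(sum(map(sum, grid)))   # the glider's five cells persist, displaced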


Your next point, which you call Hidden Complexity, is very much like the phenomenon I call Formalist Float (B.AI 89-101). Which means, oddly enough, that we've come to very much the same conclusion about what the problem with AI to date has been -- except that I don't buy your GLD at all, except inasmuch as it says that science is hard.
Okay, so much for science. On to engineering, or how to build an AGI. You point out that connectionism, for example, has tended to study mathematically tractable systems, leading them to miss a key capability. But that's exactly to be expected if they build systems that are not computationally universal, incapable of self-reference and recursion -- and that has been said long and loud in the AI community since Minsky published Perceptrons, even before the connectionist resurgence in the 80's.

What on earth are you talking about? You are not showing any evidence of understanding WHY I made my comment about the connectionist community, but have once again departed on a side track about computational universality. You say much, here, that I could get drawn into, but I am not going to because it simply has nothing to do with the content of my paper.


You propose to take core chunks of complex system and test them empirically, finding scientific characterizations of their behavior that could be used in a larger system. Great! This is just what Hugo de Garis has been saying, in detail and for years and years, with his schemes for evolving Hopfield nets.

This is a joke, right? You read my description of how to proceed, and all you understood was "You propose to take core chunks of complex system and test them empirically, finding scientific characterizations of their behavior that could be used in a larger system"?

I said nothing of the sort, and my proposal has not the slightest resemblance to what Hugo de Garis is doing.


One final point, about the difference between science and engineering: To bridge the GLD, you can form a patchwork of theories that in time comes to cover the phenomena reasonably well -- that's science. Or you can take specific parts of the landscape that you do understand well and use them to build a working machine, and don't care what happens elsewhere in the space because you're not operating there. That's engineering. Scruffy AI was science -- build lots of different stuff and see what happens, forming new theories and gaining new knowledge. Neat AI is engineering -- work in well-characterized, mathematically tractable parts of the space. Both are necessary. Let's get on with the job. It might save some time not to reinvent all the wheels just so that we can stick our own names on them, though.

Fascinating.

The whole point of the paper was to explain exactly why this chunk of text you just produced CANNOT be made to work.

But not once have you shown the slightest sign that you understood the argument I actually presented.

You have distorted it, parodied it, insulted it, and flown off on tangents that have nothing to do with it ... but at no point have you demonstrated an ability to paraphrase the argument well enough to show you understand what I was trying to say, let alone explained why you disagree.

If you can paraphrase someone else's claim accurately, and THEN go on to analyze it and show why it is wrong, then you demonstrate that you understand.

If you can't do that (and so far, you have not), then the only conclusion I can come to is that you simply never understood it.

Even that, I could respond positively to, but your opening salvo was a blistering insult. That doesn't sit well.



Richard Loosemore
