On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote:
> J Storrs Hall, PhD wrote:
> > I find your argument quotidian and lacking in depth. ...

> What you said above was pure, unalloyed bullshit:  an exquisite cocktail 
> of complete technical ignorance, patronizing insults and breathtaking 
> arrogance. ...

I find this argument lacking in depth, as well.

Actually, much of your paper is right. What I said was that I've heard it all 
before (that's what "quotidian" means) and others have taken it farther than 
you have.

You write (proceedings p. 161) "The term 'complex system' is used to describe 
PRECISELY those cases where the global behavior of the system shows 
interesting regularities, and is not completely random, but where the nature 
of the interaction of the components is such that we would normally expect 
the consequences of those interactions to be beyond the reach of analytic 
solutions." (emphasis added)

But of course even a general fifth-degree polynomial is beyond the reach of an 
analytic solution (Abel and Ruffini proved as much), as is the 3-body problem 
in Newtonian mechanics. And indeed the orbit of Pluto has been shown to be 
chaotic. But we can still predict with great confidence when something as 
finicky as a solar eclipse will occur thousands of years from now. So being 
beyond analytic solution does not mean unpredictable in many, indeed most, 
practical cases.

We've spent five centuries learning how to characterize, find the regularities 
in, and make predictions about, systems that are, in your precise definition, 
complex. We call this science. It is not about analytic solutions, though 
those are always nice, but about testable hypotheses in whatever form stated. 
Nowadays, these often come in the form of computational models.

You then talk about the "Global-Local Disconnect" as if that were some gulf 
unbridgeable in principle the instant we find a system is complex. But that 
contradicts the fact that science works -- we can understand a world of 
bouncing molecules and sticky atoms in terms of pressure and flammability. 
Science has produced a large number of levels of explanation, many of them 
causally related, and will continue doing so. But there is not (and never 
will be) any overall closed-form analytic solution.

The physical world is, in your and Wolfram's words, "computationally 
irreducible". But computational irreducibility is a johnny-come-lately 
retread of a pair of very key ideas, Gödel incompleteness and Turing's 
halting problem, which form the basis of much of 20th-century mathematics, 
including computer science. It is PROVABLE 
that any system that is computationally universal cannot be predicted, in 
general, except by simulating its computation. This was known well before 
Wolfram came along. He didn't say diddley-squat that was new in ANKoS.

So, any system that is computationally universal, i.e. Turing-complete, i.e. 
capable of modelling a Universal Turing machine, or a Post production system, 
or the lambda calculus, or partial recursive functions, is PROVABLY immune to 
analytic solution. And yet, guess what? We have computer *science*, which has 
found many regularities and predictabilities, much as physics has found 
things like Lagrange points that are stable solutions to special cases of the 
3-body problem.
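That PROVABLE claim is just Turing's diagonal argument, and it fits in a few 
lines of Python. The `halts` oracle here is hypothetical, a stand-in of my own 
for illustration, since no correct one can exist:

```python
# Sketch of Turing's diagonal argument: no total procedure `halts`
# can correctly decide halting for arbitrary programs. `halts` is a
# hypothetical oracle; whatever it answers about `trouble`, `trouble`
# does the opposite, so no correct implementation is possible.

def make_trouble(halts):
    def trouble():
        if halts(trouble):      # ask the oracle about ourselves...
            while True:         # ...loop forever if it said "halts"
                pass
        return "halted"         # ...halt at once if it said "loops"
    return trouble

for answer in (True, False):
    trouble = make_trouble(lambda f: answer)
    if answer:
        # Oracle claims trouble halts -> trouble would loop: oracle wrong.
        # (We don't call it; that branch never returns.)
        print("oracle says 'halts' -> trouble loops: contradiction")
    else:
        assert trouble() == "halted"
        print("oracle says 'loops' -> trouble halts: contradiction")
```

Either answer the oracle gives is refuted, which is the whole proof.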

One common poster child of "complex systems" has been the fractal beauty of 
the Mandelbrot set, seemingly generating endless complexity from a simple 
formula. Well duh -- it's a recursive function.
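For the record, the entire engine of that "endless complexity" is one 
iterated map (a minimal sketch; the 100-step cutoff is my own arbitrary 
choice):

```python
# The Mandelbrot iteration: c is in the set iff z -> z^2 + c,
# starting from z = 0, stays bounded (|z| > 2 is the standard
# escape test).

def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False   # escaped: definitely outside the set
    return True            # still bounded after max_iter steps

print(in_mandelbrot(0 + 0j))    # True: 0 is a fixed point
print(in_mandelbrot(-1 + 0j))   # True: period-2 cycle 0 -> -1 -> 0
print(in_mandelbrot(1 + 0j))    # False: 0, 1, 2, 5, 26, ... escapes
```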

I find it very odd that you spend more than a page on Conway's Life, talking 
about ways to characterize it beyond the "generative capacity" -- and yet you 
never mention that Life is Turing-complete. It certainly isn't obvious; it 
was an open question for a couple of decades, I think; but it has been shown 
able to model a Universal Turing machine. Once that was proven, there were 
suddenly a LOT of things known about Life that weren't known before. 
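For concreteness, here is one Life step and its simplest regularity, the 
period-2 blinker (a toy sketch of mine; the actual Turing-completeness 
construction is of course vastly larger):

```python
# One step of Conway's Life on a sparse set of live cells. The
# three-cell "blinker" oscillates with period 2 -- the simplest
# nontrivial regularity in a system that is in fact Turing-complete.
from collections import Counter

def step(live):
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}       # horizontal bar
print(step(blinker))                      # vertical bar
print(step(step(blinker)) == blinker)     # True: period 2
```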

Your next point, which you call Hidden Complexity, is very much like the 
phenomenon I call Formalist Float (B.AI 89-101). Which means, oddly enough, 
that we've come to very much the same conclusion about what the problem with 
AI to date has been -- except that I don't buy your GLD at all, except 
inasmuch as it says that science is hard.

Okay, so much for science. On to engineering, or how to build an AGI. You 
point out that connectionism, for example, has tended to study mathematically 
tractable systems, leading its practitioners to miss a key capability. But 
that's exactly 
to be expected if they build systems that are not computationally universal, 
incapable of self-reference and recursion -- and that has been said long and 
loud in the AI community since Minsky published Perceptrons, even before the 
connectionist resurgence in the 80's.

You propose to take core chunks of a complex system and test them empirically, 
finding scientific characterizations of their behavior that could be used in 
a larger system. Great! This is just what Hugo de Garis has been saying, in 
detail and for years and years, with his schemes for evolving Hopfield nets.
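For readers who haven't met one, here is a minimal Hopfield net: vanilla 
Hebbian storage and recall, not de Garis's evolved variants -- the pattern and 
sizes are my own toy choices:

```python
# A tiny Hopfield net: store one +/-1 pattern via the Hebbian outer
# product, then recall it from a corrupted probe by repeated
# synchronous sign updates.
import numpy as np

def train(patterns):
    """Hebbian weights for an array of +/-1 row patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / n

def recall(W, probe, steps=10):
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1               # break ties deterministically
    return s

pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1                      # corrupt one bit
print((recall(W, noisy) == pattern).all())   # True: pattern restored
```

The net settles back to the stored pattern from the corrupted probe, which is 
exactly the kind of empirically characterizable behavior a larger system could 
build on.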

One final point, about the difference between science and engineering: To 
bridge the GLD, you can form a patchwork of theories that in time comes to 
cover the phenomena reasonably well -- that's science. Or you can take 
specific parts of the landscape that you do understand well and use them to 
build a working machine, and don't care what happens elsewhere in the space 
because you're not operating there. That's engineering. Scruffy AI was 
science -- build lots of different stuff and see what happens, forming new 
theories and gaining new knowledge. Neat AI is engineering -- work in 
well-characterized, mathematically tractable parts of the space. 

Both are necessary. Let's get on with the job. It might save some time not to 
reinvent all the wheels just so that we can stick our own names on them, 
though.

Josh

-----
This list is sponsored by AGIRI: http://www.agiri.org/email