Ed,

Your comments on "A New Kind of Science" are interesting...

> And the reference to 'a new kind of science' is, in fact, to Stephen
> Wolfram's most recent 'magnum opus' of over 1000 pages by the same name, "A
> New Kind of Science".

Some of you may have seen my review of this book, which appeared in the
June issue of Extropy magazine:

http://www.extropy.org/ideas/journal/current/2002-06-01.html
 
A terrifying number of reviews of the book are collected here:

www.math.usf.edu/~eclark/ANKOS_reviews.html

> The thoughts and findings from the book seem rather startling for an 'AGI
> scientist' given 'a new kind of science'.  These results are captured in
> Wolfram's Principle of Computational Equivalence paraphrased as:
> 1. All the systems in nature follow computable rules. (strong AI)
> 2. All systems that reach the fundamental upper bound to their complexity,
> namely Turing's halting problem, are equivalent.
> 3. Almost all systems that are not obviously weak reach the bound and are
> thus equivalent to the halting problem.

Right.  So the main claim is that nearly all complex systems are
implicitly universal computers...
 
And my answer is: Probably ... but so what?  Different universal
computers behave totally differently in terms of what they can compute
within fixed space and time resource bounds.  And real-world
intelligence is all about what can be computed within fixed space and
time resource bounds.
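
To make that concrete: Rule 110, the elementary CA whose universality
(a result due to Matthew Cook) is a centerpiece of the book, can in
principle compute anything -- but the known embeddings of a Turing
machine into it carry a huge slowdown, so within any fixed budget of
cells and steps it computes vastly less than an ordinary computer.
Here's a toy simulator (just a sketch; the lattice width and step
budget are arbitrary choices of mine):

    # Rule 110: each cell's next state is a fixed function of its
    # 3-cell neighborhood; the rule number 110 encodes the lookup table.
    RULE = 110

    def step(cells):
        """One synchronous update of a cyclic row of 0/1 cells."""
        n = len(cells)
        return [(RULE >> (4 * cells[(i - 1) % n]
                          + 2 * cells[i]
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 40 + [1] + [0] * 40      # one live cell in the middle
    for _ in range(20):                  # a fixed time budget: 20 steps
        print("".join(".#"[c] for c in row))
        row = step(row)

Universality tells you the lookup table is rich enough to compute
anything; it tells you nothing about what fits inside those 20 steps.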

Given unbounded space and time resources, AI is a trivial
problem.  Many have stated this informally (as I did in '93 in my book
The Structure of Intelligence); Solomonoff proved it one way in his
classic work on algorithmic information theory, and Marcus Hutter proved
it even more directly....

Since his Principle of Computational Equivalence says nothing about the
average-case space and time complexity of various computations using
various complex systems, it is essentially vacuous from the point of
view of AGI.

> Wolfram's Principle of Computational Equivalence suggests that theoretical
> approaches, and perhaps even experimental approaches, to science vis-a-vis
> attempts to formulate science in terms of traditional mathematics fall
> short of capturing all the richness of the complex world.  What is needed is
> 'a new kind of science'.  And that 'a new kind of science' can be achieved
> through the use of algorithmic models and experimentation the likes of which
> he studies.

What we need for AGI is a pragmatic understanding of the dynamical
behavior of certain types of systems (AGI systems) in certain types of
environments.  This type of understanding is not ruled out by Wolfram's
Principle, fortunately... we are not seeking a completely general
understanding of all complex systems, which IS ruled out by algorithmic
information theory (which implies that, given the finite size of our
brains, we can't fully understand systems whose algorithmic information
content exceeds that of our brains).

Whether we can achieve the needed understanding via mathematical
theorem-proving is not yet clear.  It hasn't been achieved yet via ANY
mechanism -- experimental, mathematical, or divine inspiration ;-)
 
I share some of Wolfram's skepticism regarding theoretical math's
ability to deal with very complex systems like AGIs.  And yet on long
airplane flights I find myself doodling equations in a notebook, trying
to come up with the novel math theory that will allow us to prove such
theorems after all....

> If you take Steve's "A New Kind of Science" at face value...and I believe
> Steve is well worth considering since he is a very serious, intelligent
> scientist ..., you are left with some rather startling implications for an
> 'AGI scientist' that, at the most fundamental level, is built in silico and
> cognates digitally through algorithms.
> 
> ...AGI design...hmm, I wonder what Steve is up to these days?

The sections on AI and cognition in Wolfram's book are among the
weakest, sketchiest, least plausible ones.  He clearly spent 50 times as
much effort on the portions dealing with his speculative physics
theories.  The odds that he's seriously working on anything related to
AGI are very small, I feel.

I agree that building an AGI and learning about its dynamics through
experimentation is a valid course.  It's what I'm doing!  But I'm not
ready to dismiss the possibility of fundamental math progress as readily
as Wolfram is.

A working AGI would be a huge advance over current AI systems.  A
useful math theory of complex systems would be a huge advance over
current math.  I am more confident in the former breakthrough than the
latter, but consider both to be real possibilities...

My general idea of what a math theory of complex systems might look
like is a "theory of patterns", as I've sketched very loosely in some prior
publications.  But I have not proved any deep theorems about the theory
of patterns ... it's hard.  A breakthrough is needed... maybe Wolfram is
right and it will never come... I dunno..
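
The intuition, very roughly: call something a pattern in X if it is a
representation of X as something simpler.  As a crude stand-in -- my
loose gloss here, using off-the-shelf compression rather than any
formal definition from the earlier work:

    import os
    import zlib

    def pattern_intensity(x: bytes) -> float:
        """1 - |compressed|/|original|: how much 'pattern' zlib finds."""
        if not x:
            return 0.0
        return max(0.0, 1.0 - len(zlib.compress(x)) / len(x))

    print(pattern_intensity(b"abcabcabc" * 50))    # repetitive -> near 1
    print(pattern_intensity(os.urandom(450)))      # random -> near 0

The hard part, and where the theorems would have to live, is relating
the patterns in a system to the patterns in its behavior over time.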

On the other hand, Wolfram seems to think that insights about AGI could
come from playing with simple CA-ish systems.  That I really doubt.  The
patterns that are simply computable with CA-ish systems are not the ones
that are simply computable with complex multi-component AGI systems. 
His confidence in the ability of simple CA-ish models to model any kind
of system is tied in with his lack of respect for the space and time
resource bound issue.

-- Ben Goertzel


