[agi] A New Kind of Science

2003-01-11 Thread Ben Goertzel

Ed,

Your comments on A New Kind of Science are interesting...

 And the reference to 'a new kind of science' is, in fact, to Stephen
 Wolfram's most recent 'magnum opus' of over 1000 pages by the same name, A
 New Kind of Science.

Some of you may have seen my review of this book, which appeared in the
June issue of the Extropy magazine:

http://www.extropy.org/ideas/journal/current/2002-06-01.html
 
A terrifying number of reviews of the book are collected here:

www.math.usf.edu/~eclark/ANKOS_reviews.html

 The thoughts and findings in the book seem rather startling for an 'AGI
 scientist'.  These results are captured in Wolfram's Principle of
 Computational Equivalence, paraphrased as:
 1. All the systems in nature follow computable rules. (strong AI)
 2. All systems that reach the fundamental upper bound to their complexity,
 namely Turing's halting problem, are equivalent.
 3. Almost all systems that are not obviously weak reach the bound and are
 thus equivalent to the halting problem.

Right.  So the main claim is that nearly all complex systems are
implicitly universal computers...
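The claim can be made concrete with the simplest systems in the book. Here is a minimal sketch (my own toy code, nothing from Wolfram) of an elementary cellular automaton using the standard rule-number encoding; rule 110 is the elementary rule that turns out to support universal computation.

```python
# Elementary cellular automaton, Wolfram's rule-number encoding:
# the bit at position (left*4 + center*2 + right) of the rule number
# gives the next state of the center cell.

def step(cells, rule=110):
    """Apply one synchronous update with wrapping edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=32, rule=110):
    """Evolve from a single live cell; return the full history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running this prints the characteristic irregular triangle of rule 110 -- simple local rule, complex global behavior, which is exactly the phenomenon Wolfram builds the book around.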
 
And my answer is: Probably ... but so what?  Different universal
computers behave totally differently in terms of what they can compute
within fixed space and time resource bounds.  And real-world
intelligence is all about what can be computed within fixed space and
time resource bounds.

Given unbounded space and time resources, AI is a trivial problem.
Many have stated this informally (as I did in '93 in my book
The Structure of Intelligence); Solomonoff proved it one way in his
classic work on algorithmic information theory, and Marcus Hutter has
proved it even more directly.
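To illustrate the point with a toy construction of my own (not Solomonoff's or Hutter's actual formalism): with no resource bounds, induction reduces to enumerating candidate programs shortest-first and keeping the first one consistent with the data. The tiny expression language below stands in for a real programming language.

```python
# Shortest-first program search over a toy expression language.
# With unbounded time, this "solves" induction: it always finds a
# minimal-size hypothesis fitting the examples, if one exists.

from itertools import count, product

PRIMS = ("x", "1", "2")   # terminals
OPS = ("+", "*", "-")     # binary operators

def programs():
    """Yield arithmetic expressions over x in order of increasing size."""
    for size in count(1):
        for terms in product(PRIMS, repeat=size):
            for ops in product(OPS, repeat=size - 1):
                expr = terms[0]
                for op, t in zip(ops, terms[1:]):
                    expr = f"({expr}{op}{t})"
                yield expr

def induce(examples):
    """Return the first expression consistent with all (input, output) pairs."""
    for expr in programs():
        f = eval(f"lambda x: {expr}")
        if all(f(x) == y for x, y in examples):
            return expr

print(induce([(1, 3), (2, 5), (3, 7)]))  # an expression equivalent to 2*x + 1
```

The catch, of course, is the paragraph above: the search is exponential in expression size, so everything interesting about real AGI is hidden in the resource bounds this sketch ignores.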

Since his Principle of Computational Equivalence says nothing about the
average-case space and time complexity of various computations carried
out by various complex systems, it is essentially vacuous from the point
of view of AGI.

 Wolfram's Principle of Computational Equivalence suggests that
 theoretical, and perhaps even experimental, approaches to science that
 attempt to formulate it in terms of traditional mathematics fall short
 of capturing all the richness of the complex world.  What is needed is
 'a new kind of science', and that new kind of science can be achieved
 through the use of algorithmic models and experimentation of the kind
 he studies.

What we need for AGI is a pragmatic understanding of the dynamical
behavior of certain types of systems (AGI systems) in certain types of
environments.  This type of understanding is not ruled out by Wolfram's
Principle, fortunately... we are not seeking a completely general
understanding of all complex systems, which IS ruled out by algorithmic
information theory (which shows that, given the finite size of our
brains, we can't understand systems of greater algorithmic information
than our brains).
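That AIT argument can be stated compactly (a standard Kolmogorov-complexity bound, nothing specific to Wolfram; the constant c and the 'brain description length' B are schematic):

```latex
% If "understanding" a system S means holding a program p that
% reproduces S's behavior, then by definition of Kolmogorov complexity
% K(S) <= |p| + c for some fixed constant c, so |p| >= K(S) - c.
% A brain whose own description length is B cannot hold such a p
% whenever K(S) > B + c, since that would force |p| > B.
K(S) \le |p| + c
\;\;\Longrightarrow\;\;
|p| \ge K(S) - c > B
\quad \text{whenever } K(S) > B + c .
```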

Whether we can achieve the needed understanding via mathematical
theorem-proving is not yet clear.  It hasn't been achieved yet via ANY
mechanism -- experimental, mathematical, or divine inspiration ;-)
 
I share some of Wolfram's skepticism regarding theoretical math's
ability to deal with very complex systems like AGI's.  And yet on long
airplane flights I find myself doodling equations in a notebook, trying
to come up with the novel math theory that will allow us to prove such
theorems after all

 If you take Steve's A New Kind of Science at face value...and I believe
 Steve is well worth considering, since he is a very serious, intelligent
 scientist..., you are left with some rather startling implications for an
 'AGI scientist' that, at the most fundamental level, is built in silico and
 cognizes digitally through algorithms.
 
 ...AGI design...hmm, I wonder what Steve is up to these days?

The sections on AI and cognition in Wolfram's book are among the
weakest, sketchiest, least plausible ones.  He clearly spent 50 times as
much effort on the portions dealing with his speculative physics
theories.  The odds that he's seriously working on anything related to
AGI are very small, I feel.

I agree that building an AGI and learning about its dynamics through
experimentation is a valid course.  It's what I'm doing!  But I'm not
ready to dismiss the possibility of fundamental math progress as readily
as Wolfram is.

A working AGI would be a huge advance over current AI systems.  A
useful math theory of complex systems would be a huge advance over
current math.  I am more confident in the former breakthrough than the
latter, but consider both to be real possibilities...

My general idea of what a math theory of complex systems would look like
is a theory of patterns, as I've sketched very loosely in some prior
publications.  But I have not proved any deep theorems about the theory
of patterns ... it's hard.  A breakthrough is needed... maybe Wolfram is
right and it will never come... I dunno.

On the other hand, Wolfram 

Re: [agi] A New Kind of Science

2003-01-11 Thread Ben Goertzel
At 

www.santafe.edu/~shalizi/notebooks/cellular-automata.html

Wolfram's book is reviewed as "a rare blend of monster raving egomania
and utter batshit insanity" ... (a phrase I would like to have
emblazoned on my gravestone, except that I don't plan on dying, and if I
do die I plan on being frozen rather than buried)

The context is:


* Dis-recommended:
 Stephen Wolfram, A New Kind of Science [This is almost, but not quite,
a case for the immortal ``What is true is not new, and what is new is
not true''. The one new, true thing is a proof that the elementary CA
rule 110 can support universal, Turing-complete computation. (One of
Wolfram's earlier books states that such a thing is obviously
impossible.) This however was shown not by Wolfram but by Matthew Cook
(this is the ``technical content and proofs'' for which Wolfram
acknowledges Cook, in six point type, in his frontmatter). In any case
it cannot bear the weight Wolfram places on it. Watch This Space for a
detailed critique of this book, a rare blend of monster raving egomania
and utter batshit insanity.] 


-- Ben

