RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ed Porter
Richard,

Despite your statement to the contrary --- despite your "FURY" --- I did get
your point.  Not everybody besides Richard Loosemore is stupid.

I understand there have been people making bold promises in AI for over 40
years, and most of those promises have been based on a gross underestimation
of the problem.  For example, in 1969 Minsky was claiming that, with the
minicomputers of the day, AI would surpass human intelligence within several
years.

But in 1970, after a year-long special study during my senior year at
Harvard, in which I worked through a long reading list Minsky gave me, I came
to the conclusion that Minsky's projection was ridiculous.  I believed
human-level thinking required deep experiential knowledge (now called
grounding), and I seriously doubted anybody could make a human-level AI
without hardware capable of storing many terabytes of memory and of accessing
significant portions of that memory multiple times a second -- a level of
hardware that is still not available, and that has only recently been
approximated at a cost of many tens of millions of dollars.
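
To put rough numbers on that requirement, here is a back-of-envelope sketch
in Python.  The particular figures (10 TB, 1% touched per sweep, 5 sweeps a
second) are illustrative assumptions of mine, not measurements:

    # Illustrative assumptions only: "many terabytes" taken as 10 TB,
    # "significant portion" as 1%, "multiple times a second" as 5 sweeps/s.
    store_bytes  = 10e12     # 10 TB of experiential memory
    fraction     = 0.01      # portion of the store touched per sweep
    sweeps_per_s = 5         # sweeps of the store per second

    bandwidth = store_bytes * fraction * sweeps_per_s
    print(bandwidth / 1e9, "GB/s")   # -> 500.0 GB/s

Even with these conservative numbers, the implied memory bandwidth is
hundreds of gigabytes per second -- which is why only recent
multi-million-dollar clusters come close.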

To date, I am unaware of anyone approaching AGI with the type of hardware
that I have felt, for much of the last 38 years, would be necessary for
human-level AGI.  So don't accuse me of being one of those who has been shown
to have made false AI promises: the hardware my predictions have been based
on has never been available to AI researchers.

Since 1970 I have thought that if multiple teams had really powerful
hardware of the type you can buy now for several million dollars (hardware
that would have cost 20 to 30 times as much just a decade ago), then even
though that hardware is not capable of human-level performance, it would
enable very rapid progress in AI over the following ten or twenty years.

But that was before I became aware of the advances in brain science and AI
that have been made in the last decade or two, advances that have radically
improved and clarified my understanding of the types of computational
architectures needed for various mind functions.

Now we actually have good ideas about how to address almost all of the known
functions of the mind that we would want an AGI to have.

For people like Ben, Joscha Bach, Sam Adams, myself, and multiple others, IT
IS NOT THAT WE --- as you claim --- "JUST HAVE THIS BELIEF THAT IT WILL
WORK."  --- We have much more.  WE HAVE REASONABLY GOOD EXPLANATIONS FOR HOW
TO PERFORM ALMOST ALL OF THE MENTAL FUNCTIONS OF THE HUMAN MIND THAT WE WANT
AGI'S TO HAVE.

It is not as if these explanations are totally nailed down, at least in my
mind.  (They may be much better nailed down in Ben's, Joscha Bach's, and Sam
Adams's.)  But I have a high-level idea of how each of them could be made to
work.  This is relatively new, at least for me.  They are complex,
multi-level arguments, so they cannot be conveyed briefly.  Ben has probably
done a better job of putting his ideas in writing, and his recent post in
this thread promises that he will shortly provide them in much more detail.

One example of the new reasons for confidence that we are learning how to
design AGI is the amazing success of the Serre-Poggio system described at
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf.
That paper shows the tremendous advances that have been made in automatically
learning hierarchical memory, and the power such memory provides in machine
perception.  This is not belief.  This is a largely automatically learned
system that works amazingly well for the rapid feedforward part of visual
object recognition.
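
To make the mechanism concrete, here is a minimal sketch, in Python/NumPy, of
the two alternating operations at the heart of that architecture: template
matching ("S" layers) and local max-pooling ("C" layers).  This is my own toy
simplification for illustration, not the authors' code:

    import numpy as np

    def s_layer(image, templates):
        """Template matching: Gaussian-tuned response of each learned
        template at each image location."""
        h, w = image.shape
        th, tw = templates[0].shape
        out = np.zeros((len(templates), h - th + 1, w - tw + 1))
        for k, t in enumerate(templates):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    patch = image[i:i + th, j:j + tw]
                    out[k, i, j] = np.exp(-np.sum((patch - t) ** 2))
        return out

    def c_layer(s_maps, pool=4):
        """Max-pooling over local neighborhoods: the step that buys
        tolerance to position and scale."""
        k, h, w = s_maps.shape
        out = np.zeros((k, h // pool, w // pool))
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                block = s_maps[:, i * pool:(i + 1) * pool,
                                  j * pool:(j + 1) * pool]
                out[:, i, j] = block.max(axis=(1, 2))
        return out

Stacking several S/C pairs, with the templates at each level learned from the
responses of the level below, gives the hierarchical, non-literal matching
the paper describes.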

Another reason for optimism is Hinton's new work, described in papers such as
"Modeling image patches with a directed hierarchy of Markov random fields"
by Simon Osindero and Geoffrey Hinton and in the Google Tech Talk at
http://www.youtube.com/watch?v=AyzOUbkUf3M.  Hinton has shown how to
automatically learn hierarchical neural nets that have 2000 hidden nodes in
one layer, 500 in the next, and 1000 in the top layer.  In the past it would
have been virtually impossible to train a neural net with so many hidden
nodes, but Hinton's new method allows rapid, largely automatic training of
such large networks, enabling, in the example shown, surprisingly good
handwritten-digit recognition.
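
The key idea is greedy, layer-by-layer pretraining of restricted Boltzmann
machines with contrastive divergence, so that each layer learns features of
the layer below without any global supervised signal.  A toy sketch of that
procedure (my own illustration in Python/NumPy; biases and many refinements
omitted):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_rbm(data, n_hidden, epochs=5, lr=0.1, seed=0):
        """One-step contrastive divergence (CD-1) for a binary RBM."""
        rng = np.random.default_rng(seed)
        W = 0.01 * rng.standard_normal((data.shape[1], n_hidden))
        for _ in range(epochs):
            for v0 in data:
                h0 = sigmoid(v0 @ W)                        # "wake" phase
                h_sample = (h0 > rng.random(n_hidden)).astype(float)
                v1 = sigmoid(h_sample @ W.T)                # reconstruction
                h1 = sigmoid(v1 @ W)
                W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
        return W

    def train_dbn(data, layer_sizes):
        """Greedy stacking: each RBM's hidden activities become the
        training data for the next layer, e.g. the [2000, 500, 1000]
        stack of layer sizes mentioned above."""
        weights, x = [], data
        for n_hidden in layer_sizes:
            W = train_rbm(x, n_hidden)
            weights.append(W)
            x = sigmoid(x @ W)
        return weights

No layer has to wait for error signals to propagate down from the top, which
is what makes training nets this large practical.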

Yet another example of the power of automatic learning is the impressive
success of Hecht-Nielsen's confabulation system in generating a second
sentence that reasonably follows from a first -- as if it had been written by
a human intelligence -- without any attempt to teach it the rules of grammar
or any explicit semantic knowledge.  The system learns from text corpora
alone.
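
In spirit -- and this is a drastically simplified sketch of my own, not
Hecht-Nielsen's actual architecture -- the mechanism amounts to accumulating
pairwise co-occurrence counts from raw text and then choosing, word by word,
whichever candidate the context words jointly give the most evidence for:

    from collections import defaultdict
    import math

    class Confabulator:
        """Toy confabulation: no grammar rules, no explicit semantics,
        only pairwise co-occurrence statistics learned from text."""
        def __init__(self):
            self.pair = defaultdict(int)    # (earlier, later) word-pair counts
            self.count = defaultdict(int)   # individual word counts

        def train(self, sentences):
            for words in sentences:
                for i, w in enumerate(words):
                    self.count[w] += 1
                    for later in words[i + 1:]:
                        self.pair[(w, later)] += 1

        def next_word(self, context):
            """Word maximizing summed log-evidence from each context word."""
            best, best_score = None, -math.inf
            for w in list(self.count):
                score = sum(math.log((self.pair[(c, w)] + 0.001) /
                                     (self.count[c] + 1.0)) for c in context)
                if score > best_score:
                    best, best_score = w, score
            return best

        def confabulate(self, seed, length=8):
            words = list(seed)
            for _ in range(length):
                words.append(self.next_word(words))
            return words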

You may say this is narrow AI.  But it all has general applicability.  For
example, the type of hierarchical memory with max-pooling described in
Serre's paper is an extremely powerful paradigm that addresses some of the
most difficult problems in AI, including robust non-literal matching.  Such
hierarchical memory

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Mike Tintner
Ed: Another reason for optimism is Hinton's new work described in papers such
as "Modeling image patches with a directed hierarchy of Markov random fields"
by Simon Osindero and Geoffrey Hinton and the Google Tech Talk at
http://www.youtube.com/watch?v=AyzOUbkUf3M.  Hinton has shown how to
automatically learn hierarchical neural nets that have 2000 hidden nodes in
one layer, 500 in the next, and 1000 in the top layer.

Comment from a pal on Hinton, who was similarly recommended on Slashdot (I'm
ignorant here):


"I also took a closer look at the Hinton stuff that the slashdot poster made 
reference to. To call this DBN stuff highly advanced over Hawkins is 
ridiculous. I looked at it already a couple of months ago. It took Hinton 
***17-years*** - by his own admission - to figure out how to build a 
connectionist net that could reliably identify variations of handwritten 
numbers 1-9. And it's gonna take him about a MILLION more years to do 
general AI with this approach. Gakk.
To me, the biggest problem with connectionist networks is all they ever 
solve are toy problems - and it's 20 years after connectionism become 
popular again."






RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ed Porter
Mike,

None of the Hawkins papers I have read have given results as impressive as
the Hinton papers I cited.  If you know of some that have, please send me
references to the most impressive among them.

Hinton says he believes his system could scale efficiently to much larger
nets.  If that is true, a system composed of multiple such modules might well
be able to learn to handle a good chunk of sensory perception.

Like Ben, I am not wedded to a totally connectionist approach, but rather to
one that has attributes of both connectionist and symbolic approaches.  I
personally like to think in terms of systems where I have some idea what
things represent, so I can think in terms of what I want them to do.

But still I am impressed with what Hinton has shown, particularly if it can
be made to scale well to much larger systems.  

Ed Porter






RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Derek Zahn
I agree that the hardware advances are inspirational, and it seems possible 
that just having huge hardware around could change the way people think and 
encourage new ideas.
 
But what I'm really looking forward to is somebody producing a very impressive 
"general intelligence" result that is just really annoying because it took 10 
days of computing instead of an hour.
 
Seems to me that all the known AGI researchers are in theory, design, or system 
building phases; I don't think any of them are CPU-bound at present -- and no 
fair pointing to Goedel Machines or AIXI either, which will ALWAYS be 
resource-starved :)



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,


So long as the general response to the complex systems problem is not "This
could be a serious issue, let's put our heads together to investigate it",
but "My gut feeling is that this is just not going to be a problem", or
"Quit rocking the boat!", you can bet that nobody really wants to ask any
questions about whether the approaches are correct, they just want to be
left alone to get on with their approaches.


Both Ed Porter and myself have given serious thought to the "complex systems
problem" as you call it, and have discussed it with you at length.  I also
read the only formal paper you sent me dealing with it (albeit somewhat
indirectly), and also your various online discourses on the topic.

Ed and I don't agree with you on the topic, but not because of lack of
thinking or attention.

Your argument FOR the existence of a "complex systems problem" with Novamente
or OpenCog, is not any more rigorous than our argument AGAINST it.


Oh, mere rhetoric.

You have never given an argument "against" it.  If you believe this is 
not correct, perhaps you could jog my memory by giving a brief summary 
of what you think is the argument against it?


In all of my discussions with you on the subject, you have introduced 
many red herrings, and we have discussed many topics that turned out to 
be just misunderstandings, but you have never addressed the actual core 
argument itself.


In fact, IIRC, on the one occasion that I persisted in trying to bring the 
discussion back to the core issue, you finally made only one argument against 
my core claim ... and that argument was "I just don't think it is going to be 
a problem."


The argument itself is extremely rigorous:  on all the occasions on 
which someone has disputed the rigorousness of the argument, they have 
either addressed some other issue entirely or they have just waved their 
hands without showing any sign of understanding the argument, and then 
said "... it's not rigorous!".  It is almost comical to go back over the 
various responses to the argument:  not only do people go flying off in 
all sorts of bizarre directions, but they also get quite strenuous about 
it at the same time.


Not understanding an argument is not the same as the argument not being 
rigorous.




Richard Loosemore



Re: [agi] Approximations of Knowledge

2008-06-29 Thread Richard Loosemore

Brad Paulsen wrote:

Richard,

I think I'll get the older Waldrop book now because I want to learn more 
about the ideas surrounding complexity (and, in particular, its 
association with, and differentiation from, chaos theory) as soon as 
possible.  But, I will definitely put an entry in my Google calendar to 
keep a lookout for the new book in 2009.


Thanks very much for the information!

Cheers,

Brad


You're welcome.  I hope it is not a disappointment:  the subject is a 
peculiar one, so I believe that it is better to start off with the kind 
of journalistic overview that Waldrop gives.  Let me know what your 
reaction is.


Here is the bottom line.  At the core of the complex systems idea there 
is something very significant and very powerful, but a lot of people 
have wanted it to lead to a new science just like some of the old 
science.  In other words, they have wanted there to be a new, fabulously 
powerful 'general theory of complexity' coming down the road.


However, no such theory is in sight, and there is one view of complexity 
(mine, for example) that says that there will probably never be such a 
theory.  If this were one of the traditional sciences, the absence of 
that kind of progress toward unification would be a sign of trouble - a 
sign that this was not really a new science after all.  Or, even worse, 
a sign that the original idea was bogus.  But I believe that is the 
wrong interpretation to put on it.  The complexity idea is very 
significant, but it is not a science by itself.


Having said all of that, there are many people who so much want there to 
be a science of complexity (enough of a science that there could be an 
institute dedicated to it, where people have real jobs working on 
'complex systems'), that they are prepared to do a lot of work that 
makes it look like something is happening.  So, you can find many 
abstract papers about complex dynamical systems, with plenty of 
mathematics in them.  But as far as I can see, most of that stuff is 
kind of peripheral ... it is something to do to justify a research program.


At the end of the day, I think that the *core* complex systems idea will 
outlast all this other stuff, but it will become famous for its impact 
on other sciences, rather than for the specific theories of 'complexity' 
that it generates.



We will see.



Richard Loosemore







Richard Loosemore wrote:

Brad Paulsen wrote:

Or, maybe...

"Complexity: Life at the Edge of Chaos"
Roger Lewin, 2000 $10.88 (new, paperback) from Amazon (no used copies)
Complexity: Life at the Edge of Chaos by Roger Lewin (Paperback - Feb 
15, 2000)


Nope, not that one either!

Darn.

I think it may have been Simplexity (Kluger), but I am not sure.

Interestingly enough, Melanie Mitchell has a book due out in 2009 
called "The Core Ideas of the Sciences of Complexity".  Interesting 
title, given my thoughts in the last post.




Richard Loosemore









Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Bryan Bishop
On Friday 27 June 2008, Richard Loosemore wrote:
> Pardon my fury, but the problem is understanding HOW TO DO IT, and
> HOW TO BUILD THE TOOLS TO DO IT, not having expensive hardware.  So
> long as some people on this list repeat this mistake, this list will
> degenerate even further into obsolescence.

I am working on this issue, but it will not look like AI from your 
perspective. It is, in a sense, AI. Here's the tool approach:

http://heybryan.org/buildingbrains.html
http://heybryan.org/exp.html

Sort of.

- Bryan

http://heybryan.org/



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
> The argument itself is extremely rigorous:  on all the occasions on which
> someone has disputed the rigorousness of the argument, they have either
> addressed some other issue entirely or they have just waved their hands
> without showing any sign of understanding the argument, and then said "...
> it's not rigorous!".  It is almost comical to go back over the various
> responses to the argument:  not only do people go flying off in all sorts of
> bizarre directions, but they also get quite strenuous about it at the same
> time.

Richard, if your argument is so rigorous, why don't you do this: present a
brief, mathematical formalization of your argument, defining all terms
precisely and carrying out all inference steps exactly, at the level of a
textbook mathematical proof.

I'll be on vacation for the next 2 weeks w/ limited and infrequent email
access, so I'll look out for this when I return.

If you present your argument this way, then you can rest assured I will
understand it, as I'm capable of understanding math; then, our arguments can
be more neatly directed ... toward the appropriateness of your formal
definitions and assumptions...

-- Ben G



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Richard Loosemore

Ed Porter wrote:

Richard,

Despite your statement to the contrary --- despite your "FURY" --- I did get
your point.  Not everybody besides Richard Loosemore is stupid.


I understand there have been people making bold promises in AI for over 40
years, and most of those promises have been based on a gross underestimation
of the problem.  For example, in 1969 Minsky was claiming that, with the
minicomputers of the day, AI would surpass human intelligence within several
years.


But in 1970, after a year-long special study during my senior year at
Harvard, in which I worked through a long reading list Minsky gave me, I came
to the conclusion that Minsky's projection was ridiculous.  I believed
human-level thinking required deep experiential knowledge (now called
grounding), and I seriously doubted anybody could make a human-level AI
without hardware capable of storing many terabytes of memory and of accessing
significant portions of that memory multiple times a second -- a level of
hardware that is still not available, and that has only recently been
approximated at a cost of many tens of millions of dollars.


But on what *basis* did you come to that conclusion?  Your basis was a 
hunch, perhaps?  A shot in the dark?


Did you ever calculate the approximate number of concepts in a human 
brain?  The number of experience events in a lifetime?  The rate of 
chunking?  Any numbers like that?




To date, I am unaware of anyone approaching AGI with the type of hardware
that I have felt, for much of the last 38 years, would be necessary for
human-level AGI.  So don't accuse me of being one of those who has been shown
to have made false AI promises: the hardware my predictions have been based
on has never been available to AI researchers.


Since 1970 I have thought that if multiple teams had really powerful
hardware of the type you can buy now for several million dollars (hardware
that would have cost 20 to 30 times as much just a decade ago), then even
though that hardware is not capable of human-level performance, it would
enable very rapid progress in AI over the following ten or twenty years.


But that was before I became aware of the advances in brain science and AI
that have been made in the last decade or two, advances that have radically
improved and clarified my understanding of the types of computational
architectures needed for various mind functions.


Now we actually have good ideas about how to address almost all of the known
functions of the mind that we would want an AGI to have.


For people like Ben, Joscha Bach, Sam Adams, myself, and multiple others, IT
IS NOT THAT WE --- as you claim --- "JUST HAVE THIS BELIEF THAT IT WILL
WORK."  --- We have much more.  WE HAVE REASONABLY GOOD EXPLANATIONS FOR HOW
TO PERFORM ALMOST ALL OF THE MENTAL FUNCTIONS OF THE HUMAN MIND THAT WE WANT
AGI'S TO HAVE.


I notice that Ben has recently been dropping hints that he will soon be 
able to show us concrete reasons to believe that he is working on more 
than just belief.


We will see.




It is not as if these explanations are totally nailed down, at least in my
mind.  (They may be much better nailed down in Ben's, Joscha Bach's, and Sam
Adams's.)  But I have a high-level idea of how each of them could be made to
work.  This is relatively new, at least for me.  They are complex,
multi-level arguments, so they cannot be conveyed briefly.  Ben has probably
done a better job of putting his ideas in writing, and his recent post in
this thread promises that he will shortly provide them in much more detail.


One example of the new reasons for confidence that we are learning how to
design AGI is the amazing success of the Serre-Poggio system described at
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf.
That paper shows the tremendous advances that have been made in automatically
learning hierarchical memory, and the power such memory provides in machine
perception.  This is not belief.  This is a largely automatically learned
system that works amazingly well for the rapid feedforward part of visual
object recognition.


You are easily impressed by things that look glamorous.  Are you aware of
the conceptual gulf that lies between the feedforward part of visual object
recognition and the processes involved in learning AND USING structured
hierarchies of abstract, domain-independent concepts?  Do you think you could
give me a quick summary of why the Serre-Poggio system is a believable
advance on that much more important issue?


Sigh.



Another reason for optimism is Hinton's new work, described in papers such as
"Modeling image patches with a directed hierarchy of Markov random fields"
by Simon Osindero and Geoffrey Hinton and in the Google Tech Talk at
http://www.youtube.com/watch?v=AyzOUbkUf3M.  Hinton has shown how to
automatically learn hierarchical neural nets that have 2000 hidden nodes in
one layer, 500 in the next, and 1000 in the top layer.  In the past it would
have been

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Richard Loosemore

Ben Goertzel wrote:

The argument itself is extremely rigorous:  on all the occasions on which
someone has disputed the rigorousness of the argument, they have either
addressed some other issue entirely or they have just waved their hands
without showing any sign of understanding the argument, and then said "...
it's not rigorous!".  It is almost comical to go back over the various
responses to the argument:  not only do people go flying off in all sorts of
bizarre directions, but they also get quite strenuous about it at the same
time.


Richard, if your argument is so rigorous, why don't you do this: present a
brief, mathematical formalization of your argument, defining all terms
precisely and carrying out all inference steps exactly, at the level of a
textbook mathematical proof.

I'll be on vacation for the next 2 weeks w/ limited and infrequent email
access, so I'll look out for this when I return.

If you present your argument this way, then you can rest assured I will
understand it, as I'm capable of understanding math; then, our arguments can
be more neatly directed ... toward the appropriateness of your formal
definitions and assumptions...


Mathematics is about formal systems.  The argument is not about formal 
systems; it is about real-world intelligent systems and their limitations, 
and about the very *question* of whether those intelligent systems are 
formal systems.  It is about whether scientific methodology (which is just 
the exercise of a particular subset of this thing we call 'intelligence') is 
itself a formal system.  To formulate the argument in mathematical terms 
would, therefore, be to prejudge the answer to the question we are 
addressing - nothing could be more silly than to insist on a mathematical 
formulation of it.


Asking for a mathematical formulation of an argument that has nothing to 
do with formal systems is, therefore, a sign that you have no 
understanding of what the argument is actually about.


Now, if it were anyone else I would say that you really did not understand, 
and were just, well, ignorant.  But you actually do understand that point: 
when you made the above request I think your goal was to engage in a piece 
of pure sophistry.  You cynically ask for something that you know has no 
relevance, and cannot be supplied, as an attempt at a put-down.  Nice try, 
Ben.



Or, then again ... perhaps I am wrong:  maybe you really *cannot* 
understand anything except math?  Perhaps you have no idea what the 
actual argument is, and that has been the problem all along?  I notice 
that you avoided answering my request that you summarize your argument 
"against" the complex systems problem ... perhaps you are just confused 
about what the argument actually is, and have been confused right from 
the beginning?







Richard Loosemore





Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
Richard,

I think that it would be possible to formalize your "complex systems argument"
mathematically, but I don't have time to do so right now.

> Or, then again ... perhaps I am wrong:  maybe you really *cannot*
> understand anything except math?

It's not the case that I can only understand math -- however, I have a lot
of respect for the power of math to clarify disagreements.  Without math,
arguments often proceed in a confused way because different people are
defining terms differently, and don't realize it.

But, I agree math is not the only kind of rigor.  I would be happy with a
very careful, systematic exposition of your argument along the lines of
Spinoza or the early Wittgenstein.  Their arguments were not mathematical,
but were very rigorous and precisely drawn -- not slippery.

> Perhaps you have no idea what the actual
> argument is, and that has been the problem all along?  I notice that you
> avoided answering my request that you summarize your argument "against" the
> complex systems problem ... perhaps you are just confused about what the
> argument actually is, and have been confused right from the beginning?

In a nutshell, it seems you are arguing that general intelligence is
fundamentally founded on emergent properties of complex systems, and that it
is not possible for us to figure out analytically how these emergent
properties emerge from the lower-level structures and dynamics of the complex
systems involved.  Evolution, you suggest, "figured out" some complex systems
that give rise to the appropriate emergent properties to produce general
intelligence.  But evolution did not do this figuring-out in an analytical
way, rather via its own special sort of "directed trial and error."  You
suggest that to create a generally intelligent system, we should create a
software framework that makes it very easy to experiment with different sorts
of complex systems, so that we can then figure out (via some combination of
experiment, analysis, intuition, theory, etc.) how to create a complex system
that gives rise to the emergent properties associated with general
intelligence.

I'm sure the above is not exactly how you'd phrase your argument -- and it
doesn't capture all the nuances -- but I was trying to give a compact and
approximate formulation.  If you'd like to give an alternative, equally
compact formulation, that would be great.

I think the flaw of your argument lies in your definition of "complexity",
and that this would be revealed if you formalized your argument more fully.
I think you define complexity as a kind of "fundamental irreducibility" that
the human brain does not possess, and that engineered AGI systems need not
possess.  I think that real systems display complexity which makes it
**computationally difficult** to explain their emergent properties in terms
of their lower-level structures and dynamics, but not as fundamentally
intractable as you presume.

But because you don't formalize your notion of complexity adequately, it's
not possible to engage you in rational argumentation regarding the deep flaw
at the center of your argument.

However, I cannot prove rigorously that the brain is NOT complex in the
overly strong sense you allege it is ... nor can I prove rigorously that a
design like the Novamente Cognition Engine or OpenCog Prime will give rise to
the emergent properties associated with general intelligence.  So, in this
sense, I don't have a rigorous refutation of your argument, and I would not
have one even if you rigorously formalized your argument.

However, I think a rigorous formulation of your argument would make it
apparent to nearly everyone reading it that your definition of complexity is
unreasonably strong.

-- Ben G



[agi] Paper rec: Complex Systems: Network Thinking

2008-06-29 Thread j.k.
While searching for information about the Mitchell book to be published in
2009, which was mentioned in passing by somebody in the last few days, I
found a paper by the same author that I enjoyed reading and that will
probably be of interest to others on this list.

The paper is entitled "Complex systems: Network thinking", and it was
published in _Artificial Intelligence_ in 2006.  I'd guess that sections 6
and 7 may be the starting point for the 2009 book.  Section 6 explains three
natural complex systems: the immune system, foraging and task allocation in
ant colonies, and cellular metabolism.  Section 7 abstracts four fundamental
principles that Mitchell argues are common to the three natural complex
systems described and to "intelligence, self-awareness, and self-control in
other decentralized systems."


The four principles are:

1. Global information is encoded as statistics and dynamics of patterns over
the system's components.
2. Randomness and probabilities are essential.
3. The system carries out a fine-grained, parallel search of possibilities.
4. The system exhibits a continual interplay of bottom-up and top-down
processes.


See the paper for some elaboration of each of the principles and more
information.








Re: [agi] Approximations of Knowledge

2008-06-29 Thread Terren Suydam

Hi Richard,

I'll de-lurk here to say that I find this email to be utterly reasonable, and 
that's with my crackpot detectors going off a lot lately, no offense to you of 
course.

I do disagree that complexity is not its own science. I'm not wedded to the 
idea, like the folks you profile in your email, but I think its contribution 
has been small because it's in its infancy. We've been developing reductionist 
tools for hundreds of years now. I think we're in the equivalent of the 
pre-calculus days when it comes to complexity science. 

And we haven't made much progress because the traditional scientific method 
depends on direct causal linkages. By contrast, complex systems exhibit 
behavior at a global level that is not predictable from the local level... so 
there's a causal relationship only in the weakest sense. It's much more 
straightforward, I think, to say that the two levels, the global and the local, 
are "causally orthogonal" to one another. Both levels can be described by 
completely independent causal dynamics. 

It's a new science because it's a new method. Isolating variables to determine 
relationships doesn't lend itself well to massively parallel networks that are 
just lousy with feedback, because it's impossible to hold the other values 
still, and worse, the behavior is sensitive to experimental noise. You could 
write a book on the difference between the traditional scientific method and 
the methods for studying complexity. I'm sure it's been done, actually. 

The study of complexity will eventually fulfill its potential as a new science, 
because if we are ever to understand the brain and the mind and model them with 
any real precision, it will be due to complexity science *as much as* 
traditional reductionist science. We need the benefit of both to gain real 
understanding where traditional science has failed.

Our human minds are simply too limited to grasp the enormity of the 
complexity within a single cell, much less a collection of a few trillion of 
them, themselves arranged in an unfathomably complex way.

The idea that complexity science will *not* figure prominently into the study 
of the body, the brain, and the mind, is an absurd proposition to me. We will 
be going in the right direction when more and more of us are simulating 
something without any clue what the result will be.

That's all for now... thanks for your post Richard.

Terren


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Terren Suydam

Hi Ben,

I don't think the flaw you have identified matters to the main thrust of 
Richard's argument - and if you haven't summarized Richard's position 
precisely, you have summarized mine. :-]

You're saying the flaw in that position is that prediction of complex networks 
might merely be a matter of computational difficulty, rather than fundamental 
intractability. But any formally defined complex system is going to be 
computable in principle. We can always predict such a system with infinite 
computing power. That doesn't make it tractable, or open to understanding, 
because obviously real understanding can't be dependent on infinite computing 
power.

The question of fundamental intractability comes down to the degree to which 
we can make predictions about the global level from the local. And let's hope 
there's progress to be made there, because each discovery will make life 
easier for those of us who would try to understand something like the brain, 
the body, or even just the cell. Or even just folding proteins!

But it seems pretty obvious to me anyway that we will never be able to predict 
the weather with any precision without doing an awful lot of computation. 

And what is our mind but the weather in our brains?

Terren


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Terren Suydam

By the way, just wanted to point out a beautifully simple example - perhaps the 
simplest - of an irreducibility in complex systems.

Individual molecular interactions are symmetric in time: they work the same 
forwards and backwards. Yet diffusion, which is nothing more than the 
aggregate of molecular interactions, is asymmetric. Figure that one out.
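
A few lines of Python (a toy demonstration of my own) make the asymmetry
vivid: the per-step rule is perfectly time-symmetric, yet the aggregate only
ever spreads:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.zeros(100000)          # many particles, all starting at the origin
    for _ in range(1000):
        # each particle steps -1 or +1 with equal probability;
        # reversing time changes nothing about this local rule
        x += rng.choice([-1.0, 1.0], size=x.size)

    print(x.std())   # ~ sqrt(1000) = 31.6: the cloud has diffused outward

Run it again from the spread-out state and the cloud just keeps spreading; it
never re-concentrates at the origin.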

That's the *kind* of irreducibility that pops up all over complex systems.

Terren


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
But we don't need to be able to predict the thoughts of an AGI system in
detail to be able to architect an AGI system that has thoughts...

I agree that predicting the thoughts of an AGI system in detail is going to
be pragmatically impossible ... but I don't agree that predicting **which**
AGI designs can lead to the emergent properties corresponding to general
intelligence is pragmatically impossible to do in an analytical and rational
way ...

Similarly, I could engineer an artificial weather system displaying
hurricanes, whirlpools, or whatever phenomena you ask me for -- based on my
general understanding of the Navier-Stokes equations -- even though I could
not, then, predict the specific dynamics of those hurricanes, whirlpools,
etc.
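
(For reference, the incompressible Navier-Stokes equations being appealed to
here, in their standard form -- nothing specific to this argument:

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
        = -\frac{1}{\rho} \nabla p + \nu \nabla^2 \mathbf{u},
    \qquad \nabla \cdot \mathbf{u} = 0

These fully determine which flows are possible, even though forecasting any
particular flow in detail remains computationally hard.)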

We lack the equivalent of the Navier-Stokes equations for thoughts.  But we
can still arrive at reasonable analytic understandings of appropriately
constrained and formalised AGI designs, with the power to achieve general
intelligence...

ben g


Re: [agi] Approximations of Knowledge

2008-06-29 Thread Brad Paulsen

Richard,

Thanks for your comments.  Very interesting.  I'm looking forward to reading the 
"introductory" book by Waldrop.  Thanks again!


Cheers,

Brad

