Well, as one ignoramus speaking to another (hopefully the smart ones on the 
list will correct us), I think not. It isn't random inputs that make a system 
complex - no intelligence or complex system can take randomness and turn it 
into something meaningful, just as random-walk share prices would mean you 
cannot consistently beat the stock market :). What makes a system complex is, 
at the very least, its structure and internal diversity, together with a 
challenging environment.

Like so much else, true complexity (in both behaviour and structure) seems to 
walk a very fine line between 'boring' and 'random' (see Wolfram et al., or 
anything fractal). That is why randomly linking up modules is IMHO unlikely to 
give rise to AGI (one of my criticisms of traditional connectionism), and 
completely random environments are unlikely to give rise to AGI either.
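
To make that 'fine line' a bit more concrete, here is a toy Python sketch 
(purely my own illustration, nothing to do with anyone's actual architecture) 
of Wolfram-style elementary cellular automata: rule 250 quickly freezes into a 
boring repeating pattern, rule 30 looks like noise, and rule 110 sits in the 
interesting zone in between (it is even known to be Turing-complete).

def step(cells, rule):
    # One update of an elementary CA; 'rule' is the Wolfram rule number (0-255).
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=64, steps=32):
    cells = [0] * width
    cells[width // 2] = 1                      # single 'on' cell in the middle
    for _ in range(steps):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells, rule)

for rule in (250, 30, 110):                    # boring, random-looking, complex
    print('--- rule', rule, '---')
    run(rule)

Printed side by side, the three runs give a rough visual feel for the 
boring/random/complex distinction I am gesturing at.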
You need a well-configured modular system - configured through evolution 
(Ben?), design (my approach), or experimentation/systematic exploration 
(Richard's) - working/growing/learning/evolving in a rich environment, i.e. 
one that is neither boring nor random. Randomness just produces static or 
noise (i.e. more randomness). {The parallel with data is too obvious not to 
point out: neither completely random data nor highly regular (e.g. infinitely 
repeating) data contains much information; the interesting stuff lies in 
between those two extremes.}
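
For what it's worth, compressibility gives a crude illustration of the two 
extremes (only a sketch, and this is regularity in the Kolmogorov sense rather 
than Shannon bit counts): perfectly regular data compresses to almost nothing, 
pure noise hardly compresses at all, and structured-but-varied data sits in 
between - which, in terms of meaningful structure, is where the interesting 
stuff lives.

import os, zlib

def ratio(data):
    # fraction of the original size after maximum zlib compression
    return len(zlib.compress(data, 9)) / len(data)

regular = b'ab' * 50000               # highly regular: compresses to almost nothing
noise = os.urandom(100000)            # completely random: barely compresses at all
middle = open(__file__, 'rb').read()  # structured but varied (this script itself; run as a saved file)

for name, data in [('regular', regular), ('random', noise), ('this file', middle)]:
    print(name, '->', round(ratio(data), 3), 'of original size')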
So there is no(t necessarily any) complexity hiding in 'difficult algorithms', 
'complex mathematics', 'random data', 'large datasets', etc. Solving a system 
of 10,000,000 linear equations is simple; solving two or three quadratic 
differential equations is complex. A map-reduce (assuming a straightforward 
transform function) over 20TB of data may require a lot of computing power, 
but it is far less complex than running ALife on your PDA.
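
To put some hedged, made-up numbers behind that contrast (sizes chosen only so 
the sketch runs quickly; numpy/scipy assumed):

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve
from scipy.integrate import solve_ivp

# 'Simple': a million-unknown sparse linear system, solved in one routine call
# (a scaled-down stand-in for the 10,000,000 equations above).
n = 1000000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
x = spsolve(A, np.ones(n))
print('linear system solved; x[0] =', x[0])

# 'Complex': the Lorenz system - three quadratic differential equations -
# where a 1e-9 nudge to the initial state leads to a completely different trajectory.
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], max_step=0.01)
b = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0 + 1e-9], max_step=0.01)
print('divergence after t=40:', abs(a.y[0, -1] - b.y[0, -1]))

The big linear solve is mechanical however large it gets; the three little 
quadratic equations already show sensitive dependence on initial conditions, 
which is the kind of complexity I mean.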
The reason I responded to this post is that my AGI architecture relies very 
much on the emergent complexity arising from a "moderately massively modular" 
design - an approach that seems to be questioned by many on this list, but 
which is one of the more mainstream hypotheses (though not necessarily the 
dominant one) in CogSci (e.g. Carruthers). (Also note that for CogScientists 
'massively modular' is quite a bit less massive than what it means to 
CompScientists :)
 
=Jean-Paul
 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

>>> "Mike Tintner" <[EMAIL PROTECTED]> 2007/12/06 14:05 >>>

JVPB:You seem to have missed what many A(G)I people (Ben, Richard, etc.) 
mean by 'complexity' (as opposed to the common usage of complex meaning 
difficult).

Well, I, as an ignoramus, was wondering about this - so thank you. And it 
wasn't clear at all to me from Richard's paper what he meant. What I'm 
taking out from your account is that it involves random inputs...? Is there 
a fuller account of it? Is it the random dimension that he/others hope will 
produce emergent/human-like behaviour? (..because if so, I'd disagree - I'd 
argue the complications of human behaviour flow from conflict/ conflicting 
goals - which happens to be signally missing from his (and cognitive 
science's) ideas about emotions).

