Hi Ed,

You seem to have missed what many A(G)I people (Ben, Richard, etc.) mean by 'complexity' (as opposed to the common usage of 'complex' meaning difficult). It is not the *number* of calculations or interconnects that gives rise to complexity or chaos, but their nature. E.g. calculating the eigenvalues of an n=10^10000 matrix is *very* difficult but not complex. So large matrix calculations, map-reduces and the BlueGene configuration are all very simple. A map-reduce or matrix calculation is typically one line of code (at least in Python - which is probably where Google got the idea from :)
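To illustrate the "one line" point, here is a minimal sketch in Python with NumPy (a modest n=500 stands in for the obviously un-runnable n=10^10000 case, and the toy sum-of-squares map-reduce is my own example, not Google's):

```python
import numpy as np
from functools import reduce

# A large symmetric random matrix; n = 500 stands in for the astronomically
# large case in the text, which no real machine could hold.
n = 500
A = np.random.rand(n, n)
A = (A + A.T) / 2  # symmetrize so all eigenvalues are real

# The eigenvalue calculation itself: difficult at scale, but one simple line.
eigenvalues = np.linalg.eigvalsh(A)

# A toy map-reduce in one line: sum of squares of 0..9.
result = reduce(lambda a, b: a + b, map(lambda x: x * x, range(10)))  # 285
```

Both operations are computationally heavy at scale, yet the *description* of each is trivially short, which is exactly the difficult-but-not-complex distinction being drawn.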
To make them complex, you need to go beyond that. E.g. a 500K-node, 3-layer neural network is simplistic (not simple :), but chaining together only 10K NNs (each with 10K inputs/outputs) in a random network (with only a few of these NNs serving as input or output modules) would produce complex behaviour, especially if the input vector changes dynamically on each iteration. Note that the latter has FAR fewer interconnects, i.e. would need far fewer calculations, but its behaviour would be impossible to predict (you can only simulate it), whereas the behaviour of the 500K-node network is much more easily understood.

BlueGene has a simple architecture. A network of computers that mainly do the same thing (e.g. the GooglePlex) has predictable behaviour; however, if each computer acts/behaves very differently (I guess on the internet we could classify users into a number of distinct agent-like behaviours), you'll get complex behaviour. It's the difference in complexity between an 8Gbit RAM chip and, say, an old P3 CPU chip. The latter has fewer than one-hundredth as many transistors, but it is far more complex and displays interesting behaviour; the former doesn't.

Jean-Paul

>>> On 2007/12/05 at 23:12, in message <[EMAIL PROTECTED]>, "Ed Porter" <[EMAIL PROTECTED]> wrote:
> Yes, my vision of a human AGI would be a very complex machine. Yes, a lot of its outputs could only be made with human-level reasonableness after a very large amount of computation. I know of no shortcuts around the need to do such complex computation. So it arguably falls into what you say Wolfram calls "computational irreducibility."
> But the same could be said for any of many types of computations, such as large matrix equations or Google's map-reduces, which are routinely performed on supercomputers.
> So if that is how you define irreducibility, it's not that big a deal.
> It just means you have to do a lot of computing to get an answer, which I have assumed all along for AGI (remember, I am the one pushing for breaking the small-hardware mindset). But it doesn't mean we don't know how to do such computing, or that we have to do a lot more complexity research, of the type suggested in your paper, before we can successfully design AGIs.
[...]
> Although it is easy to design systems whose behavior would be sufficiently chaotic that such design would be impossible, it seems likely that it is also possible to design complex systems in which the behavior is not so chaotic or unpredictable. Take the internet: something like 10^8 computers talk to each other, and in general it works as designed. Take IBM's supercomputer BlueGene/L: 64K dual-core processors, each with at least 256 MBytes, all capable of receiving and passing messages at 4 GHz in each of over 3 dimensions, and capable of performing hundreds of trillions of FLOPs/sec. Such a system probably contains at least 10^14 non-linear, separately functioning elements, and yet it works as designed. If there is a global-local disconnect in the BlueGene/L, which there could be depending on your definition, it is not a problem for most of the computation it does.

--
Research Associate: CITANDA
Post-Graduate Section Head
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=73082928-3b96d2