Hi,

Just out of curiosity - would you mind sharing your hardware estimates
with the list? I would personally find that fascinating.

Many thanks,

Stefan


Well, here is one way to slice it... there are many, of course...

Currently the bottleneck for Novamente's cognitive processing is the
"quasi-evolutionary" generation of complex patterns and procedures.  I
estimate that to do powerful AGI we need to be able to rapidly learn
procedures whose implementations in Novamente's internal Combo
programming language involve 400 "function nodes".  (This is because
to implement one of Novamente's cognitive control procedures in Combo
would take about this many function nodes -- so this is the threshold
required for really powerful cognitive self-modification.)

Now, one way to learn a Combo program with 400 nodes is to exploit
modularity, and learn a program that (to be very rough about it)
consists of a 20-node coordinating program integrating 20 different
small 20-node modules.  Of course this is only one way to break things
down, but it's fine as a heuristic starting point for the calculation.
(There are lots of cog-sci reasons to believe that minds should
exploit/assume modularity when learning complex patterns/procedures.)
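
Just to spell out the node count in that rough decomposition (my
back-of-envelope arithmetic, nothing more):

    20 (coordinator) + 20 modules * 20 nodes = 420 nodes, i.e. roughly
    the 400-node ballpark mentioned above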

Suppose, for the sake of calculation, that 1 very good machine gives
acceptable performance right now for learning a 20-node Combo program,
based on evaluating 10K candidate programs plus doing some reasoning
along the way.  This is not yet true of the current NM implementation,
but it seems feasible given some further development along
well-understood lines.

Now, imagine a very inefficient learning approach for the 400-node
program, in which one iteratively

-- evaluates a candidate for the 20-node coordinating program
-- for each such candidate, does a full learning run to learn a
candidate for each of the 20 subprograms required by the coordinating
program (a rough Python sketch of this follows)
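
To make the structure concrete, here is a toy sketch in Python (not
Combo!) of that deliberately inefficient nested approach.  All the names
and the bit-vector "candidates" are placeholders of mine, not actual
Novamente code; the only point is the shape of the loop, i.e. that every
coordinator candidate triggers 20 full subprogram learning runs:

    import random

    NUM_MODULES = 20              # subprograms called by the coordinating program
    CANDIDATES_PER_RUN = 10_000   # candidate evaluations per learning run (the 10K above)

    def random_candidate(size=20):
        # stand-in for generating a candidate 20-node Combo program
        return [random.randint(0, 1) for _ in range(size)]

    def learn_program(fitness, n=CANDIDATES_PER_RUN):
        # one ordinary learning run: evaluate n candidates, keep the best
        return max((random_candidate() for _ in range(n)), key=fitness)

    def learn_modular_program(module_fitness, overall_fitness, n=CANDIDATES_PER_RUN):
        # naive modular learning: for EACH coordinator candidate, re-learn all 20 modules
        best, best_score = None, float("-inf")
        for _ in range(n):                    # coordinator candidates
            coordinator = random_candidate()
            modules = [learn_program(module_fitness) for _ in range(NUM_MODULES)]
            score = overall_fitness(coordinator, modules)
            if score > best_score:
                best, best_score = (coordinator, modules), score
        return best

Nobody would run this as written, of course -- with these numbers it
does 10K * 20 full learning runs, which is exactly where the machine
count below comes from.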

In this unrealistically inefficient approach, each time one evaluates a
candidate for the 20-node coordinating program, one potentially has to
learn 20 other 20-node programs.  This means 20 * 10K = 200K machines
would be required to learn the modular 400-node program at the same
speed at which 1 machine learns a 20-node program (which was assumed
acceptable).
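
Spelling that out with the same assumed numbers (nothing measured here,
just the toy calculation):

    coordinator_candidates = 10_000  # candidates tried for the coordinating program
    runs_per_candidate = 20          # one full learning run per module, per candidate
    # one learning run is what 1 machine does in "acceptable" time, so keeping
    # the whole modular learn at that speed naively requires
    machines_naive = coordinator_candidates * runs_per_candidate   # = 200,000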

However, the modular learning approach outlined above is really dumb
(compared even to the things we're doing in NM now, though the time
complexity of those is harder to predict) and can almost surely be sped
up by a couple orders of magnitude.  If we assume this, we're down to a
couple thousand machines or so...  ;-)
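
Or, continuing the toy arithmetic:

    assumed_speedup = 100    # "a couple orders of magnitude" from smarter modular learning
    machines_needed = machines_naive // assumed_speedup   # ~2,000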

Anyway, it's clear that the requirement is not a million machines,
given these feasible assumptions.  We are definitely within the scope of
a contemporary supercomputer, and potentially, if our algorithms are
clever enough, within the scope of a cluster of a few hundred PCs.

-- Ben G
