Eugen> Trust me, the speed is. Your biggest problem is memory
Eugen> bandwidth, actually.
>> Well, on this we differ. I can appreciate how you might think
>> memory bandwidth was important for some tasks, although I don't,
>> but I'm curious why you think it's important for planning problems
>> like Sokoban or Go, or a new planning game I present your AI on the
>> fly, or whether you think whatever your big memory-intensive
>> approach is will solve those.

Ben> I agree with Eugen that, regarding hardware limitations, memory
Ben> bandwidth is the critical one for AGI.

Ben> I don't know much about Sokoban, but regarding Go, it may be the
Ben> case that there is a lot of value to a Go-playing program having
Ben> rapid and flexible access to a huge in-RAM database of patterns
Ben> extracted from already-experienced Go games.

A few years ago, Igor and I actually attempted to produce a Go program
based on a vast store of patterns taken from games. To do this, we built
a network of 32 machines, each of which had 4 GB of RAM (they were
32-bit machines, and this was as much as could be loaded easily at the
time). We analyzed a database of hundreds of thousands of human games
to extract the patterns.
A case could be made that we failed because of memory access--
if we'd had an order of magnitude, or a few orders of magnitude, more
data and more stored patterns, things would have worked; but that's
not my sense. The hard problem was figuring out when one pattern was
similar to another, what its value was, and so on--that is, finding
the right code for analyzing the meaning, not the memory bandwidth.
(There's a PowerPoint on my website about this experiment;
unfortunately we never wrote more about the attempt.)

We did this because we hoped to overcome the hard problem of finding
the right code by throwing memory bandwidth and patterns at the
problem. This is analogous to looking for the key under the light,
because it's where you can hope to find it--which makes sense but
doesn't always work. It's my sense that what you are doing may be
similar.

I'm confident that if we had the right code for analyzing patterns,
for recognizing meaningful similarity, and for manipulating the
meaning extracted from stored patterns, today's machines would have
plenty of memory bandwidth to solve Go.
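For concreteness, here is a toy sketch (hypothetical, not the actual
code from our experiment) of the kind of in-RAM pattern store involved:
hash every small local window of a position into a table and tally win
statistics. Note that the hard part I described--deciding when two
patterns are meaningfully similar--is exactly what this sketch waves
away by using exact-match lookup.

```python
# Toy Go pattern store: tally win statistics for every small local
# window of every position. A hypothetical sketch, not the code from
# the experiment described above.
from collections import defaultdict

EMPTY, BLACK, WHITE = 0, 1, 2

def local_patterns(board, size=3):
    """Yield every size x size window of the board as a hashable tuple."""
    n = len(board)
    for r in range(n - size + 1):
        for c in range(n - size + 1):
            yield tuple(board[r + i][c + j]
                        for i in range(size) for j in range(size))

# pattern -> [times seen, times seen in a game Black won]
stats = defaultdict(lambda: [0, 0])

def record_game(positions, black_won):
    """Fold one game's positions into the in-RAM statistics table."""
    for board in positions:
        for pat in local_patterns(board):
            stats[pat][0] += 1
            stats[pat][1] += int(black_won)

def pattern_value(pat):
    """Estimated value of a pattern; 0.5 prior for unseen patterns."""
    seen, wins = stats[pat]
    return wins / seen if seen else 0.5
```

With exact-match keys this scales trivially across machines by sharding
on the pattern hash; the part that doesn't shard away is recognizing
that two distinct keys mean the same thing.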

Ben> In any application involving learning, long-term memory and
Ben> real-time activity, memory access is going to be a major
Ben> bottleneck.

Ben> In Novamente, reasoning involves constant accesses into long-term
Ben> memory searching for relevant knowledge, so that more time is
Ben> spent looking things up in memory than doing number-crunching or
Ben> logic operations....
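Assuming that by "looking things up" Ben means dependent accesses
scattered across a large in-RAM structure, the contrast can be shown
with a toy pair of loops: the index chase below touches effectively
random memory on each step, while an arithmetic loop of the same length
stays in registers.

```python
# Toy contrast between a memory-bound and a compute-bound loop.
# The names and workloads here are illustrative assumptions, not
# anything from Novamente.
import random

def pointer_chase(table, start, steps):
    """Follow a chain of indices; each hop is a dependent memory access."""
    i = start
    for _ in range(steps):
        i = table[i]
    return i

def arithmetic(x, steps):
    """Same number of operations, but essentially no memory traffic."""
    for _ in range(steps):
        x = (x * 1664525 + 1013904223) % (2 ** 32)  # a standard LCG step
    return x

# A shuffled permutation makes each hop land on an unpredictable line.
n = 1 << 20
rng = random.Random(0)
perm = list(range(n))
rng.shuffle(perm)
```

Timed on real hardware, the chase is limited by memory latency and the
arithmetic loop by the processor, which is the distinction at issue.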

Ben> This is different from an algorithm like alpha-beta pruning, and
Ben> different from graphics rendering algorithms, which are all
Ben> processor-intensive rather than memory-bandwidth-intensive.
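For reference, a minimal alpha-beta search over an explicit game tree:
its entire working set is a handful of stack frames, which is why it is
processor-bound rather than memory-bandwidth-bound.

```python
# Minimal alpha-beta pruning over a nested-list game tree.
# Leaves are numeric scores; internal nodes are lists of children.
def alphabeta(node, depth, alpha, beta, maximizing):
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this line
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff: we will avoid this line
        return value
```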

Ben> However, evolution is not doing software design using anywhere
Ben> near the same process that we human scientists are.  So I don't
Ben> think these sorts of calculations [of evolution's computational
Ben> power] are very directly relevant...
>> As you know, I argued that the problem of designing the relevant
>> software is NP-hard at least, so it is not clear that it can be
>> cracked without employing massive computation in its design, any
>> more than a team of experts could solve a large TSP problem by
>> hand.
>> 
>> However, I have an open mind on this, which I regard as the
>> critical issue for AGI.
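To give a sense of scale for the TSP comparison: an exact solution
must in effect consider (n-1)!/2 distinct tours--already 181,440 for
n = 10--and a brute-force solver is only a few lines:

```python
# Brute-force TSP by enumerating all tours starting from city 0.
# Illustrative only: factorial growth makes this hopeless beyond
# a dozen or so cities, hand computation more so.
from itertools import permutations
import math

def tour_length(cities, order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(cities):
    """Return the optimal tour and its length by exhaustive enumeration."""
    n = len(cities)
    best = min(permutations(range(1, n)),
               key=lambda rest: tour_length(cities, (0,) + rest))
    return (0,) + best, tour_length(cities, (0,) + best)
```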

Ben> Yes, this is a critical issue!

Ben> There are also a lot of subtleties to your argument.

Ben> Suppose it is very hard to design the relevant software for AGI,
Ben> and suppose that some of the trickier aspects needed are shared
Ben> among both digital-computer AGIs and human brains.  Then perhaps,
Ben> when humans are designing AGI systems according to a combination
Ben> of science and intuition, their intuitions are making use of the
Ben> knowledge about AGI architecture implicit in the human brain.

Ben> I.e., to an extent,

Ben> *Evolution put AGI in the human brain

Ben> *Human intuition is helping design AGI's for digital computers

Ben> *Human intuition is implicitly drawing on its own reflective
Ben> knowledge of its own structure and dynamics, in guiding AGI
Ben> design

Ben> So, even if you're right that AGI is a very very hard design
Ben> problem to approach using science alone, you need to acknowledge
Ben> that we are approaching it using a combination of -- science,
Ben> with -- intuition that is guided by implicit reflective knowledge
Ben> of what general intelligence is like.

Ben> Now, you may argue that this knowledge is not at the right level
Ben> -- that our intuitions don't guide us in the ways that would be
Ben> important for getting AGI right, etc.  I personally think that
Ben> some of our implicit intuitions about cognition are useful for
Ben> AGI and represent valuable lessons implicit in our own AGI
Ben> architectures, whereas other of our intuitions are illusions ---
Ben> and telling the valuable intuitions from the illusions is not
Ben> very easy....

I'm proceeding in the same way, drawing on reflective knowledge, and
I recognize it may be sufficient, but I'm far from convinced.

Ben> -- Ben G
