Eugen> Trust me, the speed is. Your biggest problem is memory
Eugen> bandwidth, actually.

Well, on this we differ. I can appreciate how you might think memory
bandwidth was important for some tasks, although I don't, but I'm
curious why you think it's important for planning problems like
Sokoban or Go, or a new planning game I present to your AI on the fly,
and whether you think whatever your big memory-intensive approach is
will solve those.

I agree with Eugen that, regarding hardware limitations, memory
bandwidth is the critical one for AGI.

I don't know much about Sokoban, but regarding Go, it may be the case
that there is a lot of value to a Go-playing program having rapid and
flexible access to a huge in-RAM database of patterns extracted from
already-experienced Go games.
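To make the kind of database I have in mind concrete, here is a minimal
sketch, assuming a toy encoding of local board patterns as strings; the
names, the encoding, and the statistics kept are all illustrative
assumptions, not Novamente's or any real Go engine's actual design:

```python
# Hypothetical in-RAM pattern database: local board neighborhoods
# (encoded here as small strings) mapped to statistics accumulated
# from already-experienced games.
pattern_db = {}  # pattern key -> {"seen": count, "wins": count}

def record_pattern(pattern, won):
    """Accumulate statistics for a local pattern from a finished game."""
    stats = pattern_db.setdefault(pattern, {"seen": 0, "wins": 0})
    stats["seen"] += 1
    if won:
        stats["wins"] += 1

def win_rate(pattern):
    """O(1) dictionary lookup -- the memory access dominates the cost,
    not any arithmetic done on the retrieved statistics."""
    stats = pattern_db.get(pattern)
    if stats is None or stats["seen"] == 0:
        return None
    return stats["wins"] / stats["seen"]

record_pattern(".X./OXO/...", won=True)
record_pattern(".X./OXO/...", won=False)
print(win_rate(".X./OXO/..."))  # 0.5
```

The point of the sketch is only that each query is essentially a memory
access into a huge table, with trivial computation per access.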

In any application involving learning, long-term memory and real-time
activity, memory access is going to be a major bottleneck.

In Novamente, reasoning involves constant accesses into long-term
memory searching for relevant knowledge, so that more time is spent
looking things up in memory than doing number-crunching or logic
operations....

This is different from an algorithm like alpha-beta pruning, and
different from graphics rendering algorithms, which are all
processor-intensive rather than memory-bandwidth-intensive.
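The contrast can be illustrated with a rough micro-benchmark, assuming
arbitrary sizes and iteration counts chosen purely for illustration: one
loop chases random-looking pointers through a large table (so cache
misses and memory bandwidth dominate on real hardware), the other does
the same number of steps of arithmetic on a working set that fits in
registers:

```python
import random
import time

N = 1_000_000
table = list(range(N))   # a permutation of 0..N-1
random.shuffle(table)

def memory_bound(lookups=200_000):
    # Each step depends on the previous random-looking read,
    # so the memory system, not the ALU, sets the pace.
    i, total = 0, 0
    for _ in range(lookups):
        i = table[i]
        total += i
    return total

def compute_bound(steps=200_000):
    # Same step count, but the working set is a couple of integers.
    x = 1
    for _ in range(steps):
        x = (x * 31 + 7) % 1_000_003
    return x

t0 = time.perf_counter(); memory_bound(); t_mem = time.perf_counter() - t0
t0 = time.perf_counter(); compute_bound(); t_cpu = time.perf_counter() - t0
print(f"memory-bound: {t_mem:.3f}s, compute-bound: {t_cpu:.3f}s")
```

(In an interpreted language like Python the interpreter's own overhead
blurs the gap; compiled into C, the same two loops show the memory-bound
one throttled by cache misses while the compute-bound one saturates the
processor.)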

Ben> However, evolution is not doing software design using anywhere
Ben> near the same process that we human scientists are.  So I don't
Ben> think these sorts of calculations [of evolution's computational power]
Ben> are very directly relevant...

As you know, I argued that the problem of designing the relevant software
is NP-hard at least, so it is not clear that it can be cracked without
employing massive computation in its design, any more than a team of
experts could solve a large TSP problem by hand.
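The TSP analogy can be made quantitative: the number of distinct
undirected tours through n cities is (n-1)!/2, so exhaustive
design-by-inspection becomes hopeless well before n gets "large". A
quick check of the growth:

```python
import math

def tour_count(n):
    """Distinct undirected tours through n cities (n >= 3): (n-1)!/2."""
    return math.factorial(n - 1) // 2

for n in (5, 10, 20):
    print(n, tour_count(n))
# 5 -> 12; 10 -> 181440; 20 -> 60822550204416000
```

Whether the software-design problem has exploitable structure that a
generic NP-hard instance lacks is, of course, exactly the open question.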

However, I have an open mind on this, which I regard as the critical
issue for AGI.

Yes, this is a critical issue!

There are also a lot of subtleties to your argument.

Suppose it is very hard to design the relevant software for AGI, and
suppose that some of the trickier aspects needed are shared among both
digital-computer AGIs and human brains.  Then perhaps, when humans are
designing AGI systems according to a combination of science and
intuition, their intuitions are making use of the knowledge about AGI
architecture implicit in the human brain.

I.e., to an extent,

*Evolution put AGI in the human brain

*Human intuition is helping design AGI's for digital computers

*Human intuition is implicitly drawing on its own reflective knowledge
of its own structure and dynamics, in guiding AGI design

So, even if you're right that AGI is a very very hard design problem
to approach using science alone, you need to acknowledge that we are
approaching it using a combination of
-- science, with
-- intuition that is guided by implicit reflective knowledge of what
general intelligence is like.

Now, you may argue that this knowledge is not at the right level --
that our intuitions don't guide us in the ways that would be important
for getting AGI right, etc.  I personally think that some of our
implicit intuitions about cognition are useful for AGI and represent
valuable lessons implicit in our own AGI architectures, whereas others
of our intuitions are illusions -- and telling the valuable
intuitions from the illusions is not very easy....

-- Ben G
