--- Dennis Gorelik <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> > --- Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> >> Could you describe a piece of technology that simultaneously:
> >> - Is required for AGI.
> >> - Cannot be a required part of any useful narrow AI.
> 
> > A one million CPU cluster.
> 
> Are you claiming that the computational power of the human brain is
> equivalent to a one million CPU cluster?
> 
> My feeling is that the human brain's computational power is about the same
> as that of a modern PC.
> 
> AGI software is the missing part of AGI, not hardware.

We don't know that.  What we do know is that people have consistently
underestimated the difficulty of AI since about 1950.  Our approach has always
been to design algorithms that push the limits of whatever hardware was
available at the time.  At every point in history we have had the hindsight to
recognize that past attempts failed for lack of computing power, but not the
foresight to recognize when we are still in the same situation.  If AGI is
possible with one millionth of the computing power of the human brain, then

1. Why didn't we evolve insect sized brains?
2. Why aren't insects as smart as we are?
3. Why aren't our computers as smart as insects?

With regard to 1, the human brain accounts for a disproportionately large
share of our resting metabolism, roughly 20%.  It uses more power than any
other organ except the muscles during exercise.

One of the arguments that AGI is possible on a PC comes from information
theory.  Humans learn language from the equivalent of about 1 GB of training
data, or roughly 10^9 bits after compression.  Turing argued in 1950 that a
learning algorithm running on a computer with 10^9 bits of memory, educated
like a child, should be able to pass the imitation game.  Likewise, Landauer
estimated human long-term memory capacity at about 10^9 bits.
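As a rough check on that arithmetic (my own sketch, assuming Shannon's
estimate of about 1 bit per character for the entropy of English text):

    # Back-of-the-envelope: why ~1 GB of language exposure is ~10^9 bits.
    corpus_bytes = 1e9        # ~1 GB of raw text, ~1 byte per character
    bits_per_char = 1.0       # assumed entropy of English (Shannon's estimate)
    compressed_bits = corpus_bytes * bits_per_char
    print(f"compressed size ~ {compressed_bits:.0e} bits")   # ~1e+09 bits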

Yet a human brain has 10^11 neurons and 10^15 synapses.  Why?

And some of the Blue Brain research suggests it is even worse.  A mouse
cortical column of 10^5 neurons is about 10% connected, but the neurons are
arranged such that connections could be formed between any pair of neurons.
Extending this idea to the human brain, with 10^6 columns of 10^5 neurons
each, each column should be modeled as a 10^5 by 10^5 sparse matrix, 10%
filled.  That is about 10^9 synapses per column, or 10^15 synapses in all; at
roughly 10 bits per synapse, the model requires about 10^16 bits.
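Here is that arithmetic spelled out (a minimal sketch; the 10 bits per synapse
is my assumption, chosen to cover a weight plus sparse indexing overhead):

    # Memory estimate for modeling each cortical column as a 10% filled
    # 10^5 x 10^5 sparse connection matrix.
    columns = 1e6                  # cortical columns in the human brain
    neurons_per_column = 1e5       # neurons per column
    fill = 0.10                    # fraction of possible connections formed

    synapses_per_column = fill * neurons_per_column ** 2    # ~1e9
    total_synapses = columns * synapses_per_column           # ~1e15
    bits_per_synapse = 10          # assumed: weight plus sparse index overhead
    total_bits = total_synapses * bits_per_synapse           # ~1e16
    print(f"synapses ~ {total_synapses:.0e}, model ~ {total_bits:.0e} bits")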

Perhaps there are ways to optimize neural networks by taking advantage of the
reliability of digital hardware, but over the last few decades researchers
have not found any.  Approaches that reduce the number of neurons or synapses,
such as connectionist systems and various weighted graphs, just haven't scaled
well.  Yes, I know Novamente and NARS fall into this category.

For narrow AI applications such as arithmetic, deductive logic, or playing
chess, we can usually find better algorithms than neural networks.  But none
of those algorithms is broadly applicable across domains such as language,
speech, vision, and robotics.

My work in text compression (an AI problem) is an attempt to answer the
question by measuring trends in intelligence (compression) as a function of
CPU and memory.  The best algorithms model mostly at the lexical level (the
level of a one-year-old child) with only a crude model of semantics and no
syntax.  Memory is so tightly constrained (at 2 GB) that modeling at a higher
level is mostly pointless.  The slope of the compression surface in
speed/memory space is steep along the memory axis.
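A minimal illustration of that kind of measurement (my own sketch using
Python's zlib rather than the large-memory compressors discussed above;
corpus.txt is a hypothetical text file):

    # Compression ratio as a function of the compressor's window size,
    # a crude stand-in for the memory axis of the speed/memory surface.
    import zlib

    data = open("corpus.txt", "rb").read()
    for wbits in (9, 11, 13, 15):   # window = 2**wbits bytes (512 B .. 32 KB)
        c = zlib.compressobj(9, zlib.DEFLATED, wbits)
        size = len(c.compress(data) + c.flush())
        print(f"window {2**wbits:6d} bytes: "
              f"{8 * size / len(data):.3f} bits per byte")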


-- Matt Mahoney, [EMAIL PROTECTED]
