[agi] Let's count neurons

2007-11-29 Thread Dennis Gorelik
Matt,


> And some of the Blue Brain research suggests it is even worse.  A mouse
> cortical column of 10^5 neurons is about 10% connected,

What does "10% connected" mean?
How many connections does an average mouse neuron have?
1?

> but the neurons are arranged such that connections can be formed
> between any pair of neurons.  Extending this idea to the human brain, with 
> 10^6 columns of 10^5 neurons
> each, each column should be modeled as a 10^5 by 10^5 sparse matrix,

Only a poor design would require a "10^5 by 10^5 matrix" if every neuron
has to connect to only 1 other neurons.

One pointer into a 2^17 (131,072) address space requires 17 bits.
1 connections require 17 bits.
If we want to put a 4-bit weight on every connection, then it
would be 85,000 bytes per neuron.
85,000 * 1 neurons = 8.5 * 10^9 bytes = 8.5 GB (hard disks of that
size were available in PCs ~10 years ago).


But in fact a mouse's brain does far more than an AGI has to do.
For example, a mouse has strong image and sound recognition abilities.
AGI doesn't require that.
A mouse has to manage its muscles at a very fast pace.
AGI doesn't need that.
All these unnecessary features consume the lion's share of the mouse's brain.
A mouse must function in a far more stressful environment than an AGI must.
That again makes the mouse's brain bigger than an AGI has to be.


> Perhaps there are ways to optimize neural networks by taking advantage of the
> reliability of digital hardware, but over the last few decades researchers
> have not found any.

Researchers have not found appropriate intelligent algorithms. That
doesn't mean that the hardware is insufficient.

> For narrow AI applications, we can usually find better algorithms than neural
> networks, for example, arithmetic, deductive logic, or playing chess.  But
> none of these other algorithms are so broadly applicable to so many different
> domains such as language, speech, vision, robotics, etc.

Do you imply that an intelligent algorithm must be universal across
"language, speech, vision, robotics, etc."?
In humans that's just not the case.
Different algorithms are responsible for vision, speech, language,
body control, etc.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=70650748-f0eed8


Re: [agi] Let's count neurons

2007-11-30 Thread Matt Mahoney

--- Dennis Gorelik <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> 
> > And some of the Blue Brain research suggests it is even worse.  A mouse
> > cortical column of 10^5 neurons is about 10% connected,
> 
> What does "10% connected" mean?
> How many connections does average mouse neuron have?
> 1?

According to the Blue Brain project, 8000 synapses per neuron.  The simulation
used 6300.  The one-hour video at
http://video.google.com/videoplay?docid=-2874207418572601262 is a talk on the
project.  According to the presentation, every axon in a cortical column
passes within about one synapse width of dendrites from every other neuron,
implying that a connection could potentially form.  About 10% were
actually connected with synapses.

> > but the neurons are arranged such that connections can be formed
> > between any pair of neurons.  Extending this idea to the human brain, with
> 10^6 columns of 10^5 neurons
> > each, each column should be modeled as a 10^5 by 10^5 sparse matrix,
> 
> Only poor design would require "10^5 by 10^5 matrix" if every neuron
> has to connect only to 1 other neurons.
> 
> One pointer to 2^17 (131072) address space requires 17 bits.
> 1 connections require 17 bits.
> If we want to put 4 bit weighting scale on every connection, then it
> would be 85000 bytes.
> 85000 * 1 neurons = 8.5 * 10^9 bytes = 8.5 GB (hard disks of that
> size were available on PCs ~10 years ago).

Using pointers saves memory but sacrifices speed.  Random memory access is
slow due to cache misses.  By using a matrix, you can perform vector
operations very fast in parallel using SSE2 instructions on modern processors,
or a GPU.  By your own calculations, an array only takes twice as much space
as a graph.

> Do you imply that intelligent algorithm must be universal across
> "language, speech, vision, robotics, etc"?
> In humans it's just not the case.
> Different algorithms are responsible for vision, speech, language,
> body control etc.

Neural networks are useful for all of these problems.  Few other algorithms
have that property.  But that really shouldn't be surprising, considering we
are simulating something already done by neurons.  The experiments done with
neural networks (mostly in the 1980s) confirm the basic architecture, in
particular Hebb's rule, postulated in 1949 but not fully confirmed in animals
even today.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Let's count neurons

2007-11-30 Thread Bob Mottram
On 30/11/2007, Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> For example, mouse has strong image and sound recognition ability.
> AGI doesn't require that.
> Mouse has to manage its muscles in a very high pace.
> AGI doesn't need that.


I'm not convinced that it is yet possible to make categorical
assertions of this kind.  It could well turn out that spatial
representations derived from visual processing are essential to some
kinds of thought and analogy, without which an AGI would be
cognitively impaired.  Many mathematicians are said to possess
enhanced spatial reasoning ability.

Brains fundamentally evolved to move creatures around - an internal
guidance mechanism - and there's a close relationship between movement
and perception.  Whether this will also need to be the case for an AGI,
I don't know.



Re: [agi] Let's count neurons

2007-11-30 Thread Bob Mottram
If you want to see what cortical columns actually look like, see
http://brainmaps.org  In some of these images you can clearly see what
appear to be neurons stacked on top of each other, perpendicular to the
cortical plane.  However, unlike the architecture of a computer, there
are no clear divisions between functional units.

As to functional homogeneity or otherwise, it's probably too early to
say, but there do appear to be substantial similarities between tasks
such as visual and auditory processing.  Both involve a kind of
spatio-temporal analysis of the incoming data stream.  The cortex at
least appears fairly homogeneous, but it's not until we can get
detailed structural models of entire brains that we'll be able to see
the fine wiring and make statements such as "vision involves different
circuitry from speech".





Re[2]: [agi] Let's count neurons

2007-11-30 Thread Dennis Gorelik
Matt,

> Using pointers saves memory but sacrifices speed.  Random memory access is
> slow due to cache misses.  By using a matrix, you can perform vector
> operations very fast in parallel using SSE2 instructions on modern processors,
> or a GPU.

I doubt it.
http://en.wikipedia.org/wiki/SSE2 doesn't even mention "parallel" or
"matrix".

Whatever performance advantages SSE2 provides, they will benefit both
architectures.

> By your own calculations, an array only takes twice as much space
> as a graph.

My own calculations allocated 4 bits for weights.
You are comparing that with a matrix that stores only the presence or
absence of a connection (1 bit).

The actual difference in size would be a factor of 10, since your matrix
is only 10% filled.

>> Do you imply that intelligent algorithm must be universal across
>> "language, speech, vision, robotics, etc"?
>> In humans it's just not the case.
>> Different algorithms are responsible for vision, speech, language,
>> body control etc.

> Neural networks are useful for all of these problems.

HTML is useful for all sorts of web sites.
Does that mean that all web sites are the same?

Yes, some sort of neural network should probably be used for the language,
voice, vision, and robotics modules.
But that doesn't mean that the implementation would be the same for
all of them. The differences would probably be quite big.





Re: Re[2]: [agi] Let's count neurons

2007-12-01 Thread Matt Mahoney

--- Dennis Gorelik <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> > Using pointers saves memory but sacrifices speed.  Random memory access is
> > slow due to cache misses.  By using a matrix, you can perform vector
> > operations very fast in parallel using SSE2 instructions on modern
> processors,
> > or a GPU.
> 
> I doubt it.
> http://en.wikipedia.org/wiki/SSE2 - doesn't even mention "parallel" or
> "matrix".

It also doesn't mention that one instruction performs 8 16-bit signed
multiply-accumulates in parallel, or various other operations: 16 x 8-bit,
8 x 16-bit, 4 x 32-bit (int or float), or 2 x 64-bit (double), all in 128-bit
registers.  To implement the neural network code in the PAQ compressor, I
wrote vector dot product code in MMX (4 x 16-bit, for older processors) that
is 6 times faster than optimized C/C++.  There is an SSE2 version too.

> Actual difference in size would be 10 times, since your matrix is only
> 10% filled.

For a 64K by 64K matrix, each pointer is 16 bits; at 10% connectivity that
averages 1.6 bits per matrix element.  I think for neural networks of that
size you could use 1-bit weights, making the dense matrix the smaller
representation.


-- Matt Mahoney, [EMAIL PROTECTED]
