I think the ratio of processing power to memory to bandwidth is just about 
right for AGI. Processing power and memory increase at about the same rate 
under Moore's Law. The time it takes a modern computer to clear all of its 
memory is on the same order as the response time of a neuron, and this ratio 
has not changed much since ENIAC and the Commodore 64. It would seem easier 
to increase processing density than memory density, but we are constrained 
by power consumption, heat dissipation, network bandwidth, and the lack of 
software and algorithms for parallel computation.
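A rough back-of-envelope check of that constancy (the machine specs below 
are my own approximate figures, chosen for illustration; clear time is just 
memory size divided by fill bandwidth):

    # Sketch: time to clear all of RAM = memory size / effective fill bandwidth.
    # All specs are rough assumptions, for illustration only.
    machines = {
        # name: (memory in bytes, fill bandwidth in bytes/sec)
        "Commodore 64": (64 * 1024, 1e6 / 5),  # ~5 CPU cycles/byte at 1 MHz
        "486 PC":       (8 * 2**20, 20e6),     # ~20 MB/s memory bus
        "2008 PC":      (2 * 2**30, 5e9),      # ~5 GB/s DDR2
    }
    for name, (mem, bw) in machines.items():
        print(f"{name:<12} {mem / bw:.2f} s to clear memory")

All three come out in the few-hundred-millisecond range, even though memory 
size and bandwidth each grew by a factor of roughly 10^5 over that span.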

Bandwidth is about right too. A modern PC can simulate about 1 mm^3 of brain 
tissue with 10^9 synapses at 0.1 ms resolution or so. Nerve fibers have a 
diameter around 1 or 2 microns, so about 10^6 of them cross each face of a 
1 mm cube. At roughly 10 bits per second each, that is about 10 Mb/s. 
Similar calculations for larger cubes show locality: fiber count scales with 
surface area, so the bandwidth crossing the boundary grows only as 
O(n^(2/3)) in the simulated volume n. This could be handled by an Ethernet 
cluster with a high speed core using off the shelf hardware.
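The scaling is easy to tabulate (a minimal sketch using the same assumed 
figures: ~1 micron fiber pitch, ~10 bits/s per fiber):

    # Bandwidth crossing one face of a cube of simulated brain tissue.
    # Assumed: ~10^6 fibers per mm^2 of cross-section, ~10 bits/s per fiber.
    for side_mm in (1, 10, 100):
        fibers = (side_mm * 1000) ** 2   # 1 micron pitch -> 10^6 per mm^2
        bits_per_s = fibers * 10
        print(f"{side_mm:4d} mm cube: {bits_per_s / 1e6:10,.0f} Mb/s per face")

A 1 mm cube needs ~10 Mb/s, a 10 mm cube ~1 Gb/s, a 100 mm cube ~100 Gb/s. 
Traffic grows with the surface area while the simulated volume (and the 
compute it needs) grows with the cube of the side, which is what makes a 
commodity cluster plausible.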

I don't know if it is coincidence that these three technologies are in the 
right ratio, or if it is driven by the needs of software that complements 
the human mind.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Thu, 6/12/08, Derek Zahn <[EMAIL PROTECTED]> wrote:
From: Derek Zahn <[EMAIL PROTECTED]>
Subject: RE: [agi] IBM, Los Alamos scientists claim fastest computer
To: agi@v2.listbox.com
Date: Thursday, June 12, 2008, 11:36 AM

Two things I think are interesting about these trends in high-performance 
commodity hardware:

1) The "flops/bit" ratio (processing power vs memory) is skyrocketing.  The 
move to parallel architectures makes the number of high-level "operations" per 
transistor go up, but bits of memory per transistor in large memory circuits 
don't go up.  The old "bit per op/s" or "byte per op/s" rules of thumb get 
really broken on things like Tesla (0.03 bit/flops).  Of course we don't know 
the ratio needed for de novo AGI or brain modeling, but the assumptions about 
processing vs memory certainly seem to be changing.
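For concreteness, here is the arithmetic behind that 0.03 figure, assuming a 
Tesla C870-class board (~500 Gflops single precision, 1.5 GB onboard; the 
post does not name a model, so these specs are assumptions):

    # Flops-to-memory ratio of a 2008 GPU vs. the old ~1 bit per op/s rule.
    peak_flops = 500e9           # ~500 Gflops single precision (assumed)
    mem_bits = 1.5 * 2**30 * 8   # 1.5 GB of onboard memory, in bits
    print(f"{mem_bits / peak_flops:.3f} bit/flops")  # prints 0.026

Against the old one-bit-per-op/s rule of thumb, that is a shortfall of about 
a factor of 40.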

2) Much more than previously, effective utilization of processor operations 
requires incredibly high locality (processing cores only have immediate access 
to very small memories).  This is also referred to as "arithmetic intensity".  
This of course is because parallelism causes "operations per second" to grow 
much faster than the achievable memory bandwidth to large banks.  
Perhaps future 3D layering techniques will help with this problem, but for now 
AGI paradigms hoping to cache in (yuk yuk) on these hyperincreases in FLOPS 
need to be geared to high arithmetic intensity.
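A roofline-style sketch makes the point concrete (device numbers are again 
assumed, roughly Tesla-class: ~500 Gflops peak, ~80 GB/s to device memory):

    # Attainable throughput = min(peak flops, intensity * memory bandwidth).
    peak_flops = 500e9  # device peak, flop/s (assumed)
    mem_bw = 80e9       # memory bandwidth, bytes/s (assumed)
    for intensity in (0.25, 1.0, 6.25, 25.0):  # flops per byte moved
        attained = min(peak_flops, intensity * mem_bw)
        print(f"{intensity:5.2f} flop/byte -> {attained / 1e9:6.1f} Gflop/s")

Below the balance point of peak_flops / mem_bw (about 6 flops/byte here) the 
cores starve. A plain muladd over streamed 4-byte synapse values is around 
0.25 flop/byte, deep in the bandwidth-bound regime.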

Interestingly (to me), these two things both imply that we get to increase 
the complexity of neuron and synapse models beyond the "muladd/synapse + 
simple activation function" model with essentially no degradation in 
performance, since the bandwidth of propagating values between neurons is 
the bottleneck much more than local processing inside the neuron model.
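To see how much headroom that gives, a minimal sketch under the same assumed 
device numbers: if the synapse stream saturates memory bandwidth, extra 
flops per synaptic event cost nothing until the balance point is reached.

    # How many flops per synaptic event are "free" when bandwidth-bound?
    mem_bw = 80e9   # bytes/s (assumed)
    payload = 4     # bytes per propagated synaptic value (assumed)
    events = mem_bw / payload  # ~2e10 synaptic events/s, fixed by bandwidth
    for flops_per_event in (2, 8, 32):  # bare muladd ... richer synapse model
        needed = events * flops_per_event
        bound = "compute-bound" if needed > 500e9 else "still bandwidth-bound"
        print(f"{flops_per_event:3d} flops/event -> "
              f"{needed / 1e9:5.0f} Gflop/s ({bound})")

On these numbers, roughly an order of magnitude more arithmetic per synapse 
than a bare muladd costs nothing in throughput.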



