On Wednesday 14 March 2007 08:05, Eugen Leitl wrote:
> You might find the authors have a bit more credibility than
> Moravec, and especially such a notorious luminary like Kurzweil
> http://www.kurzweiltech.com/aboutray.html

Besides writing books, Kurzweil builds systems that work.

> I'm not actually just being flippant, the AI crowd has a rather
> bad case of number creep as far as estimates are concerned.
> You can assume 10^23 ops/s on 10^17 sites didn't come out of the
> brown, er, blue.

I find it completely ridiculous. 1e30 ops total works out to 1000 cubic meters 
of Drexler nanocomputers. Knowing the relative power densities of biological 
and engineered nanosystems as I (and you) do, I claim there's no way the 
brain can be doing anywhere close to the actual, applied computation of even 
its own volume in nanocomputers, much less a million times as much.
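
A quick back-of-envelope in Python, if you want to check it. The ~1e27 
ops/s per cubic meter density is just what the 1e30-in-1000-m^3 figure 
implies, and the ~1.3 liter brain volume is my round number, not anything 
from the quoted thread:

    # Sanity check of the "1000 cubic meters of nanocomputers" claim.
    # Assumptions (mine): ~1e27 ops/s per m^3 for Drexler-style hardware,
    # brain volume ~1.3e-3 m^3 (~1.3 liters).
    claimed_ops  = 1e30      # ops/s figure being criticized
    nano_density = 1e27      # ops/s per m^3 of nanocomputers (assumed)
    brain_volume = 1.3e-3    # m^3 (assumed)

    volume_needed = claimed_ops / nano_density      # ~1000 m^3
    brain_equiv   = nano_density * brain_volume     # ~1.3e24 ops/s
    print("volume implied by 1e30 ops/s:        %.0f m^3" % volume_needed)
    print("brain's own volume of nanocomputers: %.1e ops/s" % brain_equiv)
    print("claim / brain-volume machine:        %.1e x" % (claimed_ops / brain_equiv))

That last ratio comes out around a million, which is where the "million 
times as much" above comes from.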

I would confidently undertake to simulate the brain at a level where each cell 
is a system of ODEs with 1e30 ops -- give me 1e36 and I'll give you 
molecular dynamics. I think the neuroscientists are confusing what it takes 
to simulate a system with the amount of useful work it performs.
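
To see how lavish a 1e30-op budget is for cell-level ODEs, divide it out. 
The neuron and synapse counts below are the usual order-of-magnitude 
figures, my assumption rather than anything in the quoted post:

    # Per-cell and per-synapse share of a 1e30 ops/s budget.
    budget   = 1e30    # total ops/s on the table
    neurons  = 1e11    # ~100 billion neurons (assumed round number)
    synapses = 1e14    # ~100 trillion synapses (assumed round number)

    print("ops/s per neuron:  %.1e" % (budget / neurons))     # ~1e19
    print("ops/s per synapse: %.1e" % (budget / synapses))    # ~1e16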

Using a ratio of 1e6 for power density/speed of a nanoengineered system to 
bio, which is fairly low for some of the mechanical designs, the brain should 
be doing about as much computation as a microliter of nanocomputers, or about 
1e18 ops. Given the error bars for this estimation method, i.e. several 
orders of magnitude, I'd say this matches Kurzweil's number fine.
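
Spelled out, using the same assumed brain volume and nanocomputer density 
as above:

    # Brain volume scaled down by the 1e6 bio-to-nano ratio, then converted
    # to ops/s at the assumed nanocomputer density.
    brain_volume = 1.3e-3    # m^3, ~1.3 liters (assumed)
    ratio        = 1e6       # nano/bio power-density ratio from above
    nano_density = 1e27      # ops/s per m^3 (assumed, as above)

    equiv_volume = brain_volume / ratio           # ~1.3e-9 m^3, ~1 microliter
    brain_ops    = equiv_volume * nano_density    # ~1.3e18 ops/s
    print("equivalent nanocomputer volume: %.1e m^3 (~1 microliter)" % equiv_volume)
    print("implied brain computation:      %.1e ops/s" % brain_ops)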

> > 'Fraid I left my gigacluster in my other pants today.
>
> What if you're going to need it? Seriously, with ~20 GByte/s
> memory bandwidth you won't get a lot of refreshes/s on your
> few GBytes.

I'll just have to wait till those damned lab rats get nanotech working. That 
microliter machine (= 1 cubic millimeter) should have ~1e18 B/s memory 
bandwidth. If that won't do it we can vary the design to use CAM 
(content-addressable memory) and get 1e24 compares per second. But I doubt 
we'll need it.
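
To put that next to the refresh-rate worry above -- the "few GBytes" 
working set is taken from Eugen's wording, and everything else here is 
illustrative:

    # Full passes over a working set per second = bandwidth / working-set size.
    def refreshes_per_sec(bandwidth_Bps, working_set_B):
        return bandwidth_Bps / working_set_B

    # Today's box: ~20 GB/s over, say, 4 GB of RAM.
    print("today (20 GB/s over 4 GB):               %.0f passes/s"
          % refreshes_per_sec(20e9, 4e9))
    # Microliter machine: ~1e18 B/s over the same few GB.
    print("microliter machine (1e18 B/s over 4 GB): %.1e passes/s"
          % refreshes_per_sec(1e18, 4e9))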

Back to the present: "Amdahl's Rule of Thumb" puts memory size (bytes) and 
memory bandwidth (bytes/s) roughly equal to ops per second for a balanced 
conventional computer system. I conjecture that an AI-optimized system may 
need to be processor-heavy by a factor of 10, i.e. able to look at every word 
in memory in 100 ms, while still being able to overlay memory from disk in 
1 sec. We're looking at needing memory the size of a fairly average database, 
but in RAM.
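
One way to cash that out numerically -- this is my reading of 
"processor-heavy by a factor of 10", using the ~1e18 ops/s figure derived 
above as the human-equivalent target; a sketch, not a spec:

    # Balanced-system sizing per the Amdahl rule of thumb, skewed
    # processor-heavy by 10x as conjectured above.
    def sizing(ops_per_sec, skew=10.0):
        ram_bytes = ops_per_sec / skew      # 10x less RAM than "balanced"
        ram_bw    = ops_per_sec             # bytes/s ~ ops/s (Amdahl)
        scan_time = ram_bytes / ram_bw      # one full pass over RAM
        disk_bw   = ram_bytes / 1.0         # overlay all of RAM from disk in 1 s
        return ram_bytes, ram_bw, scan_time, disk_bw

    ram, bw, scan, disk = sizing(1e18)
    print("RAM:            %.0e bytes" % ram)     # 1e17
    print("RAM bandwidth:  %.0e B/s"   % bw)      # 1e18
    print("full RAM scan:  %.1f s"     % scan)    # 0.1 s = 100 ms
    print("disk bandwidth: %.0e B/s"   % disk)    # 1e17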

Bottom line: a HEPP (human-equivalent processing power) machine for $1M 
today, $1K in a decade, but only if we have understood and optimized the 
software.

Let's get to work.

Josh

