On Wednesday 14 March 2007 15:30, Eugen Leitl wrote:

> The reason Drexler proposed scaling down the Difference Engine is not
> because he considered them practical, but because they're easy to analyze.

But more to the point, to put a LOWER bound on the computational capacity of
nanosystems.

> I'm not sure why you're looking at 10^30 ops btw, my number is some 10^23.

Ahh -- I read you wrong; I thought you meant 1e17 sites with 1e23 ops apiece.

> The point is that we don't know exactly how the brain does it, so I put
> some reasonable numbers on the hardware required to roughly track what a
> given biological system is doing.

That makes you a scientist rather than an engineer in my book. Imagine that 
you looked at muscle cells and didn't know that the whole business was 
designed simply to exert tension along the muscle's length. Hey, maybe it's a 
gland for generating lactic acid, or for heating and deoxygenating blood. By the time 
you built a machine that could duplicate all the things a muscle *might* be 
doing when examined at the cellular level, you'd have filled a whole 
industrial complex. But the engineer says, let's build a gadget that pulls, 
and see if that's good enough. 

I'm willing to bet a few years' work on my guess as to what the brain is 
designed to accomplish.

> > Bottom line: HEPP for $1M today, $1K in a decade, but only if we have
> > understood and optimized the software.
>
> Do you think what your brain does (what is not required for housekeeping)
> is grossly inefficient, in terms of operations, not in comparison to
> some semi-optimal computronium, and that it can be optimized (by skipping
> all those NOPs, probably)? I'm not quite that confident, I must admit.

Actually it's quite efficient compared with current technology -- remember the 
full rack and 10 kW for current-day equivalence. It's not that the brain is 
lousy -- it's the most amazing machine that exists today. It's that 
technology is catching up with breathtaking speed. (Ops/watt has increased by 
a factor of TEN in the past year.)
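
(To make that concrete, here's a rough back-of-envelope in Python, using only 
the figures already on the table -- the $1M-now/$1K-in-a-decade projection and 
the 10x/year ops/watt jump. The extrapolation is illustrative arithmetic, not 
a prediction beyond what's stated above.)

import math

# Figures quoted above: HEPP (human-equivalent processing power) for ~$1M
# today, ~$1K projected in a decade, on a full rack drawing ~10 kW.
cost_today = 1_000_000      # dollars
cost_in_decade = 1_000      # dollars
years = 10

# Implied average yearly price/performance improvement for that projection.
annual_factor = (cost_today / cost_in_decade) ** (1 / years)
print(f"implied improvement: ~{annual_factor:.2f}x per year")   # ~2x per year

# If ops/watt really kept improving 10x per year (last year's figure),
# the same 1000x drop would take only log10(1000) = 3 years.
years_at_10x = math.log10(cost_today / cost_in_decade)
print(f"at a sustained 10x/year: ~{years_at_10x:.0f} years")

Either way, the gap closes fast; the only quibble is whether the 1000x comes 
in three years or ten.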

> I'm however quite confident that there is no simple theory lurking in
> there which can be written down as a neat set of equations on a sheet
> of paper, or even a small pile of such. So there's not much to understand,
> and very little to optimize.

I'll go out on a limb and conjecture that an AI can be fully described in less 
than a megabyte of the appropriate formalism. (Allow 10 MB if you want to 
implement the formalism in existing low-level languages.)

Josh
