Josh,

On 5/30/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:

> You'd get a hell of a lot better resolution with an e-beam blowing up
> nanometer-sized spots, and feeding the ejecta thru a mass spectrometer.


Yes, but all your spectrometer will see is hydrogen, oxygen, and carbon. The
e-beam will destroy the very complex molecules that make neurons do what
they do, whereas their fluorescence can be observed at the resolution of UV
optics, even in living tissue.

Further, e-beams are WAY too slow. It would take more than a lifetime to
scan out an entire brain, because each spot must be blown up separately and
its ejecta flushed out before the next spot can be done, whereas a scanning
UV fluorescence microscope can observe millions of isolated spots at a time.
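
Just to put rough numbers on that, here is a back-of-envelope sketch in
Python (every figure below is my own illustrative assumption, not a
measurement):

    # Back-of-envelope comparison of serial spot-by-spot scanning versus
    # massively parallel scanning. All numbers are assumptions chosen only
    # to illustrate the scaling.
    BRAIN_VOLUME_NM3 = 1.25e24   # roughly a 1.25-liter brain, in nm^3
    SPOT_VOLUME_NM3 = 10.0 ** 3  # assume 10-nm spots
    spots = BRAIN_VOLUME_NM3 / SPOT_VOLUME_NM3

    SERIAL_RATE = 1e6   # assumed spots per second, one spot at a time
    PARALLELISM = 1e6   # assumed simultaneous spots for the UV microscope

    YEAR = 365.25 * 24 * 3600
    print(spots / SERIAL_RATE / YEAR)                  # serial: ~4e7 years
    print(spots / (SERIAL_RATE * PARALLELISM) / YEAR)  # parallel: ~40 years

Under those assumptions, doing the spots one at a time costs six orders of
magnitude more wall-clock time than doing millions at once.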

Note the following complexity constant:

Take a computer capable of simulating a brain in real time, neuron by
neuron and synapse by synapse, and instead load it with the software
needed to reconstruct the logical diagram of a brain from the images of a
scanning UV fluorescence microscope, e-beam imager, etc. It will take the
SAME amount of time to reconstruct the logical diagram, regardless of the
complexity of the brain. Why? A brain that is twice as complex will need
twice as much computing power to simulate it, and with twice the computing
power, the computer can reconstruct a brain twice as complex in the same
amount of time. Hence, this constant is in units of time. While it is WAY
too early to make any accurate estimate of this constant, it may not be
too early to guess its order of magnitude. My own estimate is on the order
of one month. Of course, a mass-production operation could combine several
such computers to accomplish the task in less time. I hereby dub this
constant "Richfield's Constant". In any case, once it becomes possible to
diagram brains, it will NOT take all that long to get the first diagrams.
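
To make the cancellation explicit, here is a minimal sketch (the linear
scaling is the assumption in the argument above; the constants and the
function name are mine, purely illustrative):

    # Minimal sketch: if both the reconstruction work and the computing
    # power of a real-time brain simulator scale linearly with brain
    # complexity, the complexity cancels and the reconstruction time is
    # constant. All constants are illustrative assumptions.
    def reconstruction_time(complexity, work_per_unit=1.0, power_per_unit=1.0):
        work = work_per_unit * complexity     # total reconstruction work
        power = power_per_unit * complexity   # power of a real-time simulator
        return work / power                   # = work_per_unit / power_per_unit

    # Same answer for a mouse-scale and a human-scale brain:
    print(reconstruction_time(1e8), reconstruction_time(1e11))

In this toy model, the ratio work_per_unit / power_per_unit, which has
units of time, plays the role of the constant.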



> See my talk a couple of years back at Alcor.


It's good to see our participants are making it around to other forums.



> But I would suggest that this is
> *waaaay* off-topic for this list...


Is it? I guess the REAL question is whether our "prime directive" here is to
1.  create and/or evaluate super intelligences, regardless of what may be
needed to achieve this goal (which seems the most logical to me), or
2.  hack away at code as AI has been doing for the last 40 years, in the
hope that a breakthrough will be forthcoming without much new information
(which seems a nearly hopeless venture, and one whose results no one will
even appreciate once the scanning information DOES become available)?

This all reminds me of one mathematician's project of estimating pi by
dropping a needle onto a hardwood floor with slats the same width as the
needle, and keeping track of how often the needle crossed a crack (the
classic Buffon's needle experiment). He did this many thousands of times,
during which better ways of calculating pi were developed by other
mathematicians. His estimate was about as good as he had expected, but it
was of absolutely no use to anyone except as a historical note. I suspect
that present pre-scanning AGI efforts will suffer exactly the same fate,
and for much the same reasons.
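
For anyone who wants to relive that experiment in seconds rather than
years, here is a minimal simulation (the function name and trial count
are mine):

    import math
    import random

    def buffon_pi(trials, needle=1.0, spacing=1.0):
        # With the needle the same length as the slat width, the crossing
        # probability is 2/pi, so pi ~= 2 * trials / crossings.
        crossings = 0
        for _ in range(trials):
            center = random.uniform(0.0, spacing / 2.0)  # distance to crack
            theta = random.uniform(0.0, math.pi / 2.0)   # needle angle
            if center <= (needle / 2.0) * math.sin(theta):
                crossings += 1
        return 2.0 * needle * trials / (spacing * crossings)

    print(buffon_pi(1000000))  # converges, slowly, toward 3.14159...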


> uploading implications to the contrary
> notwithstanding.


???

Steve Richfield


