Re: [agi] Reverse Engineering The Brain

2008-06-06 Thread Richard Loosemore

Steve Richfield wrote:

Richard,

On 6/5/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:



There are two completely different types of project that seem to get
conflated in these discussions:

1) Copying the brain at the neural level, which is usually assumed
to be a 'blind' copy - in other words, we will not know how it
works, but will just do a complete copy and fire it up.

 
I suspect that we will have to learn a LOT more to be able to make 
something like this work, in part because we will need new theory in 
order to compute parameters that we cannot directly measure.
 
1.5) Combining scanned information with mathematical constraints to 
produce diagrams of "perfect" neurons, even though the precise 
parameters of the real-world neurons are not fully scannable.


What scanned information?  What mathematical constraints?  What 
'perfect' neurons?


The problem is that all of this requires you to do work to understand 
how the system is functioning, because you cannot do something like 
build a 'perfect' neuron unless you know what its functional role is, 
and to do that you need to go right up into the high-level description 
of the system ... and that means, in the end, that you have to do the 
entire 'cognitive level' description of the brain *first*, then use it 
to understand how neurons are being used (what functional role they are 
playing).


For example:  does the precise morphology of the dendritic tree matter 
to the functioning of the neuron?  Do you need to scan in this 
information in complete detail?  I don't think you are going to be able 
to answer this question until after you have understood how the signals 
exchanged 
by neurons are being used (high-level stuff).


Let me try to explain with an analogy.  You are duplicating a space 
shuttle without understanding how it works.  You want to know if you can 
use chewing gum for O-ring seals.  Chewing gum is great, although it 
does become hard and very brittle in cold weather... but 
since you do not know what functional role these O-ring seals are 
playing in the design of the whole system, you decide that maybe it is 
okay to use chewing gum.


So, I don't disagree that there could be a 1.5 approach, but I see no 
way that it is significantly different from approach 2.





2) Copying the design of the human brain at the cognitive level.
 This may involve a certain amount of neuroscience, but mostly it
will be at the cognitive system level, and could be done without
much reference to neurons at all.

 
The last 40 years of fruitless AI shows this to be pretty much a dead 
end. There are simply too many questions that we don't even know enough 
to ask.


This is not true.  The last 40 years of AI have been almost completely 
unrelated to this 'cognitive' approach.  Over the years, the vast 
majority of AI researchers have subscribed to the following credo: "We 
intend to build an intelligent system, but although we might take some 
ideas or inspiration from how the human mind works, we feel no 
obligation to copy the human design because we believe that intelligence 
does not have to be done that way."


I was specifically drawing a distinction between two different ways to 
build an intelligence in a way that stays close to the human design. 
The regular AI approach is neither of these two.



2.5) First understand how we think with neurons, then program computers 
to perform the same or better directly, without reference to neurons or 
their equivalents.


This misses the point.  Cognitive level approaches do not have to reduce 
anything to neurons (at least, not in a significant way), so starting 
with understanding "how we think with neurons" doesn't make much sense. 
 If you leave out the specific reference to neurons, what you have is 
the cognitive level again.




Both of these ideas are very different from standard AI, but they
are also very different from one another.  The criticisms that can
be leveled against the neural-copy approach do not apply to the
cognitive approach, for example.

 
My more "real" 1.5 and 2.5 proposals require nearly the same levels of 
understanding, and ultimately lead to very similar results as 
"simulation" gives way via optimization to the same sort of code as 
direct AGI programming would utilize. In short, I suspect that both 
paths will ultimately lead to approximately the same final result. Sure 
we can argue about which path is best, but "easiest wins" usually rules.


You are not addressing the distinction that I made, though.


It is frustrating to see commentaries that drift back and forth
between these two.

My own position is that a cognitive-level copy is not just feasible
but well under way, whereas the idea of duplicating the neural level
is just a pie-in-the-sky fantasy at this point in time (it is not
possible with current or on-the-horizon technology, a

Re: [agi] Reverse Engineering The Brain

2008-06-06 Thread Richard Loosemore

J Storrs Hall, PhD wrote:
basically on the right track -- except there isn't just one "cognitive level". 
Are you thinking of working out the function of each topographically mapped 
area a la DNF? Each column in a Darwin machine a la Calvin? Conscious-level 
symbols a la Minsky?


Of course you can make finer distinctions, and different people use the 
term "cognitive" in different ways.  My usage of the term is coextensive 
with the usage in cognitive science and cognitive psychology, but that 
covers a multitude of sins.


To the extent that an approach tries to embrace what is known about 
human cognition it would be "cognitive", but if it took little notice of 
that, it would not.  Regular AI does not take much account of human 
cognition.  Neuroscience (even 'cognitive' or 'computational' 
neuroscience) takes a very superficial attitude toward all things 
cognitive, even when it says that it is doing otherwise (a sore point in 
the literature, right now).


But anything that takes significant account of cognition is very 
different from an approach that involves scanning a brain and trying to 
make a copy without understanding exactly how it works.  It is that 
enormous gap that I was pointing to, and the fact that there are many 
different ways of taking a significant account of cognition does not 
make much difference to that gap.




Richard Loosemore

On Thursday 05 June 2008 09:37:00 pm, Richard Loosemore wrote:
There seems to be a good deal of confusion (on this list and also over 
on the Singularity list) about what people actually mean when they talk 
about building an AGI by emulating or copying the brain.


There are two completely different types of project that seem to get 
conflated in these discussions:


1) Copying the brain at the neural level, which is usually assumed to be 
a 'blind' copy - in other words, we will not know how it works, but will 
just do a complete copy and fire it up.


2) Copying the design of the human brain at the cognitive level.  This 
may involve a certain amount of neuroscience, but mostly it will be at 
the cognitive system level, and could be done without much reference to 
neurons at all.



Both of these ideas are very different from standard AI, but they are 
also very different from one another.  The criticisms that can be 
leveled against the neural-copy approach do not apply to the cognitive 
approach, for example.


It is frustrating to see commentaries that drift back and forth between 
these two.


My own position is that a cognitive-level copy is not just feasible but 
well under way, whereas the idea of duplicating the neural level is just 
a pie-in-the-sky fantasy at this point in time (it is not possible with 
current or on-the-horizon technology, and will probably not be possible 
until after we invent an AGI by some other means and get it to design, 
build and control a nanotech brain scanning machine).


Duplicating a system as complex as that *without* first understanding it 
at the functional level seems pure folly:  one small error in the 
mapping and the result could be something that simply does not work ... 
and then, faced with a brain-copy that needs debugging, what would we 
do?  The best we could do is start another scan and hope for better luck 
next time.




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Reverse Engineering The Brain

2008-06-05 Thread Steve Richfield
Richard,

On 6/5/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

>
> There are two completely different types of project that seem to get
> conflated in these discussions:
>
> 1) Copying the brain at the neural level, which is usually assumed to be a
> 'blind' copy - in other words, we will not know how it works, but will just
> do a complete copy and fire it up.


I suspect that we will have to learn a LOT more to be able to make something
like this work, in part because we will need new theory in order to compute
parameters that we cannot directly measure.

1.5) Combining scanned information with mathematical constraints to produce
diagrams of "perfect" neurons, even though the precise parameters of the
real-world neurons are not fully scannable.



> 2) Copying the design of the human brain at the cognitive level.  This may
> involve a certain amount of neuroscience, but mostly it will be at the
> cognitive system level, and could be done without much reference to neurons
> at all.


The last 40 years of fruitless AI shows this to be pretty much a dead
end. There are simply too many questions that we don't even know enough to
ask.

2.5) First understand how we think with neurons, then program computers to
perform the same or better directly, without reference to neurons or their
equivalents.

> Both of these ideas are very different from standard AI, but they are also
> very different from one another.  The criticisms that can be leveled against
> the neural-copy approach do not apply to the cognitive approach, for
> example.


My more "real" 1.5 and 2.5 proposals require nearly the same levels of
understanding, and ultimately lead to very similar results as "simulation"
gives way via optimization to the same sort of code as direct AGI
programming would utilize. In short, I suspect that both paths will
ultimately lead to approximately the same final result. Sure we can argue
about which path is best, but "easiest wins" usually rules.

It is frustrating to see commentaries that drift back and forth between
> these two.
>
> My own position is that a cognitive-level copy is not just feasible but
> well under way, whereas the idea of duplicating the neural level is just a
> pie-in-the-sky fantasy at this point in time (it is not possible with
> current or on-the-horizon technology, and will probably not be possible
> until after we invent an AGI by some other means and get it to design, build
> and control a nanotech brain scanning machine).


There is nothing in the above sentence that I can agree with, from which to
state objections to the remainder! Some of it may turn out to be correct,
but too little is known, and no one is even building the lab equipment
needed to determine just WHAT the situation actually is. However, I believe
that the whole "thinking" thing involves processes that no one here will
EVER guess without learning more about biological brains - if nothing more
than the mathematics of their operation. However, your next paragraph asks
some of the right questions, showing that sometimes it is possible to get
to the correct place, even though the path there is severely flawed.

Duplicating a system as complex as that *without* first understanding it at
> the functional level seems pure folly:


I absolutely agree. So long as there is any sort of "unknown mathematics"
there is no hope.

one small error in the mapping and the result could be something that simply
> does not work ...


No, these MUST be correctable. SEM methods are unworkable because of the
high "disaster rate", as slices are often destroyed. However, my scanning UV
fluorescence microscope doesn't have such problems, because the scanning is
done entirely within the unsliced bulk brain; a layer is then sliced off and
discarded, and scanning within the remaining bulk brain continues.

Further, there will doubtless be parameters that evade scanning. SEM methods
trash the complex molecules that underlie neural function, and so have no
hope of success. However, even the UV fluorescence methods may prove to be
inadequate to extract everything needed. Hence IMHO there will have to be
lots of "fudging" as the scanner figures out what must have been there to
make it all work. This will obviously require better mathematics than we now
have.

and then, faced with a brain-copy that needs debugging, what would we do?


Debugging wetware is much the same as debugging software, only wetware is
MUCH more forgiving of errors, since neurons routinely die at a horrendous
rate even in "healthy" people.

The best we could do is start another scan and hope for better luck next
> time.


You can NOT rescan. You MUST get it right the first time. Even genetically
identical twins raised together have very different brains when you look at
the (visible light) microscopic details - as a half-century-old experiment
on identical twin lab mice showed.

Steve Richfield




Re: [agi] Reverse Engineering The Brain

2008-06-05 Thread J Storrs Hall, PhD
basically on the right track -- except there isn't just one "cognitive level". 
Are you thinking of working out the function of each topographically mapped 
area a la DNF? Each column in a Darwin machine a la Calvin? Conscious-level 
symbols a la Minsky?

On Thursday 05 June 2008 09:37:00 pm, Richard Loosemore wrote:
> 
> There seems to be a good deal of confusion (on this list and also over 
> on the Singularity list) about what people actually mean when they talk 
> about building an AGI by emulating or copying the brain.
> 
> There are two completely different types of project that seem to get 
> conflated in these discussions:
> 
> 1) Copying the brain at the neural level, which is usually assumed to be 
> a 'blind' copy - in other words, we will not know how it works, but will 
> just do a complete copy and fire it up.
> 
> 2) Copying the design of the human brain at the cognitive level.  This 
> may involve a certain amount of neuroscience, but mostly it will be at 
> the cognitive system level, and could be done without much reference to 
> neurons at all.
> 
> 
> Both of these ideas are very different from standard AI, but they are 
> also very different from one another.  The criticisms that can be 
> leveled against the neural-copy approach do not apply to the cognitive 
> approach, for example.
> 
> It is frustrating to see commentaries that drift back and forth between 
> these two.
> 
> My own position is that a cognitive-level copy is not just feasible but 
> well under way, whereas the idea of duplicating the neural level is just 
> a pie-in-the-sky fantasy at this point in time (it is not possible with 
> current or on-the-horizon technology, and will probably not be possible 
> until after we invent an AGI by some other means and get it to design, 
> build and control a nanotech brain scanning machine).
> 
> Duplicating a system as complex as that *without* first understanding it 
> at the functional level seems pure folly:  one small error in the 
> mapping and the result could be something that simply does not work ... 
> and then, faced with a brain-copy that needs debugging, what would we 
> do?  The best we could do is start another scan and hope for better luck 
> next time.
> 
> 
> 
> 
> 
> Richard Loosemore
> 
> 




Re: [agi] Reverse Engineering The Brain

2008-06-05 Thread Richard Loosemore


There seems to be a good deal of confusion (on this list and also over 
on the Singularity list) about what people actually mean when they talk 
about building an AGI by emulating or copying the brain.


There are two completely different types of project that seem to get 
conflated in these discussions:


1) Copying the brain at the neural level, which is usually assumed to be 
a 'blind' copy - in other words, we will not know how it works, but will 
just do a complete copy and fire it up.


2) Copying the design of the human brain at the cognitive level.  This 
may involve a certain amount of neuroscience, but mostly it will be at 
the cognitive system level, and could be done without much reference to 
neurons at all.



Both of these ideas are very different from standard AI, but they are 
also very different from one another.  The criticisms that can be 
leveled against the neural-copy approach do not apply to the cognitive 
approach, for example.


It is frustrating to see commentaries that drift back and forth between 
these two.


My own position is that a cognitive-level copy is not just feasible but 
well under way, whereas the idea of duplicating the neural level is just 
a pie-in-the-sky fantasy at this point in time (it is not possible with 
current or on-the-horizon technology, and will probably not be possible 
until after we invent an AGI by some other means and get it to design, 
build and control a nanotech brain scanning machine).


Duplicating a system as complex as that *without* first understanding it 
at the functional level seems pure folly:  one small error in the 
mapping and the result could be something that simply does not work ... 
and then, faced with a brain-copy that needs debugging, what would we 
do?  The best we could do is start another scan and hope for better luck 
next time.






Richard Loosemore






RE: [agi] Reverse Engineering The Brain

2008-06-05 Thread Ed Porter
Before we spend the money required to reverse engineer the brain --- we
should at least spend the much smaller amount of money necessary to explore
the very promising potential of Novamente-like machines running on the
equivalent of about 20 million dollars' worth of hardware at today's prices.

-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 05, 2008 5:01 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Reverse Engineering The Brain

Or, assuming we decided to spend the same on that as on the Iraq war ($1
trillion:
http://www.boston.com/news/nation/articles/2007/08/01/analysis_says_war_could_cost_1_trillion/),
at $1 million per scope and associated lab costs, giving a million scopes
==> 10^5 sec = 28 hours.

Which is more important?

On Thursday 05 June 2008 03:44:14 pm, Matt Mahoney wrote:
> --- On Thu, 6/5/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> 
> > http://www.spectrum.ieee.org/print/6268
> 
> Some rough calculations.  A human brain has a volume of 10^24 nm^3.  A
> scan of 5 x 5 x 50 nm voxels requires about 1000 exabytes = 10^21 bytes
> of storage (1 MB per synapse).  A scan would take a 10 GHz SEM 10^11
> seconds = 3000 years, or equivalently, 1 year for 3000 scanning electron
> microscopes running in parallel.
> 
> -- Matt Mahoney, [EMAIL PROTECTED]









RE: [agi] Reverse Engineering The Brain

2008-06-05 Thread Ed Porter
A very interesting paper.  

I am glad they are talking in terms of understanding consciousness by
reverse engineering the brain.  It supports my belief that consciousness
results from and is an essential aspect of the type of computation a human
mind does.

-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 05, 2008 3:07 PM
To: agi@v2.listbox.com
Subject: [agi] Reverse Engineering The Brain

http://www.spectrum.ieee.org/print/6268







Re: [agi] Reverse Engineering The Brain

2008-06-05 Thread J Storrs Hall, PhD
Or, assuming we decided to spend the same on that as on the Iraq war ($1 
trillion: 
http://www.boston.com/news/nation/articles/2007/08/01/analysis_says_war_could_cost_1_trillion/),
 
at $1 million per scope and associated lab costs, giving a million scopes
==> 10^5 sec = 28 hours.
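[Editorial note: the scope-count arithmetic above can be sketched as follows. This is a rough check, not part of the original post; the $1 million all-in cost per scope and the ~10^11 scope-seconds of total scanning work are taken from the figures quoted in this thread.]

```python
# Rough check of the parallel-scanning arithmetic above.
# Assumptions (from the posts): ~$1 trillion budget, ~$1 million
# all-in cost per scope plus lab, ~1e11 scope-seconds of total work.

budget_usd = 1e12                 # Iraq-war-scale budget
cost_per_scope_usd = 1e6          # per SEM plus associated lab costs
scopes = budget_usd / cost_per_scope_usd       # 1e6 scopes

total_scope_seconds = 1e11        # from Mahoney's scan estimate below
wall_clock_s = total_scope_seconds / scopes    # 1e5 seconds

print(f"{scopes:.0e} scopes -> {wall_clock_s:.0e} s "
      f"= {wall_clock_s / 3600:.0f} hours")    # ~28 hours
```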

Which is more important?

On Thursday 05 June 2008 03:44:14 pm, Matt Mahoney wrote:
> --- On Thu, 6/5/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> 
> > http://www.spectrum.ieee.org/print/6268
> 
> Some rough calculations.  A human brain has a volume of 10^24 nm^3.  A
> scan of 5 x 5 x 50 nm voxels requires about 1000 exabytes = 10^21 bytes
> of storage (1 MB per synapse).  A scan would take a 10 GHz SEM 10^11
> seconds = 3000 years, or equivalently, 1 year for 3000 scanning electron
> microscopes running in parallel.
> 
> -- Matt Mahoney, [EMAIL PROTECTED]




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Reverse Engineering The Brain

2008-06-05 Thread Matt Mahoney
--- On Thu, 6/5/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:

> http://www.spectrum.ieee.org/print/6268

Some rough calculations.  A human brain has a volume of 10^24 nm^3.  A scan of 
5 x 5 x 50 nm voxels requires about 1000 exabytes = 10^21 bytes of storage (1 
MB per synapse).  A scan would take a 10 GHz SEM 10^11 seconds = 3000 years, or 
equivalently, 1 year for 3000 scanning electron microscopes running in parallel.
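[Editorial note: the estimate above can be reproduced with a few lines of arithmetic. This sketch is not part of the original post; the 10 GHz one-voxel-per-cycle rate and the ~1.25 bytes/voxel storage scaling are assumptions chosen to match the figures quoted, and the quoted "3000 years" rounds the ~8e10 s result up to 10^11 s.]

```python
# Back-of-envelope reproduction of the scan estimate above.
# Assumptions (from the post): brain volume ~1e24 nm^3, voxels of
# 5 x 5 x 50 nm, and a 10 GHz SEM acquiring one voxel per cycle.

brain_volume_nm3 = 1e24
voxel_nm3 = 5 * 5 * 50                    # 1250 nm^3 per voxel
voxels = brain_volume_nm3 / voxel_nm3     # ~8e20 voxels

# Assumed ~1.25 bytes/voxel to match the quoted 1e21-byte total;
# with ~1e15 synapses per brain that is ~1 MB per synapse.
storage_bytes = voxels * 1.25             # ~1e21 bytes = ~1000 exabytes

sem_rate_hz = 10e9                        # 10 GHz
scan_seconds = voxels / sem_rate_hz       # ~8e10 s, quoted as ~1e11 s
scan_years = scan_seconds / 3.15e7        # ~2500 years (quoted: ~3000)

print(f"{voxels:.0e} voxels, {storage_bytes:.0e} bytes, "
      f"{scan_years:.0f} years for one SEM")
```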

-- Matt Mahoney, [EMAIL PROTECTED]




