[agi] Forget talk to the animals. Talk directly to their cells.

2008-06-06 Thread Brad Paulsen

All,

Not specifically AGI-related, but too interesting not to pass along, so:

SWEET NOTHINGS: ARTIFICIAL VESICLES AND BACTERIAL CELLS COMMUNICATE BY WAY OF SUGAR COMPONENTS 
http://www.physorg.com/news131883741.html 



Cheers,

Brad



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Reverse Engineering The Brain

2008-06-06 Thread Richard Loosemore

Steve Richfield wrote:

Richard,

On 6/5/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:



There are two completely different types of project that seem to get
conflated in these discussions:

1) Copying the brain at the neural level, which is usually assumed
to be a 'blind' copy - in other words, we will not know how it
works, but will just do a complete copy and fire it up.

 
I suspect that we will have to learn a LOT more to be able to make 
something like this work, in part because we will need new theory in 
order to compute parameters that we cannot directly measure.
 
1.5) Combining scanned information with mathematical constraints to 
produce diagrams of "perfect" neurons, even though the precise 
parameters of the real-world neurons are not fully scannable.


What scanned information?  What mathematical constraints?  What 
'perfect' neurons?


The problem is that all of this requires you to do work to understand 
how the system is functioning, because you cannot do something like 
build a 'perfect' neuron unless you know what its functional role is, 
and to do that you need to go right up into the high-level description 
of the system  and that means, in the end, that you have to do the 
entire 'cognitive level' description of the brain *first*, then use it 
to understand how neurons are being used (what functional role they are 
playing).


For example:  does the precise morphology of the dendritic tree matter 
to the functioning of the neuron?  Do you need to scan in this 
information in complete detail?  I don't think you are going to be able 
to answer this question until after you have understood how the signals 
exchanged by neurons are being used (high level stuff).


Let me try to explain with an analogy.  You are duplicating a space 
shuttle without understanding how it works.  You want to know if you can 
use chewing gum for O-ring seals.  Chewing gum is great, although it 
does become hard and very brittle in cold weather... but since you do 
not know what functional role these O-ring seals are playing in the 
design of the whole system, you decide that maybe it is okay to use 
chewing gum.


So, I don't disagree that there could be a 1.5 approach, but I see no 
way that it is significantly different from the 2 approach.





2) Copying the design of the human brain at the cognitive level.
 This may involve a certain amount of neuroscience, but mostly it
will be at the cognitive system level, and could be done without
much reference to neurons at all.

 
The last 40 years of fruitless AI show this to be pretty much a dead 
end. There are simply too many questions that we don't even know enough 
to ask.


This is not true.  The last 40 years of AI have been almost completely 
unrelated to this 'cognitive' approach.  Over the years, the vast 
majority of AI researchers have subscribed to the following credo: "We 
intend to build an intelligent system, but although we might take some 
ideas or inspiration from how the human mind works, we feel no 
obligation to copy the human design because we believe that intelligence 
does not have to be done that way."


I was specifically drawing a distinction between two different ways to 
build an intelligence in a way that stays close to the human design. 
The regular AI approach is neither of these two.



2.5) First understand how we think with neurons, then program computers 
to perform the same or better directly, without reference to neurons or 
their equivalents.


This misses the point.  Cognitive level approaches do not have to reduce 
anything to neurons (at least, not in a significant way), so starting 
with understanding "how we think with neurons" doesn't make much sense. 
 If you leave out the specific reference to neurons, what you have is 
the cognitive level again.




Both of these ideas are very different from standard AI, but they
are also very different from one another.  The criticisms that can
be leveled against the neural-copy approach do not apply to the
cognitive approach, for example.

 
My more "real" 1.5 and 2.5 proposals require nearly the same levels of 
understanding, and ultimately lead to very similar results, as 
"simulation" gives way via optimization to the same sort of code that 
direct AGI programming would utilize. In short, I suspect that both 
paths will ultimately lead to approximately the same final result. Sure, 
we can argue about which path is best, but "easiest wins" usually rules.


You are not addressing the distinction that I made, though.


It is frustrating to see commentaries that drift back and forth
between these two.

My own position is that a cognitive-level copy is not just feasible
but well under way, whereas the idea of duplicating the neural level
is just a pie-in-the-sky fantasy at this point in time (it is not
possible with current or on-the-horizon technology, and will probably
not be possible until after we invent an AGI by some other means and
get it to design, build and control a nanotech brain scanning machine).

Re: [agi] Reverse Engineering The Brain

2008-06-06 Thread Richard Loosemore

J Storrs Hall, PhD wrote:
basically on the right track -- except there isn't just one "cognitive level". 
Are you thinking of working out the function of each topographically mapped 
area a la DNF? Each column in a Darwin machine a la Calvin? Conscious-level 
symbols a la Minsky?


Of course you can make finer distinctions, and different people use the 
term "cognitive" in different ways.  My usage of the term is coextensive 
with the usage in cognitive science and cognitive psychology, but that 
covers a multitude of sins.


To the extent that an approach tries to embrace what is known about 
human cognition it would be "cognitive", but if it took little notice of 
that, it would not.  Regular AI does not take much account of human 
cognition.  Neuroscience (even 'cognitive' or 'computational' 
neuroscience) takes a very superficial attitude toward all things 
cognitive, even when it says that it is doing otherwise (a sore point in 
the literature, right now).


But anything that takes significant account of cognition is very 
different from an approach that involves scanning a brain and trying to 
make a copy without understanding exactly how it works.  It is that 
enormous gap that I was pointing to, and the fact that there are many 
different ways of taking a significant account of cognition does not 
make much difference to that gap.




Richard Loosemore
On Thursday 05 June 2008 09:37:00 pm, Richard Loosemore wrote:
There seems to be a good deal of confusion (on this list and also over 
on the Singularity list) about what people actually mean when they talk 
about building an AGI by emulating or copying the brain.


There are two completely different types of project that seem to get 
conflated in these discussions:


1) Copying the brain at the neural level, which is usually assumed to be 
a 'blind' copy - in other words, we will not know how it works, but will 
just do a complete copy and fire it up.


2) Copying the design of the human brain at the cognitive level.  This 
may involve a certain amount of neuroscience, but mostly it will be at 
the cognitive system level, and could be done without much reference to 
neurons at all.



Both of these ideas are very different from standard AI, but they are 
also very different from one another.  The criticisms that can be 
leveled against the neural-copy approach do not apply to the cognitive 
approach, for example.


It is frustrating to see commentaries that drift back and forth between 
these two.


My own position is that a cognitive-level copy is not just feasible but 
well under way, whereas the idea of duplicating the neural level is just 
a pie-in-the-sky fantasy at this point in time (it is not possible with 
current or on-the-horizon technology, and will probably not be possible 
until after we invent an AGI by some other means and get it to design, 
build and control a nanotech brain scanning machine).


Duplicating a system as complex as that *without* first understanding it 
at the functional level seems pure folly:  one small error in the 
mapping and the result could be something that simply does not work ... 
and then, faced with a brain-copy that needs debugging, what would we 
do?  The best we could do is start another scan and hope for better luck 
next time.






[agi] So the question is..

2008-06-06 Thread Mike Tintner

Here's the v. impressive thought-controlled Dean Kamen robotic arm:

http://blog.wired.com/gadgets/2008/05/dean-kamens-rob.html

The question is - given our recent discussion on the validity of experiments 
showing how words activated appropriate physical movement areas in the 
brain - how exactly do these brain-machine interfaces work? What signals are 
being picked up that enable these robot arms and hands to form precise 
shapes, and curl around balls, say? The two areas of "embodied mind" 
neuroscience and practical BMI work seem deeply related - and perhaps to 
touch on things like image/body schemas. Anyone know of any good links for 
further research here?





