Eugen,

> So you're engaging in a critique of a field you know very
> little about.

If you can't express yourself without gratuitous sarcasm and allegations like the above, you'll just be ignored.

In fact, you misunderstood pretty much everything I tried to say, so it would have been a huge amount of work for me to sort out the mess. I suppose I should thank you for being so rude that I don't need to bother. ;-)

Richard Loosemore.

Eugen Leitl wrote:
On Mon, Jan 22, 2007 at 01:11:57PM -0500, Richard Loosemore wrote:

> This debate about the relative merits of the AGI and the Brain Emulation methods of building an intelligence seems confused to me.

What is the Brain Emulation method? Are you talking about computational
neuroscience, or something?

> What exactly is meant by a "brain emulation" route anyway?

I'm not entirely sure (I haven't read it all yet), but the very beginning of this post strikes me as a desperate search for a strawman to demolish.

> Is it:
>
> A) Copy the exact structure and functioning of the brain's hardware, and along the way get a precise understanding of the functional architecture of the human brain, at all the various levels at which such an architecture needs to be understood.
>
> or
>
> B) Copy the exact structure and functioning of the brain's hardware, but ignore the architecture.
>
> ?

Why do you think these are mutually exclusive alternatives? What makes
you think there is such a thing as architecture in the human sense sitting
in there for you to copy a blueprint from?

> An illustration of the difference: You know nothing about electronics, but you get hold of an extremely complex radio, and want to build one by exactly "emulating" your example. Do you try to do your emulation

Um, wrong comparison. CNS doesn't require any new physics. Some approaches
start with atomically accurate models of compartments, which allows you
to reach down to an arbitrarily low level of theory in order to fetch missing
parameters. That's bottom up. Simultaneously, you have top-down empirical
data from neuron and tissue activity. You can use both to eliminate
the large but shrinking set of unknowns in the middle.
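
To make the bottom-up/top-down point concrete, a minimal single-compartment model might look like the Python sketch below. The channel densities and rate functions are textbook squid-axon Hodgkin-Huxley values, used purely as stand-ins for the "missing parameters" that would really be fetched from lower-level (e.g. MD) work, and the simulated voltage trace is the kind of output one would then fit against top-down recordings. This is an illustrative sketch, not any particular project's code.

# Minimal single-compartment Hodgkin-Huxley sketch (illustrative only).
import numpy as np

# Membrane and channel parameters (uF/cm^2, mS/cm^2, mV); textbook values.
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent rate constants for the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of one compartment under constant current."""
    steps = int(t_max / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting initial conditions
    trace = np.empty(steps)
    for i in range(steps):
        # Ionic currents given the present gating state.
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K  * n**4     * (V - E_K)
        I_L  = g_L             * (V - E_L)
        # Update membrane potential and gating variables.
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace[i] = V
    return trace

if __name__ == "__main__":
    v = simulate()
    print("peak membrane potential: %.1f mV" % v.max())  # spikes reach ~+40 mV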

> without ever trying to understand the functions of transistors? The

Do you think that an atomically accurate copy of a radio wouldn't work?

> functions of all the various hardware components? The general idea of transmission of radio signals? The modular structure of the radio set,

But the brain is not a radio set. Specifically, it's not a human-designed
artifact, and has different signatures.

> with its tuning, frequency multiplexing, amplitude demodulation and other circuits? Do you ignore the functioning of the radio with respect to the humans who use it? The existence and distribution of radio signal sources?

I don't understand your last two sentences. (In fact, I was going huh?
at a rate of about twice every sentence so far, but deconstructing your post
at this level would do no good so I won't).

> You could decide to care about all that stuff - that would be Route A - or you could ignore it and just emulate the thing by brute force, cubic micrometer by cubic micrometer - that would be Route B.

Of course some people do A, and some do B, and several others go for C and D.

> I presume that the brain emulation community is not being so daft as to try B ........ but honestly, when I read people talking about this, they

Actually, it is not at all daft to model a cubic micron or so of biology
from first principles, if you can extract nonobservable parameters (such
as the switching behaviour of a particular ion channel type, for instance)
from an MD-level simulation. Have you ever considered how to write a learning simulation that ascends by incrementally building upper abstraction
layers, co-evolving the hardware representation as it goes along?
It's certainly demanding, but not nearly as demanding as a full-blown AGI by explicit coding.
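
What such a learning simulation would actually look like isn't spelled out here. One very loose reading of "incrementally building upper abstraction layers" is greedy layer-wise training, roughly as in the toy numpy sketch below; all sizes and data are invented for illustration, and the "co-evolving hardware representation" part is not addressed at all.

# A loose, illustrative reading of "incrementally building upper abstraction
# layers": greedily train one small linear autoencoder per layer, then feed
# its codes to the next layer. Purely a toy sketch.
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden, lr=0.01, epochs=500):
    """Fit encoder/decoder weights by gradient descent on reconstruction error."""
    n, d = X.shape
    W_enc = rng.normal(scale=0.1, size=(d, hidden))
    W_dec = rng.normal(scale=0.1, size=(hidden, d))
    for _ in range(epochs):
        H = X @ W_enc                     # codes ("abstractions") for this layer
        err = H @ W_dec - X               # reconstruction error
        W_dec -= lr * (H.T @ err) / n
        W_enc -= lr * (X.T @ (err @ W_dec.T)) / n
    return W_enc

def build_stack(X, layer_sizes):
    """Greedy layer-wise training: each layer abstracts the one below it."""
    encoders, rep = [], X
    for hidden in layer_sizes:
        W_enc = train_autoencoder(rep, hidden)
        encoders.append(W_enc)
        rep = rep @ W_enc                 # this layer's codes feed the next layer
    return encoders, rep

if __name__ == "__main__":
    X = rng.normal(size=(200, 32))        # stand-in for low-level observations
    encoders, top = build_stack(X, layer_sizes=[16, 8, 4])
    print("top-level representation shape:", top.shape)  # (200, 4)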

> often seem to be assuming a black and white division between A and B, and more often than not they ARE assuming that what "brain emulation" means is B - the dumb brute force method.

Maybe you're reading the wrong people. Or misunderstand what they say.

> I have to say that if B is what is meant, the idea seems insane. You only need to get one little transistor junction out of place in your simulation of the radio, and the entire thing might not work ... and if you know nothing about the functionality, you are up the proverbial creek. Ditto for the brain.

The brain is not a radio. It's designed to work in a noisy environment, so
it's autohomeostating. You don't have to tune the oscillator precision
down to ppb levels to make it work, or to keep it from breaking down horribly.

> How many errors can you afford to make before the brain simulation becomes just as useless as a broken radio? The point is WHO KNOWS?! It

Of course injecting errors into the simulation and looking at trajectory
spread is a common technique, so perhaps someone does know, after all.
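
As a rough illustration of "injecting errors and looking at trajectory spread": perturb a model parameter by a few percent across an ensemble of runs and measure how far the perturbed trajectories drift from an unperturbed reference. The toy model below (a FitzHugh-Nagumo neuron in plain numpy, with arbitrary numbers) only stands in for whatever neural model one actually cares about.

# Minimal sketch of error injection / trajectory-spread analysis on a toy
# neuron model (FitzHugh-Nagumo). Each ensemble member gets a slightly
# perturbed drive current; the spread of the resulting voltage trajectories
# around the unperturbed reference indicates sensitivity to that error.
import numpy as np

def simulate(I=0.5, a=0.7, b=0.8, eps=0.08, t_max=200.0, dt=0.1):
    """Integrate the FitzHugh-Nagumo equations with forward Euler."""
    steps = int(t_max / dt)
    v, w = -1.0, 1.0
    trace = np.empty(steps)
    for i in range(steps):
        dv = v - v**3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        trace[i] = v
    return trace

def trajectory_spread(n_runs=50, rel_error=0.05, seed=0):
    """Perturb the drive current by +/- rel_error and measure RMS deviation."""
    rng = np.random.default_rng(seed)
    reference = simulate()
    deviations = []
    for _ in range(n_runs):
        I_perturbed = 0.5 * (1.0 + rng.uniform(-rel_error, rel_error))
        run = simulate(I=I_perturbed)
        deviations.append(np.sqrt(np.mean((run - reference) ** 2)))
    return np.mean(deviations), np.max(deviations)

if __name__ == "__main__":
    mean_dev, worst_dev = trajectory_spread()
    print("mean RMS deviation: %.3f, worst case: %.3f" % (mean_dev, worst_dev))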

> is funny that this is so little appreciated. For example, the B.E. people could slave away on their data collection, and then at the last minute realize that they also needed detailed information about the spatial distribution of every single dendritic bouton on every neuron

How about submolecular resolution, on parts of specific samples? It
might be useful to sample the ion channel population, and maybe even
their degree of phosphorylation to obtain parameters you won't see
from your garden-variety EM micrograph.

> .... but that detail turns out to be one order of magnitude beyond what any imaginable science can deliver. Who knows if this is an issue,

Do you realize that cryo AFM has had atomic resolution for a while now?

> without a detailed functional understanding of the brain.
>
> But if B is not the intended route, then it must be some variety of A. Which then begs the question: how far toward A are they supposed to be going? Everything in these arguments about AGI vs Brain Emulation depends on exactly how far the B.E. people are going to go toward understanding functionality.

Ain't breaking down self-erected strawmen fun?

> If they go the whole way - basically using B.E. as a set of clues about how to do AGI - all they are doing is AGI *plus* a bunch of brain sleuthing. Sure, the neuron maps might help. But they will have to be

Do you have an idea what a rich source of design constraints in
such a difficult field is worth?

> just as smart about their AGI models as they are about their neuron maps. You cannot understand the functional architecture of the brain

You seem to think there's something nice and modular and shiny sitting in there,
just waiting to be picked up by the right person. Sure, that would be
nice. Some modularity will be there. But, this stuff hasn't been
designed with ease of human analysis as a fitness function component.

> without having a general understanding of the same kinds of things that AGI/Cognitive Science people have to know. Which makes the B.E. approach anything but an alternative to AGI. They will have to know all about the information processing systems in the human mind, and probably also about the general subject of [different kinds of intelligent information processing systems], which is another way of referring to AGI/Cognitive Science.

I don't see how that follows.

> Now, let's finish by asking what the neuroscience people are actually doing in practice, right now. Are they trying to build sophisticated

So you're engaging in a critique of a field you know very little about.

> models of neural functionality, understanding not just the low-level signal transmission but the many, many layers of structure on top of that bottom level?

Is that a rhetorical question?

> I would say: no! First, they have a habit of making diabolically

It seems it was.

> simplistic statements about the relationship between circuits and function ("Brain Scientists Discover the Brain Region That Determines Altruism / Musical Tastes / Potty Training Ability / Whether You Like Blondes!"). Second, when you look at the theoretical structures they are using to build their higher level functional understanding of the brain systems, what do we find?....... a resurgence of interest in "reinforcement learning", which is an idea that was thrown out by the cognitive science community decades ago because it was stupidly naive.

I hope I didn't come across in my critique of strong AI the way you do here,
in thinking neuroscience is a crock of strong fertilizer.

> In general, I am amazed at the naivete and arrogance of neuroscience folks when it comes to cognitive science. Not all, but an alarming number of them. (The same criticism can be applied to narrow AI people, but that is a different story).

I'm not sure we're getting anywhere with all these mutual tar-brush strokes.
I would like to read up on the differences between the novel approaches discussed
here. I'm looking at several online papers right now, but would really
like pointers to succinct comparisons of what's new, and how the new is
better than the old.

> Brain Emulation is just the latest hype-driven bandwagon. It will come

You've just built a brand new strawman, and after demolishing it,
are complaining that the straw smells fresh.

> and go like Expert Systems, The Fifth Generation Project and (Naive) Neural Networks.

Or not http://faculty.washington.edu/chudler/hist.html
http://www.stottlerhenke.com/ai_general/history.htm

