[agi] About the brain-emulation route to AGI

2007-01-22 Thread Richard Loosemore
This debate about the relative merits of the AGI and the Brain Emulation 
methods of building an intelligence seems confused to me.


What exactly is meant by a "brain emulation" route anyway?

Is it:

A) Copy the exact structure and functioning of the brain's hardware, and 
along the way get a precise understanding of the functional architecture 
of the human brain, at all the various levels at which such an 
architecture needs to be understood.


or

B) Copy the exact structure and functioning of the brain's hardware, but 
ignore the architecture.


?

An illustration of the difference:  You know nothing about electronics, 
but you get hold of an extremely complex radio, and want to build one by 
exactly "emulating" your example.  Do you try to do your emulation 
without ever trying to understand the functions of transistors?  The 
functions of all the various hardware components?  The general idea of 
transmission of radio signals?  The modular structure of the radio set, 
with its tuning, frequency multiplexing, amplitude demodulation and 
other circuits?  Do you ignore the functioning of the radio with respect 
to the humans who use it?  The existence and distribution of radio 
signal sources?


You could decide to care about all that stuff - that would be Route A - 
or you could ignore it and just emulate the thing by brute force, cubic 
micrometer by cubic micrometer - that would be Route B.



I presume that the brain emulation community is not being so daft as to 
try B ... but honestly, when I read people talking about this, they 
often seem to be assuming a black and white division between A and B, 
and more often than not they ARE assuming that what "brain emulation" 
means is B - the dumb brute force method.


I have to say that if B is what is meant, the idea seems insane.  You 
only need to get one little transistor junction out of place in your 
simulation of the radio, and the entire thing might not work ... and if 
you know nothing about the functionality, you are up the proverbial 
creek.  Ditto for the brain.


How many errors can you afford to make before the brain simulation 
becomes just as useless as a broken radio?  The point is WHO KNOWS?!  It 
is funny that this is so little appreciated.  For example, the B.E. 
people could slave away on their data collection, and then at the last 
minute realize that they also needed detailed information about the 
spatial distribution of every single dendritic bouton on every neuron 
... but that detail turns out to be one order of magnitude beyond what 
any imaginable science can deliver.  Who knows if this is an issue, 
without a detailed functional understanding of the brain?


But if B is not the intended route, then it must be some variety of A. 
Which then raises the question:  how far toward A are they supposed to be 
going?  Everything in these arguments about AGI vs Brain Emulation 
depends on exactly how far the B.E. people are going to go toward 
understanding functionality.


If they go the whole way - basically using B.E. as a set of clues about 
how to do AGI - all they are doing is AGI *plus* a bunch of brain 
sleuthing.  Sure, the neuron maps might help.  But they will have to be 
just as smart about their AGI models as they are about their neuron 
maps.  You cannot understand the functional architecture of the brain 
without having a general understanding of the same kinds of things that 
AGI/Cognitive Science people have to know.  Which makes the B.E. 
approach anything but an alternative to AGI.  They will have to know all 
about the information processing systems in the human mind, and probably 
also about the general subject of [different kinds of intelligent 
information processing systems], which is another way of referring to 
AGI/Cognitive Science.


Now, let's finish by asking what the neuroscience people are actually 
doing in practice, right now.  Are they trying to build sophisticated 
models of neural functionality, understanding not just the low-level 
signal transmission but the many, many layers of structure on top of 
that bottom level?


I would say:  no!  First, they have a habit of making diabolically 
simplistic statements about the relationship between circuits and 
function ("Brain Scientists Discover the Brain Region That Determines 
Altruism / Musical Tastes / Potty Training Ability / Whether You Like 
Blondes!").  Second, when you look at the theoretical structures they 
are using to build their higher level functional understanding of the 
brain systems, what do we find?... a resurgence of interest in 
"reinforcement learning", which is an idea that was thrown out by the 
cognitive science community decades ago because it was stupidly naive.


In general, I am amazed at the naivete and arrogance of neuroscience 
folks when it comes to cognitive science.  Not all, but an alarming 
number of them.  (The same criticism can be applied to narrow AI people, 
but that is a different story).





Re: [agi] (video)The Future of Cognitive Computing

2007-01-22 Thread Benjamin Goertzel

BTW: I'm a high school senior working on my own AGI design, and this
is my first post here on the list. I've been watching the list for about two
months and finally decided to contribute :) Nice to meet you all!


I appreciate your ambition ;-) ...

If you want to discuss the details of your AGI design, feel free to do
so either on-list or send me a private email...

I made my first AI design at age 16, in April 1983 ... it was
philosophically sort of along the lines of "AIXI meets neural nets,"
although AIXI did not exist yet and I had not heard of neural nets.
It would have worked on a near-infinitely-powerful processor ;-)


Ben G



Re: [agi] About the brain-emulation route to AGI

2007-01-22 Thread Eugen Leitl
On Mon, Jan 22, 2007 at 01:11:57PM -0500, Richard Loosemore wrote:

> This debate about the relative merits of the AGI and the Brain Emulation 
> methods of building an intelligence seems confused to me.

What is the Brain Emulation method? Are you talking about computational
neuroscience, or something?
 
> What exactly is meant by a "brain emulation" route anyway?

I'm not entirely sure (I haven't read it all yet), but the very 
beginning of this post strikes me as a desperate search for a strawman to 
demolish. 
 
> Is it:
> 
> A) Copy the exact structure and functioning of the brain's hardware, and 
> along the way get a precise understanding of the functional architecture 
> of the human brain, at all the various levels at which such an 
> architecture needs to be understood.
> 
> or
> 
> B) Copy the exact structure and functioning of the brain's hardware, but 
> ignore the architecture.
> 
> ?

Why do you think these are mutually exclusive alternatives? What makes
you think there is such a thing as architecture in the human sense sitting
in there for you to copy a blueprint from?
 
> An illustration of the difference:  You know nothing about electronics, 
> but you get hold of an extremely complex radio, and want to build one by 
> exactly "emulating" your example.  Do you try to do your emulation 

Um, wrong comparison. CNS doesn't require any new physics. Some approaches
start with atomically accurate models of compartments, which allows you
to reach down to arbitrarily low levels of theory in order to fetch missing
parameters. That's bottom up. Simultaneously, you have top-down empirical
data from neuron and tissue activity. You can use both to eliminate
the large but shrinking set of unknowns in the middle.

> without ever trying to understand the functions of transistors?  The 

Do you think that an atomically accurate copy of a radio wouldn't work?

> functions of all the various hardware components?  The general idea of 
> transmission of radio signals?  The modular structure of the radio set, 

But the brain is not a radio set. Specifically, it's not a human-designed
artifact, and has different signatures.

> with its tuning, frequency multiplexing, amplitude demodulation and 
> other circuits?  Do you ignore the functioning of the radio with respect 
> to the humans who use it?  The existence and distribution of radio 
> signal sources?

I don't understand your last two sentences. (In fact, I was going huh?
at a rate of about twice every sentence so far, but deconstructing your post
at this level would do no good so I won't).
 
> You could decide to care about all that stuff - that would be Route A - 
> or you could ignore it and just emulate the thing by brute force, cubic 
> micrometer by cubic micrometer - that would be Route B.

Of course some people do A, and some do B, and several others go for C and D.
 
> I presume that the brain emulation community is not being so daft as to 
> try B ... but honestly, when I read people talking about this, they 

Actually, it is not at all daft to model a cubic micron or so of biology
from first principles, if you can extract nonobservable parameters (such
as the switching behaviour of a particular ion channel type, for instance)
from an MD-level simulation. Have you ever considered how to write a
learning simulation that ascends by incrementally building upper abstraction
layers, co-evolving the hardware representation as it goes along?
It's certainly demanding, but not nearly as demanding as a full-blown
AGI by explicit coding.
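
As a toy illustration of what "fetching a missing parameter from a lower
level of theory" might look like in code, here is a minimal sketch: a
two-state ion channel whose opening and closing rates are assumed to have
been fitted from an MD-level simulation. The rate constants, names, and the
simple Euler integration are illustrative assumptions, not any group's
actual pipeline.

    # Toy two-state channel: open probability p obeys dp/dt = ALPHA*(1-p) - BETA*p.
    # ALPHA and BETA stand in for parameters assumed to come from a lower-level
    # (e.g. molecular-dynamics) fit; the numbers here are made up.
    import numpy as np

    ALPHA = 0.8   # opening rate (1/ms), hypothetical MD-derived value
    BETA = 0.4    # closing rate (1/ms), hypothetical MD-derived value

    def simulate_channel(t_ms=50.0, dt=0.01, p_open0=0.0):
        """Integrate the gating equation with simple Euler steps."""
        steps = int(t_ms / dt)
        p = p_open0
        trace = np.empty(steps)
        for i in range(steps):
            p += dt * (ALPHA * (1.0 - p) - BETA * p)
            trace[i] = p
        return trace

    if __name__ == "__main__":
        trace = simulate_channel()
        # Steady state should approach ALPHA / (ALPHA + BETA) = 2/3.
        print(f"final open probability: {trace[-1]:.3f}")

Once such a parameter is pinned down at the low level, the higher-level
simulation can treat the channel as a black box with known kinetics, which
is roughly the "co-evolving abstraction layers" move described above.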

> often seem to be assuming a black and white division between A and B, 
> and more often than not they ARE assuming that what "brain emulation" 
> means is B - the dumb brute force method.

Maybe you're reading the wrong people. Or misunderstanding what they say.
 
> I have to say that if B is what is meant, the idea seems insane.  You 
> only need to get one little transistor junction out of place in your 
> simulation of the radio, and the entire thing might not work ... and if 
> you know nothing about the functionality, you are up the proverbial 
> creek.  Ditto for the brain.

The brain is not a radio. It's designed to work in a noisy environment, so
it's autohomeostating. You don't have to tune the oscillator precision
down to ppb levels in order for it to work, nor does it break down horribly
if you don't.
 
> How many errors can you afford to make before the brain simulation 
> becomes just as useless as a broken radio?  The point is WHO KNOWS?!  It 

Of course, injecting errors into the simulation and looking at the trajectory
spread is a common technique, so perhaps someone does know, after all.
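
As a hedged sketch of that technique on a toy system (rather than a neural
model): perturb a parameter, re-run, and measure how far each trajectory
drifts from the reference run. The damped driven oscillator and all numbers
below are illustrative stand-ins.

    # Inject small parameter errors into a toy dynamical system and measure
    # the resulting trajectory spread.  Purely illustrative; a real study
    # would do this on the neural model itself.
    import numpy as np

    def run(k=1.0, damping=0.1, t_max=50.0, dt=0.01):
        """Euler-integrate x'' = -k*x - damping*x' + sin(t)."""
        steps = int(t_max / dt)
        x, v = 1.0, 0.0
        xs = np.empty(steps)
        for i in range(steps):
            a = -k * x - damping * v + np.sin(i * dt)
            v += a * dt
            x += v * dt
            xs[i] = x
        return xs

    rng = np.random.default_rng(0)
    reference = run()
    spreads = [np.max(np.abs(run(k=1.0 + rng.normal(scale=0.01)) - reference))
               for _ in range(100)]
    print(f"worst-case divergence over 100 perturbed runs: {max(spreads):.3f}")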

> is funny that this is so little appreciated.  For example, the B.E. 
> people could slave away on their data collection, and then at the last 
> minute realize that they also needed detailed information about the 
> spatial distribution of every single dendritic bouton on every neuron 

How about submolecular resolution, on parts of specific samples?

Re: [agi] About the brain-emulation route to AGI

2007-01-22 Thread Benjamin Goertzel

On 1/22/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:

This debate about the relative merits of the AGI and the Brain Emulation
methods of building an intelligence seems confused to me.

What exactly is meant by a "brain emulation" route anyway?

Is it:

A) Copy the exact structure and functioning of the brain's hardware, and
along the way get a precise understanding of the functional architecture
of the human brain, at all the various levels at which such an
architecture needs to be understood.

or

B) Copy the exact structure and functioning of the brain's hardware, but
ignore the architecture.


Either A or B is potentially viable, of course.

I would guess that A will come first, in that (a few decades from now)
we will likely use moderately-accurate brain-scanning to arrive at a
good enough understanding of the brain that we are able to emulate it
in software ... and I think this will likely happen before we have
extremely-accurate brain scanners as would be required for option B.

But of course, predicting the (relative or absolute) timing of various
future scientific developments is much harder than predicting what is
possible!

-- Ben G



Re: [agi] (video)The Future of Cognitive Computing

2007-01-22 Thread Joel Pitt

On 1/22/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

One thing I find interesting is that IBM is focusing their AGI-ish
efforts so tightly on human-brain-emulation-related approaches.


I personally question the ethics of achieving human-brain-emulation AGI.

If you believe a person's essence is the patterning of neurons (which
I know not everyone feels is the whole story, but I generally do),
then by emulating the brain or experimenting with it you are
essentially using a human consciousness as your testing ground.

Maybe if you scanned and emulated a baby's brain and let its
consciousness develop, it'd be a different story. But I don't think
people will get legal consent to carry out such an attempt on a
newborn.

Engineered AGI allows us to start out with essentially a newborn mind
and have it develop consciousness through world experience and
self-reflection. If we get it wrong, then consciousness doesn't form
(or we all die).

--
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton



Re: [agi] About the brain-emulation route to AGI

2007-01-22 Thread A. T. Murray
http://mind.sourceforge.net/Mind.html is a True AI 
that emulates the human brain as hypothesized in the 
http://mind.sourceforge.net/theory5.html theory of mind.

http://aimind-i.com is an off-shoot of the Mentifex Mind.Forth AI
that is still on track to trigger a Technological Singularity by 2012
(http://www.blogcharm.com/Singularity/25603/Timetable.html).



Re: [agi] (video)The Future of Cognitive Computing

2007-01-22 Thread Matt Mahoney
The issues of consciousness have been discussed on the singularity list.  These 
are hard questions.

- If your brain was scanned and backed up to disk, would you still be conscious 
after you die?
- Does a thermostat want to keep the room at a constant temperature, or does it 
only behave as if that is what it wants?  (Ask this question about human 
behavior).
- Do you control your own thoughts, or is your brain a computer whose outputs 
are predictable given its inputs and internal state?

The questions are hard because humans (and other animals) are programmed 
through evolution to fear death and to believe in free will (ability to control 
one's thoughts and the environment).  Those animals that behaved differently did 
not propagate their DNA.  I assume you would want to program an AGI to behave 
as if it believed in its own consciousness.

I don't expect to convince anyone, even myself, that consciousness does not 
exist.  Such a belief could be fatal, if it were even possible.

That said, I think it does not matter whether an AGI is created by copying 
someone's brain or by modeling the child's developmental process and training.  
Either way the result is an approximation of a human.  But the moral issue does 
not go away.  The question of whether such a thing should have human rights 
puts us in deep conflict with our own biological programming.
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Joel Pitt <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Monday, January 22, 2007 4:21:42 PM
Subject: Re: [agi] (video)The Future of Cognitive Computing

On 1/22/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> One thing I find interesting is that IBM is focusing their AGI-ish
> efforts so tightly on human-brain-emulation-related approaches.

I personally question the ethics of achieving human-brain-emulation AGI.

If you believe a person's essence is the patterning of neurons (which
I know not everyone feels is the whole story, but I generally do),
then by emulating the brain or experimenting with it you are
essentially using a human consciousness as your testing ground.

Maybe if you scanned and emulated a baby's brain and let its
consciousness develop, it'd be a different story. But I don't think
people will get legal consent to carry out such an attempt on a
newborn.

Engineered AGI allows us to start out with essentially a newborn mind
and have it develop consciousness through world experience and
self-reflection. If we get it wrong, then consciousness doesn't form
(or we all die).

-- 
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton







Re: [agi] About the brain-emulation route to AGI

2007-01-22 Thread Matt Mahoney
I think that the path to AGI is a combination of cognitive and computer science 
approaches.  Unfortunately there is not a lot of cooperation between these two 
fields.  It is rare to find experts in both psychology and mathematics.  But I 
think the approach will be fruitful.

Example 1.  Hebb proposed a neural model of learning in 1949.  The model became 
widely accepted due to the successes of artificial neural network simulations 
decades before it was confirmed in biological tissue that synapses can change 
their state.
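
For readers who haven't met it, the Hebbian rule itself fits in a few lines.
The sketch below is the textbook outer-product version (a tiny Hopfield-style
demo), an illustration of the idea rather than a claim about how real
synapses implement it.

    # Plain Hebbian learning: strengthen weights between co-active units.
    import numpy as np

    def hebbian_train(patterns, lr=0.1):
        """Build a weight matrix from outer-product ("fire together, wire
        together") updates over the given +1/-1 patterns."""
        n = patterns.shape[1]
        w = np.zeros((n, n))
        for x in patterns:
            w += lr * np.outer(x, x)
        np.fill_diagonal(w, 0.0)   # no self-connections
        return w

    patterns = np.array([[1, -1, 1, -1],
                         [1, 1, -1, -1]])
    w = hebbian_train(patterns)
    print(np.sign(w @ patterns[0]))   # recovers the first stored pattern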

Example 2.  It is well known that children learn the meanings of words 
(semantics) before they learn to form sentences (syntax).  But this lesson was 
lost on many AI researchers.  Thus, we have the failure to develop natural 
language parsers in the absence of semantics, and the success of information 
retrieval systems that ignore word order.
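
As a small illustration of "retrieval that ignores word order", here is a
bag-of-words cosine similarity sketch; the documents and tokenization are
made up for the example.

    # Bag-of-words retrieval: word order is discarded entirely.
    import math
    from collections import Counter

    def cosine(a, b):
        """Cosine similarity between two bag-of-words Counters."""
        common = set(a) & set(b)
        dot = sum(a[w] * b[w] for w in common)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    docs = ["the dog bit the man", "the man bit the dog", "cats purr quietly"]
    bags = [Counter(d.split()) for d in docs]
    query = Counter("man bit dog".split())
    # The first two documents score identically: order carries no weight here.
    for d, bag in zip(docs, bags):
        print(f"{cosine(query, bag):.2f}  {d}")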

Example 3.  It is less well known that babies learn to segment continuous 
speech at 7-10 months, before they learn any words.  This leads to the 
discovery that the rules for segmenting text without spaces can be learned 
without a dictionary.  http://cs.fit.edu/~mmahoney/dissertation/lex1.html

I think AGI will be solved when computer scientists, psychologists, and 
neurologists work together to solve the problem with a combination of computer, 
human, and animal experiments.
 
-- Matt Mahoney, [EMAIL PROTECTED]

