Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bob Mottram
Although I thought this was a good talk and I liked the fellow
presenting it, to me it seems fairly clear that little or no progress
has been made in this area over the last decade or so.  In the early
1990s I wrote somewhat similar simulations in which agents had their
own neural networks whose architecture was specified by a genetic
algorithm, and just like the speaker I ran up against similar
problems.
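
The kind of setup Bob describes -- a genome specifying a network architecture, tuned by a genetic algorithm -- might be sketched like this (a toy illustration; all names, operators, and parameters here are mine, not from his simulations):

```python
import random

def decode(genome):
    """Decode a genome (a list of layer sizes) into per-layer weight
    matrices of shape (n_out, n_in), randomly initialised."""
    sizes = [g for g in genome if g > 0]
    return [[[random.gauss(0.0, 1.0) for _ in range(n_in)]
             for _ in range(n_out)]
            for n_in, n_out in zip(sizes, sizes[1:])]

def mutate(genome, rate=0.2):
    """Perturb layer sizes; occasionally drop or add a hidden layer."""
    child = [max(1, g + random.choice([-1, 0, 1]))
             if random.random() < rate else g for g in genome]
    if random.random() < rate and len(child) > 2:
        child.pop(random.randrange(1, len(child) - 1))  # drop a hidden layer
    if random.random() < rate:
        child.insert(random.randrange(1, len(child)), random.randint(1, 8))
    return child

def evolve(fitness, population, generations=100):
    """Plain truncation-selection GA over network architectures."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[:len(ranked) // 2]
        population = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(population, key=fitness)
```

The plateau Bob mentions shows up even in a toy like this: once `fitness` stops rewarding extra structure, `mutate` feels no pressure to grow the genome further.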

As the speaker says, it should in principle be possible to go all the
way from simple types of creatures up to more complex ones, like humans.
In practice, though, what tends to happen is that the complexity of the
neural nets reaches a plateau beyond which little further progress
occurs.  Even after allowing the system to run for tens of thousands
of generations, not much of interest happens.

I think the main problem here is the low complexity of the environment
and of the agents themselves.  In a real biological system there are all
kinds of niches which can be exploited in a variety of ways, but in
Polyworld (and other similar simulations) it's all very homogeneous.
Real biological creatures are coalitions of millions of cells, each of
which is a chemical factory containing an abundance of nano machinery,
each of which is a possible site for evolutionary change.  The sensory
systems of real creatures are also far richer than simply being able
to detect three colours (even molluscs can do better than this), and
this is obviously a limiting factor upon the development of greater
intelligence.



On 15/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
 This may be of interest to the group.

 http://video.google.com/videoplay?docid=-112735133685472483


 This presentation is about a potential shortcut to artificial
 intelligence by trading mind-design for world-design using artificial
 evolution. Evolutionary algorithms are a pump for turning CPU cycles
 into brain designs. With exponentially increasing CPU cycles while our
 understanding of intelligence is almost a flat-line, the evolutionary
 route to AI is a centerpiece of most Kurzweilian singularity
 scenarios. This talk introduces the Polyworld artificial life
 simulator as well as results from our ongoing attempt to evolve
 artificial intelligence and further the Singularity.

 Polyworld is the brainchild of Apple Computer Distinguished Scientist
 Larry Yaeger, who remains the primary developer of Polyworld:

 http://www.beanblossom.in.us/larryy/P...

 Speaker: Virgil Griffith
 Virgil Griffith is a first year graduate student in Computation and
 Neural Systems at the California Institute of Technology. On weekdays
 he studies evolution, computational neuroscience, and artificial life.
 He did computer security work until his first year of university, when
 his work got him sued for sedition and espionage. He then decided that
 security was probably not the safest field to be in, and he turned his
 life to science.
 Added: November 13, 2007

 - Jef



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=65298881-4c0739


Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Vladimir Nesov
Yes, the resulting behaviors are not impressive. I did a similar thing
with essentially a one-hidden-layer perceptron on a 2D square grid in
high school, and got something that looked not much simpler (weak
creatures cycling around gathering food, fat carnivores in the center
hunting them, a few super-fat parasites among the carnivores vampiring
off them).
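
A toy version of the kind of controller Vladimir describes -- one hidden layer mapping local grid senses to a movement choice -- might look like this (layer sizes and names are illustrative, not his actual setup):

```python
import math
import random

class GridAgent:
    """One-hidden-layer perceptron mapping local grid senses to a move."""

    def __init__(self, n_in=9, n_hid=6, n_out=4):
        rnd = lambda: random.uniform(-1.0, 1.0)
        self.w1 = [[rnd() for _ in range(n_in)] for _ in range(n_hid)]
        self.w2 = [[rnd() for _ in range(n_hid)] for _ in range(n_out)]

    def act(self, senses):
        """senses: e.g. food levels in the 3x3 neighbourhood, flattened.
        Returns 0..3, interpreted as a move N/E/S/W."""
        hid = [math.tanh(sum(w * s for w, s in zip(row, senses)))
               for row in self.w1]
        out = [sum(w * h for w, h in zip(row, hid)) for row in self.w2]
        return out.index(max(out))
```

With fixed senses like these, the policy is purely reactive -- which is exactly why the changing, increasingly complex cues proposed below would be needed before memory could pay off.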

I think such an environment needs a system of cues that are useful (for
survival) to learn during a creature's lifetime, and which change and
grow more complex across generations. That way there would be an
incentive to develop memory and nontrivial decision making from
observations. As it is, it's not clear how well even a trained human
would do given such limited perceptions.





-- 
Vladimir Nesov <[EMAIL PROTECTED]>



Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bryan Bishop
On Thursday 15 November 2007 02:30, Bob Mottram wrote:
 I think the main problem here is the low complexity of the
 environment

Complex programs can only be written in an environment capable of 
bearing that complexity:

http://sl4.org/archive/0710/16880.html

- Bryan



Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Russell Wallace
On Nov 15, 2007 2:16 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
 I remember playing with PolyWorld 10 years ago or so

Yeah. I've only had time to watch the first 20 minutes of that talk
but my reaction so far is disappointment: it's just exactly the same
as it was a decade ago? Modern hardware should be able to do better.
(Correct me if advances are presented in the later part of the talk.)

 Overall, I came away from my flirtation with Alife with the impression that
 it was doomed due to the lack of a viable artificial chemistry (chemistry
 arguably being the source of the richness of real biology).

The closest I've ever seen to artificial chemistry is an experiment I
did some years ago in evolving Go-playing programs; I didn't get
anything that used strategy as humans or even hand-written programs
understand it, but it had the paper-scissors-stone _feel_ of
biochemistry, which makes sense in hindsight: Go is rich enough to
support something on that level.

Though I think physics - of the ordinary everyday variety - is the
biggest missing element if one is trying to get animal-type
intelligence out of a Polyworld-type environment. Use modern graphics
hardware, give the simulated critters a 512x512 or somesuch camera
view, make simulated bodies with a decently large number of degrees of
freedom and contact sensors, a brain specified by general computation
and big enough to do something with all those inputs and outputs, and
I think you could get something a lot further on the road to an
artificial lizard than has been produced thus far.
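
For concreteness, the sensor/actuator budget Russell proposes could be pinned down in a spec like this (a purely hypothetical sketch; the numbers merely echo his suggestion, and every name is mine):

```python
from dataclasses import dataclass

@dataclass
class CritterSpec:
    """Hypothetical sensor/actuator budget for a richer
    Polyworld-style critter, per Russell's suggestion."""
    camera_res: tuple = (512, 512)  # pixels in the camera view
    colour_channels: int = 3
    contact_sensors: int = 32       # illustrative count
    joints: int = 24                # degrees of freedom, illustrative

    def n_inputs(self) -> int:
        """Total input dimension the brain must handle."""
        w, h = self.camera_res
        return w * h * self.colour_channels + self.contact_sensors

    def n_outputs(self) -> int:
        """One motor command per joint."""
        return self.joints
```

Even this modest spec yields roughly 786,000 inputs per timestep, which makes the point about needing modern graphics hardware and a brain "big enough to do something with all those inputs".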

 And then I decided Alife was not gonna be a shortcut and turned wholly to AI
 instead ;-)

Same here :)



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-15 Thread Richard Loosemore

Mike Tintner wrote:


 Sounds a little confusing. Sounds like you plan to evolve a system
 through testing thousands of candidate mechanisms. So one way or
 another you too are taking a view - even if it's an evolutionary, "I'm
 not taking a view" view - on, and making a lot of assumptions about:

 - how systems evolve
 - the known architecture of human cognition


No, I think that because of the paucity of information I gave, you have
misunderstood slightly.


Everything I mentioned was in the context of an extremely detailed 
framework that tries to include all of the knowledge we have so far 
gleaned by studying human cognition using the methods of cognitive science.


So I am not making assumptions about the architecture of human cognition;
I am using every scrap of experimental data I can.  You can say that
this is still assuming that the framework is correct, but that is 
nothing compared to the usual assumptions made in AI, where the 
programmer just picks up a grab bag of assorted ideas that are floating 
around in the literature (none of them part of a coherent theory of 
cognition) and starts hacking.


And just because I talk of thousands of candidate mechanisms, that does 
not mean that there is evolution involved:  it just means that even with 
a complete framework for human cognition to start from there are still 
so many questions about the low-level to high-level linkage that a vast 
number of mechanisms have to be explored.



 about which science has extremely patchy and confused knowledge. I don't
 see how any system-builder can avoid taking a view of some kind on such
 matters, yet you seem to be criticising Ben for so doing.


Ben does not start from a complete framework for human cognition, nor 
does he feel compelled to stick close to the human model, and my 
criticisms (at least in this instance) are not really about whether or 
not he has such a framework, but about a problem that I can see on his 
horizon.



 I was hoping that you also had some view on how a system's symbols
 should be grounded, especially since you mention Harnad, who does make
 vague gestures towards the brain's levels of grounding. But you don't
 indicate any such view.


On the contrary, I explained exactly how they would be grounded:  if the 
system is allowed to build its own symbols *without* me also inserting 
ungrounded (i.e. interpreted, programmer-constructed) symbols and 
messing the system up by forcing it to use both sorts of symbols, then 
ipso facto it is grounded.


It is easy to build a grounded system.  The trick is to make it both 
grounded and intelligent at the same time.  I have one strategy for 
ensuring that it turns out intelligent, and Ben has another; my 
problem with Ben's strategy is that I believe his attempt to ensure that 
the system is intelligent ends up compromising the groundedness of the 
system.



 Sounds like you too, pace MW, are hoping for a number of miracles - IOW
 creative ideas - to emerge, and make your system work.


I don't understand where I implied this.  You have to remember that I am 
doing this within a particular strategy (outlined in my CSP paper). 
When you see me exploring 'thousands' of candidate mechanisms to see how 
one parameter plays a role, this is not waiting for a miracle, it is a 
vital part of the strategy.  A strategy that, I claim, is the only 
viable one.




 Anyway, you have to give Ben credit for putting a lot of his stuff and
 principles out there, on the line. I think anyone who wants to mount a
 full-scale assault on him (and why not?) should be prepared to reciprocate.


Nice try, but there are limits to what I can do to expose the details. 
I have not yet worked out how much I should release and how much to 
withhold (I confess, I nearly decided to go completely public a month or 
so ago, but then changed my mind after seeing the dismally poor response 
that even one of the ideas provoked).  Maybe in the near future I will 
write a summary account.


In the mean time, yes, it is a little unfair of me to criticise other 
projects.  But not that unfair.  When a scientist sees a big problem 
with a theory, do you suppose they wait until they have a completely 
worked out alternative before discussing the fact that there is a 
problem with the theory that other people may be praising?  That is not 
the way of science.



Richard Loosemore



Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
About PolyWorld and Alife in general...

I remember playing with PolyWorld 10 years ago or so  And, I had a grad
student at Uni. of Western Australia build a similar system, back in my
Perth days... (it was called SEE, for Simple Evolving Ecology.  We never
published anything on it, as I left Australia in the middle of the
research...)

But after fiddling with stuff like this for a while, it becomes clear that,
just as each GOFAI or machine learning program can be pushed so far and no
further, each Alife program can likewise be pushed so far and no further...

One of the most fascinating busts in that area was Tom Ray's attempt to
induce robust virtual evolution of multicellular life.  I forget the name of
his project, but he was doing it at ATR in Japan.  It was a follow-up to his
highly successful Tierra program, which was the first to demonstrate
biology-like reproduction in artificial organisms.  Anyway, Tom's attempt,
and many others to get beyond the complexity threshold observed in Alife
programs, did not pan out...

Overall, I came away from my flirtation with Alife with the impression that
it was doomed due to the lack of a viable artificial chemistry (chemistry
arguably being the source of the richness of real biology).

So, there was some cool work on artificial chemistry of a sort, done by
Walter Fontana and many others, which I don't remember very well...

The deep question I came away with was: what exactly are the **abstract
properties** of the periodic table of elements that allow it to give rise
to chemical compounds, and ensuing biological structures, with so much
complexity?

And then I decided Alife was not gonna be a shortcut, and turned wholly to AI
instead ;-)

Thing is, I'm sure Alife can work, but the computational requirements have
gotta be way way bigger than for AI.  And conceptually, it doesn't seem like
Alife is really a shortcut -- because puzzling out the requirements that
artificial chemistry needs to have, in order to support robust artificial
biology, seems just as hard or harder than building a simulated brain or a
non-brain-based AGI.  After all it's not like we know how real chemistry
gives rise to real biology yet --- the dynamics underlying protein-folding
remain ill-understood, etc. etc.

So I find this a deep and fascinating area of research (the borderline
between artificial chemistry and artificial biology, more so than Alife
proper), but I doubt it's a shortcut to AGI ... though it would be cool to be
proven wrong ;-)

-- Ben G




Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
I think that linguistic interaction with human beings is going to be what
lifts Second Life proto-AGI's beyond the glass ceiling...

Our first SL agents won't have language generation or language learning
capability, but I think that introducing it is really essential, esp. given
the limitations of SL as a purely physical environment...

ben




Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bob Mottram
Which raises the question of whether the same complexity glass ceiling
will be encountered when running AGI-controlled agents within Second
Life.  SL is probably more complex than Polyworld, although that could
be debatable, depending upon your definition of complexity.  One factor
which would raise the bar would be the additional baggage being
introduced into the virtual world from the first life of human
participants.






Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
On Nov 15, 2007 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote:

 On Thursday 15 November 2007 08:16, Benjamin Goertzel wrote:
  non-brain-based AGI. After all it's not like we know how real
  chemistry gives rise to real biology yet --- the dynamics underlying
  protein-folding remain ill-understood, etc. etc.

 Can anybody elaborate on the actual problems remaining (beyond etc.
 etc.-- which is appropriate from Ben who is most notably not a
 biochemist/chemist/bioinformatician)?


Hey -- That is a funny comment -- I've published a dozen bioinformatics
papers in the last 5 years, and am CEO / Chief Scientist of a bioinformatics
company (Biomind LLC, www.biomind.com) ...

I am no chemist, but I'm pretty much an expert on analyzing microarray and
SNP data, and various other corners of bioinformatics, having introduced some
funky new techniques into the field.  In fact my most popular research paper
is not on AGI but rather on Chronic Fatigue Syndrome -- it was the first-ever
paper giving evidence for a (weak) genetic basis for CFS.

-- Ben G


Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
No worries!! just wanted to clarify...

To address your question more usefully: there is so much evidence
that chemistry is subtly important for biology in ways that are poorly
understood.

In neuroscience, for instance, the chemistry of synaptic transmission between
neurons is still only weakly understood, so we still don't know exactly how
poor a model the formal neuron used in computer science is.  As a single
example, you have both ionotropic and metabotropic glutamate receptors along
neurons, whose synaptic transmission properties depend on ambient chemistry
in the intracellular medium in ways no one really understands... etc. etc. ;-)

ben




Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bryan Bishop
On Thursday 15 November 2007 20:02, Benjamin Goertzel wrote:
 On Nov 15, 2007 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
  Can anybody elaborate on the actual problems remaining (beyond
  etc. etc.-- which is appropriate from Ben who is most notably not
  a biochemist/chemist/bioinformatician)?

 Hey -- That is a funny comment

Oh my. This is a big, big mistake on my part. I am sorry. Please accept 
my apologies ... and the knowledge that my parenthetical comment no 
longer applies.

- Bryan



Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bryan Bishop
On Thursday 15 November 2007 21:19, Benjamin Goertzel wrote:
  so we still don't know exactly how poor
 a model the formal neuron used in computer science is

Speaking of which: isn't this the age-old simple math function involving 
an integral or two and a summation over the inputs? I remember seeing 
this many years ago (before I knew its importance) on ai-junkie or 
maybe from Jeff Hawkins' On Intelligence. Way back when.

And clearly I haven't been keeping track of the literature on neuronal 
modeling, but I would hope that there are other models out there by 
now. I need to read more journals.
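
For reference, the rate-coded formal neuron Bryan is recalling is just a weighted sum of inputs pushed through a squashing nonlinearity; the integrals come in only when one models membrane dynamics over time, as integrate-and-fire models do. A minimal sketch (the logistic squashing function is one common choice):

```python
import math

def formal_neuron(inputs, weights, bias=0.0):
    """Classic formal neuron: a weighted sum of inputs plus a bias,
    squashed by the logistic function into an output in (0, 1)."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))
```

This is precisely the model whose biological fidelity Ben questions above: everything the real synapse's chemistry does is collapsed into a single static weight.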

- Bryan



Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Vladimir Nesov
Here's an impressive movie:
http://video.google.com/videoplay?docid=-2874207418572601262
Henry Markram, EPFL/BlueBrain: The Emergence of Intelligence in the
Neocortical Microcircuit




-- 
Vladimir Nesov <[EMAIL PROTECTED]>
