Re: [agi] Hunting for a Brainy Computer

2008-11-25 Thread Richard Loosemore

Ben Goertzel wrote:

http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/


There are always more papers that can be discussed.


OK, sure, but this is a more recent paper *by the same authors,
discussing the same data*
and more recent similar data.


But that does not change the fact that we provided arguments to back up our
claims when we analyzed the original Quiroga et al. paper, and all the
criticism directed against our paper on this list in the last week or so
has completely ignored the actual content of that argument.


My question is how your arguments apply to their more recent paper
discussing the same data

It seems to me that their original paper was somewhat sloppy in the
theoretical discussion accompanying the impressive data, and you
largely correctly picked on their sloppy theoretical discussion ...
and now, their more recent works have cleaned up much of the
sloppiness of their earlier theoretical discussions.

Do you disagree with this?


Nope, don't disagree:  I just haven't had time to look at their paper yet.



It's not very interesting to me to dissect the sloppy theoretical
discussion at the end of an experimental paper from a few years ago.
What is more interesting to me is whether the core ideas underlying
the researchers' work are somehow flawed.  If their earlier discussion
was sloppy and was pushed back on by their peers, leading to a clearer
theoretical discussion in their current papers, then that means that
the scientific community is basically doing what it's supposed to
do


That is fine.

But when evaluating our particular critique, it is only fair to keep it 
in its proper context.  We set out to pick a collection of the most 
widely publicized neuroscience papers, to see how they looked from the 
point of view of a sophisticated understanding of cognitive science.


Our conclusion was that, TAKEN AS A WHOLE, this set of representative 
papers interpreted their results in ways that were not very coherent. 
Rather than advancing the cause of cognitive science, they were turning 
the clock back to an era when we knew very little about what might be 
going on.


If Quiroga et al do a better job now, then that is all to the good.  But 
Harley and I had a broader perspective, and we feel that the overall 
standards are pretty low.






Richard Loosemore



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-25 Thread Ben Goertzel
>>
>> http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/
>
>
> There are always more papers that can be discussed.

OK, sure, but this is a more recent paper *by the same authors,
discussing the same data*
and more recent similar data.

>
> But that does not change the fact that we provided arguments to back up our
> claims, when we analyzed the original Quiroga et al paper, and all the
> criticism directed against our paper on this list, in the last week or so,
> has completely ignored the actual content of that argument.

My question is how your arguments apply to their more recent paper
discussing the same data

It seems to me that their original paper was somewhat sloppy in the
theoretical discussion accompanying the impressive data, and you
largely correctly picked on their sloppy theoretical discussion ...
and now, their more recent works have cleaned up much of the
sloppiness of their earlier theoretical discussions.

Do you disagree with this?

It's not very interesting to me to dissect the sloppy theoretical
discussion at the end of an experimental paper from a few years ago.
What is more interesting to me is whether the core ideas underlying
the researchers' work are somehow flawed.  If their earlier discussion
was sloppy and was pushed back on by their peers, leading to a clearer
theoretical discussion in their current papers, then that means that
the scientific community is basically doing what it's supposed to
do

-- Ben G




Re: [agi] Hunting for a Brainy Computer

2008-11-25 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

It might be more useful to discuss more recent papers by the same
authors regarding the same topic, such as the more accurately-titled

***
Sparse but not "Grandmother-cell" coding in the medial temporal lobe.
Quian Quiroga R, Kreiman G, Koch C and Fried I.
Trends in Cognitive Sciences. 12: 87-91; 2008
***

at

http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/



There are always more papers that can be discussed.

But that does not change the fact that we provided arguments to back up 
our claims when we analyzed the original Quiroga et al. paper, and all 
the criticism directed against our paper on this list in the last week 
or so has completely ignored the actual content of that argument.







Richard Loosemore











On Mon, Nov 24, 2008 at 1:32 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben Goertzel wrote:

Hi,

BTW, I just read this paper



For example, in Loosemore & Harley (in press) you can find an analysis of
a
paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the
latter
try to claim they have evidence in favor of grandmother neurons (or
sparse
collections of grandmother neurons) and against the idea of distributed
representations.

which I found at

 http://www.vis.caltech.edu/~rodri/

and I strongly disagree that


We showed their conclusion to be incoherent.  It was deeply implausible,
given the empirical data they reported.


The claim that Harley and I made - which you quote above - was the
*conclusion* sentence that summarized a detailed explanation of our
reasoning.

That reasoning was in our original paper, and I also went to the trouble of
providing a longer version of it in one of my last posts on this thread.  I
showed, in that argument, that their claims about sparse vs distributed
representations were incoherent, because they had not thought through the
implications contained in their own words - part of which you quote below.

Merely quoting their words again, without resolving the inconsistencies that
we pointed out, proves nothing.

We analyzed that paper because it was one of several that engendered a huge
amount of publicity.  All of that publicity - which, as far as we can see,
the authors did not have any problem with - had to do with the claims about
grandmother cells, sparseness and distributed representations.  Nobody - not
I, not Harley, and nobody else as far as I know - disputes that the
empirical data were interesting, but that is not the point:  we attacked
their paper because of their conclusion about the theoretical issue of
sparse vs distributed representations, and the wider issue about grandmother
cells.  In that context, it is not true that, as you put it below, the
authors "only [claimed] to have gathered some information on empirical
constraints on how neural knowledge representation may operate".  They went
beyond just claiming that they had gathered some relevant data:  they tried
to say what that data implied.



Richard Loosemore








Their conclusion, to quote them, is that

"
How neurons encode different percepts is one of the most intriguing
questions in neuroscience. Two extreme hypotheses are
schemes based on the explicit representations by highly selective
(cardinal, gnostic or grandmother) neurons and schemes that rely on
an implicit representation over a very broad and distributed population
of neurons [1–4,6]. In the latter case, recognition would require the
simultaneous activation of a large number of cells and therefore we
would expect each cell to respond to many pictures with similar basic
features. This is in contrast to the sparse firing we observe, because
most MTL cells do not respond to the great majority of images seen
by the patient. Furthermore, cells signal a particular individual or
object in an explicit manner [27], in the sense that the presence of the
individual can, in principle, be reliably decoded from a very small
number of neurons. We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
first, some of these units responded to pictures of more than one
individual or object; second, given the limited duration of our
recording sessions, we can only explore a tiny portion of stimulus
space; and third, the fact that we can discover in this short time some
images—such as photographs of Jennifer Aniston—that drive the
cells suggests that each cell might represent more than one class of
images. Yet, this subset of MTL cells is selectively activated by
different views of individuals, landmarks, animals or objects. This
is quite distinct from a completely distributed population code and
suggests a sparse, explicit and invariant encoding of visual percepts in
MTL.
"

The only thing that bothers me about the paper is that the title

"
Invariant visual representation by single neurons in
the human brain
"

does not actually reflect the conclusions drawn.  A title like

"
Invariant visual representation by sparse neuronal population encodings in
the human brain
"

Re: [agi] Hunting for a Brainy Computer

2008-11-24 Thread Ben Goertzel
Richard,

It might be more useful to discuss more recent papers by the same
authors regarding the same topic, such as the more accurately-titled

***
Sparse but not "Grandmother-cell" coding in the medial temporal lobe.
Quian Quiroga R, Kreiman G, Koch C and Fried I.
Trends in Cognitive Sciences. 12: 87-91; 2008
***

at

http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/

-- Ben G

On Mon, Nov 24, 2008 at 1:32 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Ben Goertzel wrote:
>>
>> Hi,
>>
>> BTW, I just read this paper
>>
>>
>>> For example, in Loosemore & Harley (in press) you can find an analysis of
>>> a
>>> paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the
>>> latter
>>> try to claim they have evidence in favor of grandmother neurons (or
>>> sparse
>>> collections of grandmother neurons) and against the idea of distributed
>>> representations.
>>
>> which I found at
>>
>>  http://www.vis.caltech.edu/~rodri/
>>
>> and I strongly disagree that
>>
>>> We showed their conclusion to be incoherent.  It was deeply implausible,
>>> given the empirical data they reported.
>
>
> The claim that Harley and I made - which you quote above - was the
> *conclusion* sentence that summarized a detailed explanation of our
> reasoning.
>
> That reasoning was in our original paper, and I also went to the trouble of
> providing a longer version of it in one of my last posts on this thread.  I
> showed, in that argument, that their claims about sparse vs distributed
> representations were incoherent, because they had not thought through the
> implications contained in their own words - part of which you quote below.
>
> Merely quoting their words again, without resolving the inconsistencies that
> we pointed out, proves nothing.
>
> We analyzed that paper because it was one of several that engendered a huge
> amount of publicity.  All of that publicity - which, as far as we can see,
> the authors did not have any problem with - had to do with the claims about
> grandmother cells, sparseness and distributed representations.  Nobody - not
> I, not Harley, and nobody else as far as I know - disputes that the
> empirical data were interesting, but that is not the point:  we attacked
> their paper because of their conclusion about the theoretical issue of
> sparse vs distributed representations, and the wider issue about grandmother
> cells.  In that context, it is not true that, as you put it below, the
> authors "only [claimed] to have gathered some information on empirical
> constraints on how neural knowledge representation may operate".  They went
> beyond just claiming that they had gathered some relevant data:  they tried
> to say what that data implied.
>
>
>
> Richard Loosemore
>
>
>
>
>
>
>
>> Their conclusion, to quote them, is that
>>
>> "
>> How neurons encode different percepts is one of the most intriguing
>> questions in neuroscience. Two extreme hypotheses are
>> schemes based on the explicit representations by highly selective
>> (cardinal, gnostic or grandmother) neurons and schemes that rely on
>> an implicit representation over a very broad and distributed population
>> of neurons [1–4,6]. In the latter case, recognition would require the
>> simultaneous activation of a large number of cells and therefore we
>> would expect each cell to respond to many pictures with similar basic
>> features. This is in contrast to the sparse firing we observe, because
>> most MTL cells do not respond to the great majority of images seen
>> by the patient. Furthermore, cells signal a particular individual or
>> object in an explicit manner [27], in the sense that the presence of the
>> individual can, in principle, be reliably decoded from a very small
>> number of neurons. We do not mean to imply the existence of single
>> neurons coding uniquely for discrete percepts for several reasons:
>> first, some of these units responded to pictures of more than one
>> individual or object; second, given the limited duration of our
>> recording sessions, we can only explore a tiny portion of stimulus
>> space; and third, the fact that we can discover in this short time some
>> images—such as photographs of Jennifer Aniston—that drive the
>> cells suggests that each cell might represent more than one class of
>> images. Yet, this subset of MTL cells is selectively activated by
>> different views of individuals, landmarks, animals or objects. This
>> is quite distinct from a completely distributed population code and
>> suggests a sparse, explicit and invariant encoding of visual percepts in
>> MTL.
>> "
>>
>> The only thing that bothers me about the paper is that the title
>>
>> "
>> Invariant visual representation by single neurons in
>> the human brain
>> "
>>
>> does not actually reflect the conclusions drawn.  A title like
>>
>> "
>> Invariant visual representation by sparse neuronal population encodings in
>> the human brain
>> "
>>
>> would have reflected their actual conclusions a lot better.

Re: [agi] Hunting for a Brainy Computer

2008-11-24 Thread Richard Loosemore

Ben Goertzel wrote:

Hi,

BTW, I just read this paper



For example, in Loosemore & Harley (in press) you can find an analysis of a
paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
try to claim they have evidence in favor of grandmother neurons (or sparse
collections of grandmother neurons) and against the idea of distributed
representations.


which I found at

 http://www.vis.caltech.edu/~rodri/

and I strongly disagree that


We showed their conclusion to be incoherent.  It was deeply implausible,
given the empirical data they reported.



The claim that Harley and I made - which you quote above - was the 
*conclusion* sentence that summarized a detailed explanation of our 
reasoning.


That reasoning was in our original paper, and I also went to the trouble 
of providing a longer version of it in one of my last posts on this 
thread.  I showed, in that argument, that their claims about sparse vs 
distributed representations were incoherent, because they had not 
thought through the implications contained in their own words - part of 
which you quote below.


Merely quoting their words again, without resolving the inconsistencies 
that we pointed out, proves nothing.


We analyzed that paper because it was one of several that engendered a 
huge amount of publicity.  All of that publicity - which, as far as we 
can see, the authors did not have any problem with - had to do with the 
claims about grandmother cells, sparseness and distributed 
representations.  Nobody - not I, not Harley, and nobody else as far as 
I know - disputes that the empirical data were interesting, but that is 
not the point:  we attacked their paper because of their conclusion 
about the theoretical issue of sparse vs distributed representations, 
and the wider issue about grandmother cells.  In that context, it is not 
true that, as you put it below, the authors "only [claimed] to have 
gathered some information on empirical constraints on how neural 
knowledge representation may operate".  They went beyond just claiming 
that they had gathered some relevant data:  they tried to say what that 
data implied.




Richard Loosemore








Their conclusion, to quote them, is that

"
How neurons encode different percepts is one of the most intriguing
questions in neuroscience. Two extreme hypotheses are
schemes based on the explicit representations by highly selective
(cardinal, gnostic or grandmother) neurons and schemes that rely on
an implicit representation over a very broad and distributed population
of neurons [1–4,6]. In the latter case, recognition would require the
simultaneous activation of a large number of cells and therefore we
would expect each cell to respond to many pictures with similar basic
features. This is in contrast to the sparse firing we observe, because
most MTL cells do not respond to the great majority of images seen
by the patient. Furthermore, cells signal a particular individual or
object in an explicit manner [27], in the sense that the presence of the
individual can, in principle, be reliably decoded from a very small
number of neurons. We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
first, some of these units responded to pictures of more than one
individual or object; second, given the limited duration of our
recording sessions, we can only explore a tiny portion of stimulus
space; and third, the fact that we can discover in this short time some
images—such as photographs of Jennifer Aniston—that drive the
cells suggests that each cell might represent more than one class of
images. Yet, this subset of MTL cells is selectively activated by
different views of individuals, landmarks, animals or objects. This
is quite distinct from a completely distributed population code and
suggests a sparse, explicit and invariant encoding of visual percepts in
MTL.
"

The only thing that bothers me about the paper is that the title

"
Invariant visual representation by single neurons in
the human brain
"

does not actually reflect the conclusions drawn.  A title like

"
Invariant visual representation by sparse neuronal population encodings in
the human brain
"

would have reflected their actual conclusions a lot better.  But the paper's
conclusion clearly says

"
We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
"

I see some incoherence between the title and the paper's contents,
which is a bit frustrating, but no incoherence in the paper's conclusion,
nor between the data and the conclusion.

According to what the paper says, the authors do not claim to have
solved the neural knowledge representation problem, but only to have
gathered some information on empirical constraints on how neural
knowledge representation may operate.

-- Ben G



Re: [agi] Hunting for a Brainy Computer

2008-11-24 Thread Mike Tintner

Ben,

Thanks for this analysis. V interesting. A question:

Are these investigations all being framed along the lines of :  "are 
invariant representations encoded in single neurons/sparse neuronal 
populations/distributed neurons?" IOW the *location* of the representation? 
Is anyone actually speculating about what *form* the invariant 
representation takes? What form IOW will the Jennifer Aniston concept take 
in the brain? Will it be, say,  a visual face, or the symbols "Jennifer 
Aniston", or some mentalese abstract symbols (whatever they might be), or 
what? Until you speculate about the invariant form, it seems to me, your 
investigations are going to be somewhat confused.


Ben:
BTW, I just read this paper


For example, in Loosemore & Harley (in press) you can find an analysis of 
a
paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the 
latter

try to claim they have evidence in favor of grandmother neurons (or sparse
collections of grandmother neurons) and against the idea of distributed
representations.


which I found at

http://www.vis.caltech.edu/~rodri/

and I strongly disagree that


We showed their conclusion to be incoherent.  It was deeply implausible,
given the empirical data they reported.


Their conclusion, to quote them, is that

"
How neurons encode different percepts is one of the most intriguing
questions in neuroscience. Two extreme hypotheses are
schemes based on the explicit representations by highly selective
(cardinal, gnostic or grandmother) neurons and schemes that rely on
an implicit representation over a very broad and distributed population
of neurons [1–4,6]. In the latter case, recognition would require the
simultaneous activation of a large number of cells and therefore we
would expect each cell to respond to many pictures with similar basic
features. This is in contrast to the sparse firing we observe, because
most MTL cells do not respond to the great majority of images seen
by the patient. Furthermore, cells signal a particular individual or
object in an explicit manner [27], in the sense that the presence of the
individual can, in principle, be reliably decoded from a very small
number of neurons. We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
first, some of these units responded to pictures of more than one
individual or object; second, given the limited duration of our
recording sessions, we can only explore a tiny portion of stimulus
space; and third, the fact that we can discover in this short time some
images—such as photographs of Jennifer Aniston—that drive the
cells suggests that each cell might represent more than one class of
images. Yet, this subset of MTL cells is selectively activated by
different views of individuals, landmarks, animals or objects. This
is quite distinct from a completely distributed population code and
suggests a sparse, explicit and invariant encoding of visual percepts in
MTL.
"

The only thing that bothers me about the paper is that the title

"
Invariant visual representation by single neurons in
the human brain
"

does not actually reflect the conclusions drawn.  A title like

"
Invariant visual representation by sparse neuronal population encodings in
the human brain
"

would have reflected their actual conclusions a lot better.  But the paper's
conclusion clearly says

"
We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
"

I see some incoherence between the title and the paper's contents,
which is a bit frustrating, but no incoherence in the paper's conclusion,
nor between the data and the conclusion.

According to what the paper says, the authors do not claim to have
solved the neural knowledge representation problem, but only to have
gathered some information on empirical constraints on how neural
knowledge representation may operate.

-- Ben G




Re: [agi] Hunting for a Brainy Computer

2008-11-24 Thread Ben Goertzel
Hi,

BTW, I just read this paper


> For example, in Loosemore & Harley (in press) you can find an analysis of a
> paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
> try to claim they have evidence in favor of grandmother neurons (or sparse
> collections of grandmother neurons) and against the idea of distributed
> representations.

which I found at

 http://www.vis.caltech.edu/~rodri/

and I strongly disagree that

> We showed their conclusion to be incoherent.  It was deeply implausible,
> given the empirical data they reported.

Their conclusion, to quote them, is that

"
How neurons encode different percepts is one of the most intriguing
questions in neuroscience. Two extreme hypotheses are
schemes based on the explicit representations by highly selective
(cardinal, gnostic or grandmother) neurons and schemes that rely on
an implicit representation over a very broad and distributed population
of neurons [1–4,6]. In the latter case, recognition would require the
simultaneous activation of a large number of cells and therefore we
would expect each cell to respond to many pictures with similar basic
features. This is in contrast to the sparse firing we observe, because
most MTL cells do not respond to the great majority of images seen
by the patient. Furthermore, cells signal a particular individual or
object in an explicit manner [27], in the sense that the presence of the
individual can, in principle, be reliably decoded from a very small
number of neurons. We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
first, some of these units responded to pictures of more than one
individual or object; second, given the limited duration of our
recording sessions, we can only explore a tiny portion of stimulus
space; and third, the fact that we can discover in this short time some
images—such as photographs of Jennifer Aniston—that drive the
cells suggests that each cell might represent more than one class of
images. Yet, this subset of MTL cells is selectively activated by
different views of individuals, landmarks, animals or objects. This
is quite distinct from a completely distributed population code and
suggests a sparse, explicit and invariant encoding of visual percepts in
MTL.
"

The only thing that bothers me about the paper is that the title

"
Invariant visual representation by single neurons in
the human brain
"

does not actually reflect the conclusions drawn.  A title like

"
Invariant visual representation by sparse neuronal population encodings in
the human brain
"

would have reflected their actual conclusions a lot better.  But the paper's
conclusion clearly says

"
We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
"

I see some incoherence between the title and the paper's contents,
which is a bit frustrating, but no incoherence in the paper's conclusion,
nor between the data and the conclusion.

According to what the paper says, the authors do not claim to have
solved the neural knowledge representation problem, but only to have
gathered some information on empirical constraints on how neural
knowledge representation may operate.

-- Ben G




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Ben Goertzel wrote:

I don't think Quiroga et al's statements are contradictory, just
irritatingly vague...

I agree w Richard that the distributed vs sparse dichotomy is poorly
framed and in large part a bogus dichotomy

I feel the same way about the symbolic vs subsymbolic dichotomy...

Many of the conceptual distinctions at the heart of standard cognitive
science theory are very poorly defined, it's disappointing...


Well, we agree on that much then. ;-)


All I can say is that I am working my way through the entire corpus of 
knowledge in cog sci, attempting to unify it in such a way that it 
really does all hang together, and become well defined enough to be both 
testable and buildable as a complete AGI.


The paper I wrote with Harley, and the more recent one on consciousness, 
were just a couple of opening salvos in that effort.






Richard Loosemore




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Ben Goertzel
I don't think Quiroga et al's statements are contradictory, just
irritatingly vague...

I agree w Richard that the distributed vs sparse dichotomy is poorly
framed and in large part a bogus dichotomy

I feel the same way about the symbolic vs subsymbolic dichotomy...

Many of the conceptual distinctions at the heart of standard cognitive
science theory are very poorly defined, it's disappointing...

-- ben G

On Sat, Nov 22, 2008 at 12:03 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Vladimir Nesov wrote:
>>
>> On Sat, Nov 22, 2008 at 12:30 AM, Richard Loosemore <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> They want some kind of mixture of "sparse" and "multiply redundant" and
>>> "not
>>> distributed".  The whole point of what we wrote was that there is no
>>> consistent interpretation of what they tried to give as their conclusion.
>>>  If you think there is, bring it out and put it side by side with what we
>>> said.
>>>
>>
>> There is always a consistent interpretation that drops their
>> interpretation altogether and leaves the data. I don't see their
>> interpretation as strongly asserting anything. They are just saying
>> the same thing in a different language you don't like or consider
>> meaningless, but it's a question of definitions and style, not
>> essence, as long as the audience of the paper doesn't get confused.
>>
>
> Let me spell it out carefully.
>
> If we try to buy their suggestion that the MTL represents concepts (such as
> "Jennifer Aniston") in a "sparse" manner, then this means that a fraction S
> of the neurons in MTL encode Jennifer Aniston, and the fraction is small.
>
> Now, if the fraction S is small, then the probability of Quiroga et al
> hitting some neuron in the set, using a random probe, is also small.
>
> Agreed?
>
> Clearly, as Quiroga et al point out themselves, if the probability S is very
> small, we should be surprised if that random probe actually did find a
> Jennifer Aniston cell.
>
> So...
>
> To make the argument work, they have to suggest that the number of Jennifer
> Aniston cells is actually a very significant percentage of the total number
> of cells.  In other words, "sparse" must mean "about one in every hundred
> cells", or something like that (it's late, and I am tired, so I am not about
> to do the math, but if Quiroga et al do about a hundred probes and *one* of
> those is a JA cell, it clearly cannot be one in a million cells).
>
> Agreed?
>
> But, if that is the case, then each cell must be encoding many concepts,
> because otherwise there would not be enough cells to encode more than about
> a hundred concepts, would there?  They admit this in the paper: "each cell
> might represent more than one class of images".  But there are perhaps
> hundreds of thousands of different images that a given person can recognize,
> so in that case, each neuron must be representing (of the order of)
> thousands of images.
>
> The points that Harley and I made were:
>
> 1) In what sense is the representation "sparse" and "not distributed" if
> each neuron encodes thousands of images?  Roughly one percent of the neurons
> in the MTL are used for each concept, and each neuron represents thousands
> of other concepts:  this is just as accurate a description of a
> "distributed" representation, and it is a long way from anything that
> resembles a "grandmother cell" situation.
>
> And yet, Quiroga et al give their paper the title "Invariant visual
> representation by single neurons in the human brain".  They say SINGLE
> neurons, when what is implied is that 1% of the entire MTL (or roughly that
> number) is dedicated to representing a concept like Jennifer Aniston.  They
> seem to want to have their cake and eat it too:  they put "single neurons"
> in the title, but buried in their logic is the implication that vast numbers
> of neurons are redundantly coding for each concept.  That is an *incoherent*
> claim.
>
> 2) This entire discussion of the contrast between sparse and distributed
> representations has about it the implication that "neurons" are a unit that
> has some functional meaning, when talking about concepts.  But Harley and I
> described an example of a different (more sophisticated) way to encode
> concepts, in which it made no sense to talk about these particular neurons
> as encoding particular concepts.  The neurons were just playing the role of
> dumb constituents in a larger structure, while the actual concepts were (in
> essence) patterns of activation that were just passing through.
>
> This alternate conception of what might be going on leads us to the
> conclusion that the distinction Quiroga et al make between "sparse" and
> "distributed" is not necessarily meaningful at all.  In our alternate
> conception, the distinction is meaningless, and the conclusion that Quiroga
> et al draw (that there is "an invariant, sparse and explicit code") is not
> valid - it is only a coherent conclusion if we buy the idea that individual
> neurons are doing some representing of concepts.

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Vladimir Nesov wrote:

On Sat, Nov 22, 2008 at 12:30 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

They want some kind of mixture of "sparse" and "multiply redundant" and "not
distributed".  The whole point of what we wrote was that there is no
consistent interpretation of what they tried to give as their conclusion.
 If you think there is, bring it out and put it side by side with what we
said.



There is always a consistent interpretation that drops their
interpretation altogether and leaves the data. I don't see their
interpretation as strongly asserting anything. They are just saying
the same thing in a different language, one you don't like or consider
meaningless, but it's a question of definitions and style, not
essence, as long as the audience of the paper doesn't get confused.



Let me spell it out carefully.

If we try to buy their suggestion that the MTL represents concepts (such 
as "Jennifer Aniston") in a "sparse" manner, then this means that a 
fraction S of the neurons in MTL encode Jennifer Aniston, and the 
fraction is small.


Now, if the fraction S is small, then the probability of Quiroga et al 
hitting some neuron in the set, using a random probe, is also small.


Agreed?

Clearly, as Quiroga et al point out themselves, if the probability S is 
very small, we should be surprised if that random probe actually did 
find a Jennifer Aniston cell.


So...

To make the argument work, they have to suggest that the number of 
Jennifer Aniston cells is actually a very significant percentage of the 
total number of cells.  In other words, "sparse" must mean "about one in 
every hundred cells", or something like that (it's late, and I am tired, 
so I am not about to do the math, but if Quiroga et al do about a 
hundred probes and *one* of those is a JA cell, it clearly cannot be one 
in a million cells).


Agreed?

But, if that is the case, then each cell must be encoding many concepts, 
because otherwise there would not be enough cells to encode more than 
about a hundred concepts, would there?  They admit this in the paper: 
"each cell might represent more than one class of images".  But there 
are perhaps hundreds of thousands of different images that a given 
person can recognize, so in that case, each neuron must be representing 
(of the order of) thousands of images.
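The arithmetic sketched above can be made concrete. A minimal calculation (the neuron fraction and probe count below are illustrative assumptions, not figures from the paper):

```python
# Probability that at least one of K randomly probed neurons belongs
# to the fraction S of neurons that encode a given concept.
# All numbers are illustrative assumptions, not data from Quiroga et al.

def p_hit(S, K):
    """P(at least one of K independent random probes lands on a concept cell)."""
    return 1.0 - (1.0 - S) ** K

K = 100  # rough order of the number of probed cells (assumption)

# A truly sparse code (one cell in a million): a hit would be astonishing.
print(p_hit(1e-6, K))   # ~1e-4

# "Sparse" meaning roughly one cell in a hundred: a hit becomes likely.
print(p_hit(0.01, K))   # ~0.63
```

This is the tension in the argument: for the probe to have a fair chance of finding a Jennifer Aniston cell, S must be on the order of a percent, not one in a million.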


The points that Harley and I made were:

1) In what sense is the representation "sparse" and "not distributed" if 
each neuron encodes thousands of images?  Roughly one percent of the 
neurons in the MTL are used for each concept, and each neuron represents 
thousands of other concepts:  this is just as accurate a description of 
a "distributed" representation, and it is a long way from anything that 
resembles a "grandmother cell" situation.


And yet, Quiroga et al give their paper the title "Invariant visual 
representation by single neurons in the human brain".  They say SINGLE 
neurons, when what is implied is that 1% of the entire MTL (or roughly 
that number) is dedicated to representing a concept like Jennifer 
Aniston.  They seem to want to have their cake and eat it too:  they put 
"single neurons" in the title, but buried in their logic is the 
implication that vast numbers of neurons are redundantly coding for each 
concept.  That is an *incoherent* claim.


2) This entire discussion of the contrast between sparse and distributed 
representations has about it the implication that "neurons" are a unit 
that has some functional meaning, when talking about concepts.  But 
Harley and I described an example of a different (more sophisticated) way 
to encode concepts, in which it made no sense to talk about these 
particular neurons as encoding particular concepts.  The neurons were 
just playing the role of dumb constituents in a larger structure, while 
the actual concepts were (in essence) patterns of activation that were 
just passing through.
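The "dumb constituents" alternative can be illustrated with a toy model (all numbers below are illustrative assumptions, not claims about the MTL):

```python
# Toy model of the "dumb constituents" alternative: concepts are random
# activation patterns over a shared pool of units, so no single unit
# represents any concept. All numbers are illustrative assumptions.
import random

random.seed(0)
N_UNITS, N_CONCEPTS, ACTIVE = 1000, 5000, 10

# Each concept is a random set of ACTIVE co-active units.
patterns = {c: set(random.sample(range(N_UNITS), ACTIVE))
            for c in range(N_CONCEPTS)}

# A single-unit recording: how many concepts does unit 0 "respond to"?
unit = 0
driven = sum(1 for p in patterns.values() if unit in p)
print(driven)  # on average N_CONCEPTS * ACTIVE / N_UNITS = 50
```

An experimenter probing single units in this model would find each unit firing for dozens of unrelated concepts; the meaningful object is the pattern, not the unit, so labels like "sparse" and "distributed" do not attach cleanly to the cells.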


This alternate conception of what might be going on leads us to the 
conclusion that the distinction Quiroga et al make between "sparse" and 
"distributed" is not necessarily meaningful at all.  In our alternate 
conception, the distinction is meaningless, and the conclusion that 
Quiroga et al draw (that there is "an invariant, sparse and explicit 
code") is not valid - it is only a coherent conclusion if we buy the 
idea that individual neurons are doing some representing of concepts.


In other words, the conclusion was incoherent in this sense also.  It 
was theory laden.




The whole mess is summed up quite well by a statement that they make:


"In the ... case [of distributed representation], recognition would 
require the simultaneous activation of a large number of cells and 
therefore we would expect each cell to respond to many pictures with 
similar basic features.  This is in contrast to the sparse firing we 
observe, because most MTL cells do not respond to the great majority of 
images seen by the patient."



But the only way to make their "sparse" interpretation

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Steve Richfield
Bringing this back to the earlier discussion: what could be happening (not
to say that it is provably happening, but there is certainly no evidence,
that I know of, against it) is the following, with probabilities represented
internally by voltages proportional to the logarithm of the probability;
the external representation varies, e.g. spike rate for spiking neurons.

Dendritic trees could be a sort of Bayesian AND, and the neurons themselves
could be a sort of Bayesian OR of the dendrites. If each dendrite were
completely unrelated to the others, e.g. one computed some aspect of "tree",
another some aspect of "sweet", another some aspect of "angry", etc., then
the dendrites on other neurons could easily assemble whatever they needed,
with lots of other extraneous things OR'd onto the inputs. This sounds like
a mess, but it works. Consider: Any one individual thing only occurs rarely.
If not, it will be differentiated until it is rare. Additive noise on the
inputs of a Bayesian AND only affects the output when ALL of the other
inputs are non-zero. When these two rare events happen simultaneously,
whatever the dendrite is looking for and another event that adds to one of
its inputs, the output will be slightly increased. How slight? It appears
that CNS (Central Nervous System) neurons have ~50K synapses, of which ~200
have efficacies >0 at any one time. Hence, noise might contribute ~1% to the
output - too little to be concerned much about.
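One reading of this proposal (a sketch, not an established neural model) is that a dendrite sums log-probabilities of its inputs, and the soma combines dendrites by summing the corresponding probabilities:

```python
# One reading of the dendrite-as-Bayesian-AND, soma-as-OR idea, with
# probabilities carried as log-domain quantities. A sketch of the
# proposal above, not an established neural model.
import math

def dendrite_and(log_ps):
    """AND of independent inputs: log P(all) is the sum of log-probs."""
    return sum(log_ps)

def soma_or(log_ps):
    """OR over (near-)disjoint dendrites: log of the summed probabilities."""
    return math.log(sum(math.exp(lp) for lp in log_ps))

# Two dendrites, each conjoining unrelated features (e.g. aspects of
# "tree" and "sweet"); the cell ORs whichever dendrite fires.
d1 = dendrite_and([math.log(0.9), math.log(0.8)])    # P = 0.72
d2 = dendrite_and([math.log(0.1), math.log(0.05)])   # P = 0.005
out = soma_or([d1, d2])
print(math.exp(out))  # 0.725
```

On this picture, additive noise on one AND input matters only when all the other inputs on that dendrite are simultaneously non-zero, which is the basis of the percent-level noise estimate above.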

Why evolve such a convoluted system? Because cells are MUCH more expensive
than dendrites or synapses. By having a cell handle aspects of many
unrelated things while other cells are doing the same, and ANDing them as
needed, the cell count is minimized. Also, such systems are impervious to
minor damage, cells dying, etc.

Certainly, having a "tree" cell would only help if there were SO many uses
of exactly the same meaning of tree that it would be efficient to do all of
the ANDing in one place. However, a cell doing this could also do the same
for other unrelated things at the same time, bringing us back to the theory.
Hence, until I hear something to deny this theory, I am presuming it to be
correct.

OK, so why isn't this well known? Consider:
1.  The standards for publication of laboratory results are MUCH tighter
than in other areas. If they don't have proof, then they don't publish.
Hence, if you don't know someone who knows about CNS dendrites, you won't
even have anything to think about.
2.  As Loosemore pointed out, the guys in the lab do NOT have skills in
the cognitive, mathematical, or other key areas in which the very cells
they are studying are functioning.

Flashback: I had finally tracked down an important article about observed
synaptic transfer functions and its author in person. Also present was
William Calvin, the neuroscience author who formerly had a laboratory at the
U of Washington. Looking over the functions in the article, I started to
comment on what they might be doing mathematically, whereupon the author
interjected that they had already found functions that fit very closely,
which they had used as a sort of spline, and which weren't anything at all
like the functions I was looking for. I noted that it appeared to me that both
functions produced almost identical results over the observed range, but
mine was derived from mathematical necessity while the ones the author used
as a spline just happened to fit well. The author then asked why even bother
looking for another function that fits after you already have one. At that
point, in exasperation, Calvin took up my side of the discussion, and after
maybe 15 minutes of discussion with the author while I sat quietly and
watched, the author FINALLY understood that these neurons do something in
the real world, and if you have a theory about what that might be, then you
must look at the difference between predicted and actual results to
confirm/deny that theory. Later when I computer-generated points to compare
with the laboratory results, they were spot-on to within measurement
accuracy.

Anyway, this seems to be a good working theory for how our wet engine works,
but it doesn't seem to provide much to help Ben, because inside a computer,
public variables don't cost thousands of times as much as a binary operator;
instead, they are actually cheaper. Hence, there is no reason to combine
unrelated things into what is equivalent to a public variable.

However, this all suggests that attention should be concentrated on
adjectives rather than nouns, adverbs instead of verbs, etc. I noticed this
when hand coding rules for Dr. Eliza - that the modifiers seemed to be much
more important than the referents.

Maybe this hint from wetware will help someone.

Steve Richfield
=
On 11/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> And we don't yet know whether "the assembly keeps reconfiguring its
> representation" for conceptual knowledge ... though we know

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Ben Goertzel wrote:

On Fri, Nov 21, 2008 at 4:44 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben Goertzel wrote:

I saw the  main point of Richard's paper as being that the available
neuroscience data drastically underdetermines the nature of neural
knowledge representation ... so that drawing conclusions about neural
KR from available data involves loads of theoretical presuppositions
...

However, my view is that this is well known among neuroscientists, and
your reading of the Quiroga et al paper supports this...

You have still not answered my previous question about your claim that there
are "essentially no neuroscientists" who say that spiking patterns in single
neurons encode relationships between concepts.



I did reply to that email


Uh, that is not the case, as far as I can see.

Maybe you better check your email stream:  I can see no reply to it here.







And yet now you make another assertion about something that you think is
"well known among neuroscientists", while completely ignoring the actual
argument that Harley and I brought to bear on this issue.


I read that paper a year or two ago, I don't remember the details and don't
feel like looking them up right now, sorry... I was admittedly replying based on
a semi-dim recollection...

My recollection is that you were arguing various neuroscientists were
overinterpreting their data, and drawing cognitive conclusions from fMRI
and other data that were not really warranted by the data without loads of
other theoretical assumptions.  Sorry if this was the wrong take-away point,
but that's what I remember from it ;-)

ben


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com








Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Ben Goertzel
On Fri, Nov 21, 2008 at 4:54 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 22, 2008 at 12:30 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
>> They want some kind of mixture of "sparse" and "multiply redundant" and "not
>> distributed".  The whole point of what we wrote was that there is no
>> consistent interpretation of what they tried to give as their conclusion.
>>  If you think there is, bring it out and put it side by side with what we
>> said.
>>
>
> There is always a consistent interpretation that drops their
> interpretation altogether and leaves the data. I don't see their
> interpretation as strongly asserting anything. They are just saying
> the same thing in a different language, one you don't like or consider
> meaningless, but it's a question of definitions and style, not
> essence, as long as the audience of the paper doesn't get confused.
>
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/


Yes, neuroscientists rarely try to give a coherent cognitive-level
theory ... that is not their job.

I think the data they are gathering is valuable, but they are probably
not going to be the ones to eventually weave it into a coherent and
detailed cognitive theory.

I got fed up with neuroscience in the 1990s after proposing a lot of
nice cognitive theories, and finding that available neuroscience data
was not adequate to verify or refute any of them.  Unfortunately, in
2008 this is still basically the case ... brain imaging tech has a
long way to go...

-- Ben G




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
On Sat, Nov 22, 2008 at 12:30 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> They want some kind of mixture of "sparse" and "multiply redundant" and "not
> distributed".  The whole point of what we wrote was that there is no
> consistent interpretation of what they tried to give as their conclusion.
>  If you think there is, bring it out and put it side by side with what we
> said.
>

There is always a consistent interpretation that drops their
interpretation altogether and leaves the data. I don't see their
interpretation as strongly asserting anything. They are just saying
the same thing in a different language, one you don't like or consider
meaningless, but it's a question of definitions and style, not
essence, as long as the audience of the paper doesn't get confused.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Ben Goertzel
On Fri, Nov 21, 2008 at 4:44 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Ben Goertzel wrote:
>>
>> I saw the  main point of Richard's paper as being that the available
>> neuroscience data drastically underdetermines the nature of neural
>> knowledge representation ... so that drawing conclusions about neural
>> KR from available data involves loads of theoretical presuppositions
>> ...
>>
>> However, my view is that this is well known among neuroscientists, and
>> your reading of the Quiroga et al paper supports this...
>
> You have still not answered my previous question about your claim that there
> are "essentially no neuroscientists" who say that spiking patterns in single
> neurons encode relationships between concepts.
>

I did reply to that email

> And yet now you make another assertion about something that you think is
> "well known among neuroscientists", while completely ignoring the actual
> argument that Harley and I brought to bear on this issue.

I read that paper a year or two ago, I don't remember the details and don't
feel like looking them up right now, sorry... I was admittedly replying based on
a semi-dim recollection...

My recollection is that you were arguing various neuroscientists were
overinterpreting their data, and drawing cognitive conclusions from fMRI
and other data that were not really warranted by the data without loads of
other theoretical assumptions.  Sorry if this was the wrong take-away point,
but that's what I remember from it ;-)

ben




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Ben Goertzel wrote:

I saw the  main point of Richard's paper as being that the available
neuroscience data drastically underdetermines the nature of neural
knowledge representation ... so that drawing conclusions about neural
KR from available data involves loads of theoretical presuppositions
...

However, my view is that this is well known among neuroscientists, and
your reading of the Quiroga et al paper supports this...


You have still not answered my previous question about your claim that 
there are "essentially no neuroscientists" who say that spiking patterns 
in single neurons encode relationships between concepts.


And yet now you make another assertion about something that you think is 
"well known among neuroscientists", while completely ignoring the actual 
argument that Harley and I brought to bear on this issue.




Richard Loosemore





ben g

On Fri, Nov 21, 2008 at 1:33 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

On Fri, Nov 21, 2008 at 8:34 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

No, object-concepts and the like.  Not place, motion or action 'concepts'.

For example, Quiroga et al showed their subjects pictures of famous places
and people, then made assertions about how those things were represented.


Now that I have a bit better understanding of neuroscience than a year
ago, I reread relevant part of your paper and skimmed the Quiroga et
al's paper ("Invariant visual representation by single neurons in the
human brain", for those who don't want to look it up in Richard's
paper). I don't see a significant disagreement. They didn't mean to
imply the obviously wrong assertion that there are only a few cells
corresponding to each high-level concept (to quote: "the fact that we
can discover in this short time some images -- such as photographs of
Jennifer Aniston -- that drive the cells, suggests that each cell
might represent more than one class of images"). Sparse and
distributed representations are mentioned as extreme perspectives, not
a dichotomy. Results certainly have some properties of sparse
representation, as opposed to extremely distributed, which doesn't
mean that results imply extremely sparse representation. Observed
cells as correlates of high-level concepts were surprisingly invariant
to the form in which that high-level concept was presented, which does
suggest that representation is much more explicit than in the
extremely distributed case. Of course, it's not completely explicit.

So, at this point I see at least this item in your paper as a strawman
objection (given that I didn't revisit other items).

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/













Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Vladimir Nesov wrote:

On Fri, Nov 21, 2008 at 8:34 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

No, object-concepts and the like.  Not place, motion or action 'concepts'.

For example, Quiroga et al showed their subjects pictures of famous places
and people, then made assertions about how those things were represented.



Now that I have a bit better understanding of neuroscience than a year
ago, I reread relevant part of your paper and skimmed the Quiroga et
al's paper ("Invariant visual representation by single neurons in the
human brain", for those who don't want to look it up in Richard's
paper). I don't see a significant disagreement. They didn't mean to
imply the obviously wrong assertion that there are only a few cells
corresponding to each high-level concept (to quote: "the fact that we
can discover in this short time some images -- such as photographs of
Jennifer Aniston -- that drive the cells, suggests that each cell
might represent more than one class of images"). Sparse and
distributed representations are mentioned as extreme perspectives, not
a dichotomy. Results certainly have some properties of sparse
representation, as opposed to extremely distributed, which doesn't
mean that results imply extremely sparse representation. Observed
cells as correlates of high-level concepts were surprisingly invariant
to the form in which that high-level concept was presented, which does
suggest that representation is much more explicit than in the
extremely distributed case. Of course, it's not completely explicit.

So, at this point I see at least this item in your paper as a strawman
objection (given that I didn't revisit other items).



Not correct.  We covered all the possible interpretations of what they 
said.  All you have done above is to quote back their words, without 
taking into account the fact that we thought through the implications of 
what they said, and pointed out that those implications did not make any 
sense.


They want some kind of mixture of "sparse" and "multiply redundant" and 
"not distributed".  The whole point of what we wrote was that there is 
no consistent interpretation of what they tried to give as their 
conclusion.  If you think there is, bring it out and put it side by side 
with what we said.


But please, it doesn't help to just repeat back what they said, and 
declare that Harley and I were wrong.




Richard Loosemore




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Ben Goertzel
I saw the  main point of Richard's paper as being that the available
neuroscience data drastically underdetermines the nature of neural
knowledge representation ... so that drawing conclusions about neural
KR from available data involves loads of theoretical presuppositions
...

However, my view is that this is well known among neuroscientists, and
your reading of the Quiroga et al paper supports this...

ben g

On Fri, Nov 21, 2008 at 1:33 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 21, 2008 at 8:34 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
>> No, object-concepts and the like.  Not place, motion or action 'concepts'.
>>
>> For example, Quiroga et al showed their subjects pictures of famous places
>> and people, then made assertions about how those things were represented.
>>
>
> Now that I have a bit better understanding of neuroscience than a year
> ago, I reread relevant part of your paper and skimmed the Quiroga et
> al's paper ("Invariant visual representation by single neurons in the
> human brain", for those who don't want to look it up in Richard's
> paper). I don't see a significant disagreement. They didn't mean to
> imply the obviously wrong assertion that there are only a few cells
> corresponding to each high-level concept (to quote: "the fact that we
> can discover in this short time some images -- such as photographs of
> Jennifer Aniston -- that drive the cells, suggests that each cell
> might represent more than one class of images"). Sparse and
> distributed representations are mentioned as extreme perspectives, not
> a dichotomy. Results certainly have some properties of sparse
> representation, as opposed to extremely distributed, which doesn't
> mean that results imply extremely sparse representation. Observed
> cells as correlates of high-level concepts were surprisingly invariant
> to the form in which that high-level concept was presented, which does
> suggest that representation is much more explicit than in the
> extremely distributed case. Of course, it's not completely explicit.
>
> So, at this point I see at least this item in your paper as a strawman
> objection (given that I didn't revisit other items).
>
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects."  -- Robert
Heinlein




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 8:34 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> No, object-concepts and the like.  Not place, motion or action 'concepts'.
>
> For example, Quiroga et al showed their subjects pictures of famous places
> and people, then made assertions about how those things were represented.
>

Now that I have a bit better understanding of neuroscience than a year
ago, I reread relevant part of your paper and skimmed the Quiroga et
al's paper ("Invariant visual representation by single neurons in the
human brain", for those who don't want to look it up in Richard's
paper). I don't see a significant disagreement. They didn't mean to
imply the obviously wrong assertion that there are only a few cells
corresponding to each high-level concept (to quote: "the fact that we
can discover in this short time some images -- such as photographs of
Jennifer Aniston -- that drive the cells, suggests that each cell
might represent more than one class of images"). Sparse and
distributed representations are mentioned as extreme perspectives, not
a dichotomy. Results certainly have some properties of sparse
representation, as opposed to extremely distributed, which doesn't
mean that results imply extremely sparse representation. Observed
cells as correlates of high-level concepts were surprisingly invariant
to the form in which that high-level concept was presented, which does
suggest that representation is much more explicit than in the
extremely distributed case. Of course, it's not completely explicit.

So, at this point I see at least this item in your paper as a strawman
objection (given that I didn't revisit other items).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Charles Hixson

Ben Goertzel wrote:

The neuron = concept 'theory' is extremely broken:  it is so broken, that
when neuroscientists talk about bayesian contingencies being calculated or
encoded by spike timing mechanisms, that claim is incoherent.



This is not always true ... in some cases there are solidly demonstrated
connections between neurally computed bayesian contingencies and
observed perceptual and motor phenomena in organisms...

I agree that no one knows how abstract concepts are represented in the brain,
but for sensorimotor stuff it is not the case that work on bayesian population
coding in the brain is "incoherent"

ben g
  
Also, I believe that in at least a weak sense "grandmother neurons" have 
been located.  I.e., neurons which, when stimulated, result in thoughts 
of one's "grandmother".  I don't know if this has been demonstrated 
beyond particular people, as this is a field of only minor interest to 
me.  But in that sense, also, a neuron would be a concept.


Though one should notice that the neuron wasn't a full representation of 
the grandmother in and of itself.  (Though I doubt that anyone's done 
the study of destroying a "grandmother neuron" and seeing if the memory 
of the grandmother disappeared.)


My non-specialist's model of what's happening here is that prior 
experiences have resulted in some particular neuron being sensitized so 
that it reacts to thoughts of the grandmother and also when stimulated 
causes other cells somewhat similarly trained to respond.  If there is 
sufficient overlap of...I want to say concept, but my model is really 
"zones of activation" with the footnote that these zones aren't 
necessarily, or even probably, groups of cells in physical proximity.


Now personally I don't like neural models, because they are too 
difficult to understand, so I don't pay much attention to them.  But 
this is how I understand the "grandmother neuron" to be created/exist.  
(And I'm really fishing for either corroboration or correction.)
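The "sensitized neuron within a zone of activation" picture above can be caricatured in a few lines, offered in the same fishing-for-correction spirit (a toy Hebbian sketch; the unit count, learning rate, and threshold are arbitrary, not a claim about real neurons):

```python
# Toy Hebbian "zone of activation": units that repeatedly fire together
# strengthen their mutual connections; afterwards, driving any one
# sensitized unit recruits the rest of the zone. Illustration only.
n = 10
w = [[0.0] * n for _ in range(n)]          # pairwise connection weights
zone = {2, 5, 7}  # units co-active during "grandmother" experiences

for _ in range(20):                        # repeated experiences
    for i in zone:
        for j in zone:
            if i != j:
                w[i][j] += 0.1             # Hebbian strengthening

def recall(stimulated, threshold=1.0):
    """Which units get supra-threshold input when one unit is driven."""
    return {j for j in range(n) if w[stimulated][j] > threshold}

print(recall(5))  # {2, 7}: the rest of the zone lights up
print(recall(0))  # set(): an unsensitized unit recruits nothing
```

Note that destroying unit 5 here would degrade but not erase the zone, which matches the point that the single neuron is not the full representation.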





Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Vladimir Nesov wrote:

On Fri, Nov 21, 2008 at 8:09 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben Goertzel wrote:

Richard,

My point was that there are essentially no neuroscientists out there
who believe that concepts are represented by single neurons.  So you
are in vehement agreement with the neuroscience community on this
point.

The idea that concepts may be represented by cell assemblies, or
attractors within cell assemblies, is more prevalent.  I assume
you're familiar with the thinking/writing of for instance Walter
Freeman and Susan Greenfield on these issues.   You may consider them
wrong, but they are not wrong due to obvious errors or due to
obliviousness to cog sci data.

So let me see if I've got this straight:  you are saying that there are
essentially no neuroscientists who talk about spiking patterns in single
neurons encoding relationships between concepts?

Not low-level features, as we discussed before, but medium- to high-level
concepts?

You are saying that when they talk about the spike trains encoding bayesian
contingencies, they NEVER mean, or imply, contingencies between concepts?



What's a concept in this context, Richard? For example, place cells
activate on place fields, pretty palpable correlates; one could say
they represent concepts (and it's not a perceptual correlate). There
are relations between these concepts, prediction of their activity,
encoding of their sequences that plays a role in episodic memory, and
so on. At the same time, the process by which they are computed is
largely unknown: individual cells perform some kind of transformation
on input from other cells, but how much of the concept is encoded in
the cells themselves rather than in the cells they receive input from
is also unknown. Since they jump on all kinds of contextual cues, their
activity likely depends to some extent on activity in most of the
brain, but that doesn't invalidate analysis of individual cells or
small areas of cortex, just as the gravitational pull of Mars doesn't
invalidate approximate calculations made on Earth according to
Newton's laws. I don't quite see what you are criticizing, apart from
specific examples of apparent confusion.


No, object-concepts and the like.  Not place, motion or action 'concepts'.

For example, Quiroga et al showed their subjects pictures of famous 
places and people, then made assertions about how those things were 
represented.




Richard Loosemore




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 8:09 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Ben Goertzel wrote:
>>
>> Richard,
>>
>> My point was that there are essentially no neuroscientists out there
>> who believe that concepts are represented by single neurons.  So you
>> are in vehement agreement with the neuroscience community on this
>> point.
>>
>> The idea that concepts may be represented by cell assemblies, or
>> attractors within cell assemblies, is more prevalent.  I assume
>> you're familiar with the thinking/writing of for instance Walter
>> Freeman and Susan Greenfield on these issues.   You may consider them
>> wrong, but they are not wrong due to obvious errors or due to
>> obliviousness to cog sci data.
>
> So let me see if I've got this straight:  you are saying that there are
> essentially no neuroscientists who talk about spiking patterns in single
> neurons encoding relationships between concepts?
>
> Not low-level features, as we discussed before, but medium- to high-level
> concepts?
>
> You are saying that when they talk about the spike trains encoding bayesian
> contingencies, they NEVER mean, or imply, contingencies between concepts?
>

What's a concept in this context, Richard? For example, place cells
activate on place fields, pretty palpable correlates; one could say
they represent concepts (and it's not a perceptual correlate). There
are relations between these concepts, prediction of their activity,
encoding of their sequences that plays a role in episodic memory, and
so on. At the same time, the process by which they are computed is
largely unknown: individual cells perform some kind of transformation
on input from other cells, but how much of the concept is encoded in
the cells themselves rather than in the cells they receive input from
is also unknown. Since they jump on all kinds of contextual cues, their
activity likely depends to some extent on activity in most of the
brain, but that doesn't invalidate analysis of individual cells or
small areas of cortex, just as the gravitational pull of Mars doesn't
invalidate approximate calculations made on Earth according to
Newton's laws. I don't quite see what you are criticizing, apart from
specific examples of apparent confusion.
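As a concrete toy version of the place-cell case (illustrative only; the tuning shape and numbers are invented, not from any recording study):

```python
def place_cell_rate(position, field_center, field_radius=1.0, peak=15.0):
    """Toy place cell: fires maximally when the animal is at the center
    of its place field and falls off linearly to zero outside it."""
    d = abs(position - field_center)
    if d >= field_radius:
        return 0.0
    return peak * (1.0 - d / field_radius)

# The cell's activity is a palpable correlate of a non-perceptual
# variable (location), which is what makes it decodable:
print(place_cell_rate(2.0, field_center=2.0))  # 15.0: at the field center
print(place_cell_rate(3.5, field_center=2.0))  # 0.0: outside the field
```

The contextual-cue caveat amounts to saying the real dependence has many more arguments than `position`, while such a low-dimensional approximation can still be useful.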

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Ben Goertzel
And we don't yet know whether "the assembly keeps reconfiguring its
representation" for conceptual knowledge ... though we know it's mainly
not true for perceptual and motor knowledge...

On Fri, Nov 21, 2008 at 11:56 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Ben: > The idea that concepts may be represented by cell assemblies, or
>>
>> attractors within cell assemblies, are more prevalent.
>
> Ben,
>
> My question was whether the concepts - or, to be precise, the terms of the
> concepts, e.g. the sounds/letters of the word "ball" - may not be "neuronally
> locatable" (not BTW whether they are represented by single cells). A cell
> assembly would classify as that, no? Unless the assembly keeps reconfiguring
> its representation.
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects."  -- Robert
Heinlein




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

My point was that there are essentially no neuroscientists out there
who believe that concepts are represented by single neurons.  So you
are in vehement agreement with the neuroscience community on this
point.

The idea that concepts may be represented by cell assemblies, or
attractors within cell assemblies, is more prevalent.  I assume
you're familiar with the thinking/writing of for instance Walter
Freeman and Susan Greenfield on these issues.   You may consider them
wrong, but they are not wrong due to obvious errors or due to
obliviousness to cog sci data.


So let me see if I've got this straight:  you are saying that there are 
essentially no neuroscientists who talk about spiking patterns in single 
neurons encoding relationships between concepts?


Not low-level features, as we discussed before, but medium- to 
high-level concepts?


You are saying that when they talk about the spike trains encoding 
bayesian contingencies, they NEVER mean, or imply, contingencies between 
concepts?




Richard Loosemore




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Mike Tintner

Ben: > The idea that concepts may be represented by cell assemblies, or

attractors within cell assemblies, is more prevalent.


Ben,

My question was whether the concepts - or, to be precise, the terms of the 
concepts, e.g. the sounds/letters of the word "ball" - may not be "neuronally 
locatable" (not BTW whether they are represented by single cells). A cell 
assembly would classify as that, no? Unless the assembly keeps reconfiguring 
its representation. 







Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Ben Goertzel
On Fri, Nov 21, 2008 at 11:33 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> RL: So, to clarify:  yes, it is perfectly true that the very low level
> perceptual and motor systems use simple coding techniques.  We have
> known for decades (since Hubel and Wiesel) that retinal ganglion cells
> use simple coding schemes, etc.  But the issue I was discussing was about
> the times when neuroscientists make statements about high level concepts
> and the processing of those concepts. ... it is very difficult to
> see how single neurons (or multiple redundant sets of neurons) could
> carry out those functions.
>
> Well, why is it so discredited?

Mike, it's just not the way the brain works ... we have enough empirical
data to know that...


>At base,  don't concepts like "tree","ball"
> etc. have to be represented by basically the same images as dealt with by
> the perceptual and motor systems - i.e. the sounds and letters of the words,
> "tree" and "ball"? Of course, each, when used, may involve a whole
> additional complex of often shifting associated concepts and images. But
> doesn't the same base image, or cluster of images, have to be used each time
> when we use those words? And couldn't that be neuronally locatable?
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects."  -- Robert
Heinlein




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Mike Tintner

RL: So, to clarify:  yes, it is perfectly true that the very low level 
perceptual and motor systems use simple coding techniques.  We have 
known for decades (since Hubel and Wiesel) that retinal ganglion cells 
use simple coding schemes, etc.  But the issue I was discussing was about 
the times when neuroscientists make statements about high level concepts 
and the processing of those concepts. ... it is very difficult to 
see how single neurons (or multiple redundant sets of neurons) could 
carry out those functions.

Well, why is it so discredited? At base,  don't concepts like "tree","ball" 
etc. have to be represented by basically the same images as dealt with by 
the perceptual and motor systems - i.e. the sounds and letters of the words, 
"tree" and "ball"? Of course, each, when used, may involve a whole 
additional complex of often shifting associated concepts and images. But 
doesn't the same base image, or cluster of images, have to be used each time 
when we use those words? And couldn't that be neuronally locatable?







Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Ben Goertzel
Richard,

My point was that there are essentially no neuroscientists out there
who believe that concepts are represented by single neurons.  So you
are in vehement agreement with the neuroscience community on this
point.

The idea that concepts may be represented by cell assemblies, or
attractors within cell assemblies, is more prevalent.  I assume
you're familiar with the thinking/writing of for instance Walter
Freeman and Susan Greenfield on these issues.   You may consider them
wrong, but they are not wrong due to obvious errors or due to
obliviousness to cog sci data.

-- Ben G

On Fri, Nov 21, 2008 at 10:27 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Steve Richfield wrote:
>>
>> Richard,
>>
>> On 11/20/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:
>>
>>Steve Richfield wrote:
>>
>>Richard,
>> Broad agreement, with one comment from the end of your posting...
>> On 11/20/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:
>>
>>   Another, closely related thing that they do is talk about low
>>level
>>   issues without realizing just how disconnected those are from
>>where
>>   the real story (probably) lies.  Thus, Mohdra emphasizes the
>>   importance of "spike timing" as opposed to average firing rate.
>>
>> There are plenty of experiments that show that consecutive
>>closely-spaced pulses result when something goes "off scale",
>>probably the equivalent to computing Bayesian probabilities >
>>100%, somewhat akin to the "overflow" light on early analog
>>computers. These closely-spaced pulses have a MUCH larger
>>post-synaptic effect than the same number of regularly spaced
>>pulses. However, as far as I know, this only occurs during
>>anomalous situations - maybe when something really new happens,
>>that might trigger learning?
>> IMHO, it is simply not possible to play this game without
>>having a close friend with years of experience poking mammalian
>>neurons. This stuff is simply NOT in the literature.
>>
>>   He may well be right that the pattern or the timing is more
>>   important, but IMO he is doing the equivalent of saying
>>"Let's talk
>>   about the best way to design an algorithm to control an airport.
>>First problem to solve:  should we use Emitter-Coupled Logic
>>in the
>>   transistors that are in our computers that will be running the
>>   algorithms."
>>
>> Still, even with my above comments, your conclusion is still
>>correct.
>>
>>
>>The main problem is that if you interpret spike timing to be playing
>>the role that you (and they) imply above, then you are commiting
>>yourself to a whole raft of assumptions about how knowledge is
>>generally represented and processed.  However, there are *huge*
>>problems with that set of implicit assumptions  not to put too
>>fine a point on it, those implicit assumptions are equivalent to the
>>worst, most backward kind of cognitive theory imaginable.  A theory
>>that is 30 or 40 years out of date.
>>
>>  OK, so how else do you explain that in fairly well understood situations
>> like stretch receptors, that the rate indicates the stretch UNLESS you
>> exceed the mechanical limit of the associated joint, whereupon you start
>> getting pulse doublets, triplets, etc. Further, these pulse groups have a
>> HUGE effect on post synaptic neurons. What does your cognitive science tell
>> you about THAT?
>
> See my parallel reply to Ben's point:  I was talking about the fact that
> neuroscientists make these claims about high level cognition;  I was not
> referring to the cases where they try to explain low-level, sensory and
> motor periphery functions like stretch receptor neurons.
>
> So, to clarify:  yes, it is perfectly true that the very low level
> perceptual and motor systems use simple coding techniques.  We have known
> for decades (since Hubel and Wiesel) that retinal ganglion cells use simple
> coding schemes, etc etc.
>
> But the issue I was discussing was about the times when neuroscientists make
> statements about high level concepts and the processing of those concepts.
>  Many decades ago people suggested that perhaps these concepts were
> represented by single neurons, but that idea was shot down very quickly, and
> over the years we have found such sophisticated information processing
> effects occurring in cognition that it is very difficult to see how single
> neurons (or multiple redundant sets of neurons) could carry out those
> functions.
>
> This idea is so discredited that it is hard to find references on the
> subject:  it has been accepted for so long that it is common knowledge in
> the cognitive science community.

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Richard Loosemore

Steve Richfield wrote:

Richard,

On 11/20/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:


Steve Richfield wrote:

Richard,
 Broad agreement, with one comment from the end of your posting...
 On 11/20/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:

   Another, closely related thing that they do is talk about low
level
   issues without realizing just how disconnected those are from
where
   the real story (probably) lies.  Thus, Mohdra emphasizes the
   importance of "spike timing" as opposed to average firing rate.

 There are plenty of experiments that show that consecutive
closely-spaced pulses result when something goes "off scale",
probably the equivalent to computing Bayesian probabilities >
100%, somewhat akin to the "overflow" light on early analog
computers. These closely-spaced pulses have a MUCH larger
post-synaptic effect than the same number of regularly spaced
pulses. However, as far as I know, this only occurs during
anomalous situations - maybe when something really new happens,
that might trigger learning?
 IMHO, it is simply not possible to play this game without
having a close friend with years of experience poking mammalian
neurons. This stuff is simply NOT in the literature.

   He may well be right that the pattern or the timing is more
   important, but IMO he is doing the equivalent of saying
"Let's talk
   about the best way to design an algorithm to control an airport.
First problem to solve:  should we use Emitter-Coupled Logic
in the
   transistors that are in our computers that will be running the
   algorithms."

 Still, even with my above comments, your conclusion is still
correct.


The main problem is that if you interpret spike timing to be playing
the role that you (and they) imply above, then you are commiting
yourself to a whole raft of assumptions about how knowledge is
generally represented and processed.  However, there are *huge*
problems with that set of implicit assumptions  not to put too
fine a point on it, those implicit assumptions are equivalent to the
worst, most backward kind of cognitive theory imaginable.  A theory
that is 30 or 40 years out of date.

 
OK, so how else do you explain that in fairly well understood situations 
like stretch receptors, that the rate indicates the stretch UNLESS you 
exceed the mechanical limit of the associated joint, whereupon you start 
getting pulse doublets, triplets, etc. Further, these pulse groups have 
a HUGE effect on post synaptic neurons. What does your cognitive science 
tell you about THAT?


See my parallel reply to Ben's point:  I was talking about the fact that 
neuroscientists make these claims about high level cognition;  I was not 
referring to the cases where they try to explain low-level, sensory and 
motor periphery functions like stretch receptor neurons.


So, to clarify:  yes, it is perfectly true that the very low level 
perceptual and motor systems use simple coding techniques.  We have 
known for decades (since Hubel and Wiesel) that retinal ganglion cells 
use simple coding schemes, etc etc.


But the issue I was discussing was about the times when neuroscientists 
make statements about high level concepts and the processing of those 
concepts.  Many decades ago people suggested that perhaps these concepts 
were represented by single neurons, but that idea was shot down very 
quickly, and over the years we have found such sophisticated information 
processing effects occurring in cognition that it is very difficult to 
see how single neurons (or multiple redundant sets of neurons) could 
carry out those functions.


This idea is so discredited that it is hard to find references on the 
subject:  it has been accepted for so long that it is common knowledge 
in the cognitive science community.




 


The gung-ho neuroscientists seem blissfully unaware of this fact
because  they do not know enough cognitive science. 

 
I stated a Ben's List challenge a while back that you apparently missed, 
so here it is again.
 
*You can ONLY learn how a system works by observation, to the extent 
that its operation is imperfect. Where it is perfect, it represents a 
solution to the environment in which it operates, and as such, could be 
built in countless different ways so long as it operates perfectly. 
Hence, computational delays, etc., are fair game, but observed cognition 
and behavior are NOT except to the extent that perfect cognition and 
behavior can be described, whereupon the difference between observed and 
theoretical contains the information about construction.*
** 
*A perfect example of this is superstitious learning, which on its 
surface appears to be an imperfection. However, we must use incomplete 
data to make imperfect predictions if we are to ever interact with our 
environment, so superstitious learning is theoretically unavoidable.*

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Ben Goertzel
> I stated a Ben's List challenge a while back that you apparently missed, so
> here it is again.
>
> You can ONLY learn how a system works by observation, to the extent that its
> operation is imperfect. Where it is perfect, it represents a solution to the
> environment in which it operates, and as such, could be built in countless
> different ways so long as it operates perfectly. Hence, computational
> delays, etc., are fair game, but observed cognition and behavior are NOT
> except to the extent that perfect cognition and behavior can be described,
> whereupon the difference between observed and theoretical contains the
> information about construction.

That seems mathematically wrong to me.  It seems to me that there are going to
be countless different ways in which any *error* could be produced, also...

ben g




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Steve Richfield
Richard,

On 11/20/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Steve Richfield wrote:
>
>> Richard,
>>  Broad agreement, with one comment from the end of your posting...
>>  On 11/20/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:
>>
>>Another, closely related thing that they do is talk about low level
>>issues without realizing just how disconnected those are from where
>>the real story (probably) lies.  Thus, Mohdra emphasizes the
>>importance of "spike timing" as opposed to average firing rate.
>>
>>  There are plenty of experiments that show that consecutive closely-spaced
>> pulses result when something goes "off scale", probably the equivalent to
>> computing Bayesian probabilities > 100%, somewhat akin to the "overflow"
>> light on early analog computers. These closely-spaced pulses have a MUCH
>> larger post-synaptic effect than the same number of regularly spaced pulses.
>> However, as far as I know, this only occurs during anomalous situations -
>> maybe when something really new happens, that might trigger learning?
>>  IMHO, it is simply not possible to play this game without having a close
>> friend with years of experience poking mammalian neurons. This stuff is
>> simply NOT in the literature.
>>
>>He may well be right that the pattern or the timing is more
>>important, but IMO he is doing the equivalent of saying "Let's talk
>>about the best way to design an algorithm to control an airport.
>> First problem to solve:  should we use Emitter-Coupled Logic in the
>>transistors that are in our computers that will be running the
>>algorithms."
>>
>>  Still, even with my above comments, your conclusion is still correct.
>>
>
> The main problem is that if you interpret spike timing to be playing the
> role that you (and they) imply above, then you are commiting yourself to a
> whole raft of assumptions about how knowledge is generally represented and
> processed.  However, there are *huge* problems with that set of implicit
> assumptions  not to put too fine a point on it, those implicit
> assumptions are equivalent to the worst, most backward kind of cognitive
> theory imaginable.  A theory that is 30 or 40 years out of date.


OK, so how else do you explain that in fairly well understood situations
like stretch receptors, that the rate indicates the stretch UNLESS you
exceed the mechanical limit of the associated joint, whereupon you start
getting pulse doublets, triplets, etc. Further, these pulse groups have a
HUGE effect on post synaptic neurons. What does your cognitive science tell
you about THAT?
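Taken at face value, the stretch-receptor description above amounts to a simple hybrid code, which can be sketched as a toy (one reading of the claim, not established physiology; the rates, limit, and burst sizes are invented):

```python
def stretch_receptor_output(stretch, limit=1.0):
    """Toy hybrid code: within the joint's mechanical range the firing
    rate tracks stretch; past the limit the rate saturates and spikes
    arrive in doublets/triplets instead, an 'overflow' signal.
    Returns (rate_hz, spikes_per_burst)."""
    if stretch <= limit:
        return (100.0 * stretch, 1)        # ordinary rate code
    overshoot = stretch - limit
    burst = 1 + min(3, int(overshoot * 10) + 1)  # doublets, then triplets
    return (100.0, burst)

print(stretch_receptor_output(0.5))   # (50.0, 1): rate encodes stretch
print(stretch_receptor_output(1.05))  # (100.0, 2): doublets past the limit
print(stretch_receptor_output(1.15))  # (100.0, 3): triplets further out
```

On this reading, rate coding and burst coding are not rivals but two regimes of one channel, which is a peripheral-coding question rather than a claim about high-level concepts.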



> The gung-ho neuroscientists seem blissfully unaware of this fact because
>  they do not know enough cognitive science.


I stated a Ben's List challenge a while back that you apparently missed, so
here it is again.

*You can ONLY learn how a system works by observation, to the extent that
its operation is imperfect. Where it is perfect, it represents a solution to
the environment in which it operates, and as such, could be built in
countless different ways so long as it operates perfectly. Hence,
computational delays, etc., are fair game, but observed cognition and
behavior are NOT except to the extent that perfect cognition and behavior
can be described, whereupon the difference between observed and theoretical
contains the information about construction.*
**
*A perfect example of this is superstitious learning, which on its
surface appears to be an imperfection. However, we must use incomplete data
to make imperfect predictions if we are to ever interact with our
environment, so superstitious learning is theoretically unavoidable. Trying
to compute what is "perfect" for superstitious learning is a pretty
challenging task, as it involves factors like the regularity of disastrous
events throughout evolution, etc.*

If anyone has successfully done this, I would be very interested. This is
because of my interest in central metabolic control issues, wherein
superstitious "red tagging" appears to be central to SO many age-related
conditions. Now, I am blindly assuming perfection in neural computation
and proceeding on that assumption. However, if I could recognize and
understand any imperfections (none are known), I might be able to save
(another) life or two along the way with that knowledge.

Anyway, this suggests that much of cognitive "science", which has NOT
computed this difference but rather is running with the "raw data" of
observation, is rather questionable at best. For reasons such as this, I
(perhaps prematurely and/or improperly) dismissed cognitive science rather
early on. Was I in error to do so?

Steve Richfield




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 5:14 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Lastly, I did not say that the neuroscientists picked old, broken theories
> AND that they could have picked a better, not-broken theory  I only said
> that they have gone back to old theories that are known to be broken.
>  Whether anyone has a good replacement yet is not relevant:  it does not
> alter the fact that they are using broken theories.  The neuron = concept
> 'theory' is extremely broken:  it is so broken, that when neuroscientists
> talk about bayesian contingencies being calculated or encoded by spike
> timing mechanisms, that claim is incoherent.
>

Well, you know I read that paper ;-)
"A theory that is 30 or 40 years out of date", you said -- which
suggested something that is up to date, hence the question.

The neural code can be studied from the areas where we know the
correlates. You could assign concepts to neurons and theorize about
their structure as dictated by the dynamics of the neural substrate.
They won't be word-level concepts, and you'd probably need to build
bigger abstractions on top, but there is no inherent problem with that.
Still, it's so murky even for simple correlates that no good overall
picture exists.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Ben Goertzel wrote:

> The neuron = concept 'theory' is extremely broken:  it is so broken,
> that when neuroscientists talk about bayesian contingencies being
> calculated or encoded by spike timing mechanisms, that claim is
> incoherent.


This is not always true ... in some cases there are solidly demonstrated
connections between neurally computed bayesian contingencies and
observed perceptual and motor phenomena in organisms...

I agree that no one knows how abstract concepts are represented in the brain,
but for sensorimotor stuff it is not the case that work on bayesian population
coding in the brain is "incoherent"


No contest:  it is valid there.

But I am only referring to the cases where neuroscientists imply that 
what they are talking about are higher level concepts.


This happens extremely frequently.



Richard Loosemore




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Ben Goertzel
 > The neuron = concept
> 'theory' is extremely broken:  it is so broken, that when neuroscientists
> talk about bayesian contingencies being calculated or encoded by spike
> timing mechanisms, that claim is incoherent.

This is not always true ... in some cases there are solidly demonstrated
connections between neurally computed bayesian contingencies and
observed perceptual and motor phenomena in organisms...

I agree that no one knows how abstract concepts are represented in the brain,
but for sensorimotor stuff it is not the case that work on bayesian population
coding in the brain is "incoherent"
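To make "bayesian population coding" concrete, here is a minimal sketch of the
standard textbook setup (all numbers, the Gaussian tuning curves, and the flat
prior are illustrative assumptions, not taken from any paper discussed in this
thread): a population of neurons emits Poisson spike counts, and Bayes' rule
recovers a posterior over the stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 20 neurons with Gaussian tuning curves over a
# 1-D stimulus (e.g. motion direction in degrees), Poisson spike counts.
n_neurons = 20
preferred = np.linspace(-90, 90, n_neurons)  # preferred stimuli (deg)
width, peak_rate, dt = 30.0, 50.0, 0.2       # tuning width, peak Hz, window (s)

def mean_counts(s):
    """Expected spike count of each neuron for stimulus s."""
    return peak_rate * dt * np.exp(-0.5 * ((s - preferred) / width) ** 2)

true_s = 12.0
counts = rng.poisson(mean_counts(true_s))    # one observed population response

# Bayesian decoding: log-posterior over candidate stimuli, assuming
# independent Poisson noise and a flat prior (constant terms dropped).
grid = np.linspace(-90, 90, 361)
log_post = np.array([np.sum(counts * np.log(mean_counts(s)) - mean_counts(s))
                     for s in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()
estimate = grid[np.argmax(post)]
print(f"true stimulus: {true_s}, decoded: {estimate:.1f}")
```

The point is only that "spike counts encode a Bayesian posterior" is a
coherent claim at this sensorimotor level; whether the same story scales to
abstract concepts is exactly what is in dispute in this thread.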

ben g




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Vladimir Nesov wrote:

Referencing your own work is obviously not what I was asking for.
Still, can you give something more substantial than "a neuron is not a
concept" as an example of a "cognitive theory"?


I don't understand your objection here:  I referenced my own work 
because I specifically described several answers to your question that 
were written down in that paper.  And I brought one of them out and 
summarized it for you.  Why would that be "obviously not what I was 
asking for"?  I am confused.


That paper was partly about my own theory, but partly about the general 
problem of neuroscience models making naive assumptions about cognitive 
theories in general.


And why do you say that you want something more substantial than "neuron 
is not a concept"?  That is an extremely serious issue.  Why do you 
dismiss it as insubstantial?


Lastly, I did not say that the neuroscientists picked old, broken 
theories AND that they could have picked a better, not-broken theory 
 I only said that they have gone back to old theories that are known 
to be broken.  Whether anyone has a good replacement yet is not 
relevant:  it does not alter the fact that they are using broken 
theories.  The neuron = concept 'theory' is extremely broken:  it is so 
broken, that when neuroscientists talk about bayesian contingencies 
being calculated or encoded by spike timing mechanisms, that claim is 
incoherent.


If you really insist on another example, take one of the other ones that 
I mentioned in the paper:  the naive identification of attentional 
limitations with a literal "bottleneck" in processing.


I may as well just quote you the entire passage that we wrote on the 
matter.  (There are no references to the basic facts about dual-task 
studies, it is true.  Is it really necessary for me to dig those up, or 
do you know them already?):


QUOTE from Loosemore & Harley---

Dux, Ivanoff, Asplund and Marois (2006) describe a study in which 
participants were asked to carry out two tasks that were too hard to 
perform simultaneously. In these circumstances, we would expect (from a 
wide range of previous cognitive psychological studies) that the tasks 
would be serially queued, and that this would show up in reaction time 
data. Some theories of this effect interpret it as a consequence of a 
modality-independent “central bottleneck” in task performance.
Dux et al. used time-resolved fMRI to show that activity in a particular 
brain area—the posterior lateral prefrontal cortex (pLPFC)—was 
consistent with the queuing behavior that would be expected if this 
place were the locus of the bottleneck responsible for the brain’s 
failure to execute the tasks simultaneously. They also showed that the 
strength of the response in the pLPFC seemed to be a function of the 
difficulty of one of the competing tasks, when, in a separate 
experiment, participants were required to do that task alone. The 
conclusion drawn by Dux et al. is that this brain imaging data tells us 
the location of the bottleneck: it’s in the pLPFC. So this study aspires 
to be Level 2, perhaps even Level 3: telling us the absolute location of 
an important psychological process, perhaps telling us how it relates to 
other psychological processes.
Rather than immediately address the question of whether the pLPFC really 
is the bottleneck, we would first like to ask whether such a thing as 
“the bottleneck” exists at all. Should the psychological theory of a 
bottleneck be taken so literally that we can start looking for it in the 
brain? And if we have doubts, could imaging data help us to decide that 
we are justified in taking the idea of a bottleneck literally?

What is a “Bottleneck”?
Let’s start with a simple interpretation of the bottleneck idea. We 
start with mainstream ideas about cognition, leaving aside our new 
framework for the moment. There are tasks to be done by the cognitive 
system, and each task is some kind of package of information that goes 
to a place in the system and gets itself executed. This leads to a clean 
theoretical picture: the task is a package moving around the system, and 
there is a particular place where it can be executed. As a general rule, 
the “place” has room for more than one package (perhaps), but only if 
the packages are small, or if the packages have been compiled to make 
them automatic. In this study, though, the packages (tasks) are so big 
that there is only room for one at a time.
The difference between this only-room-for-one-package idea and its main 
rival within conventional cognitive psychology is that the rival theory 
would allow multiple packages to be executed simultaneously, but with a 
slowdown in execution speed. Unfortunately for this rival theory, 
psychology experiments have indicated that no effort is initially 
expended on a task that arrives later, until the first task is 
completed. Hence, the bottleneck theory is accepted as the best 
description of what happens in dual-task situations.
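The serial-queuing behaviour described above can be sketched in a few lines
(stage durations are made-up illustrative values, not fitted to Dux et al.):
task 2's central stage must wait for task 1 to release the bottleneck, which
produces the classic pattern of RT2 falling one-for-one with stimulus-onset
asynchrony (SOA) and then flattening.

```python
# Toy central-bottleneck model of dual-task reaction times.
# Each task has a perceptual, central, and motor stage; only the
# central stage is serial.  Durations in ms are illustrative.
P, C, M = 100, 200, 50

def rt2(soa):
    """Reaction time to task 2, presented `soa` ms after task 1."""
    bottleneck_free = P + C                          # task 1 leaves the bottleneck
    start_central_2 = max(soa + P, bottleneck_free)  # task 2 may have to queue
    return start_central_2 + C + M - soa             # RT measured from task-2 onset

for soa in (0, 100, 200, 300, 500):
    print(soa, rt2(soa))   # → 550, 450, 350, 350, 350: slope -1, then flat
```

Whether any single brain area literally implements this queue is, of course,
the question being debated above.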

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
Referencing your own work is obviously not what I was asking for.
Still, can you give something more substantial than "a neuron is not a
concept" as an example of a "cognitive theory"?


On Fri, Nov 21, 2008 at 4:35 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Vladimir Nesov wrote:
>>
>> Could you give some references to be specific in what you mean?
>> Examples of what you consider outdated cognitive theory and better
>> cognitive theory.
>>
>
> Well, you could start with the question of what the neurons are supposed to
> represent, if the spikes are coding (e.g.) bayesian contingencies. Are the
> neurons the same as concepts/symbols?  Are groups of neurons redundantly
> coding for concepts/symbols?
>
> One or other of these possibilities is usually assumed by default, but this
> leads to glaring inconsistencies in the interpretation of neuroscience data,
> as well as begging all of the old questions about how "grandmother cells"
> are supposed to do their job.  As I said above, cognitive scientists already
> came to the conclusion, 30 or 40 years ago, that it made no sense to stick
> to a simple identification of one neuron per concept.  And yet many
> neuroscientists are *implicitly* resurrecting this broken idea, without
> addressing the faults that were previously found in it.  (In case you are
> not familiar with the faults, they include the vulnerability of neurons, the
> lack of connectivity between arbitrary neurons, the problem of assigning
> neurons to concepts, the encoding of variables, relationships and negative
> facts .. ).
>
> For example, in Loosemore & Harley (in press) you can find an analysis of a
> paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
> try to claim they have evidence in favor of grandmother neurons (or sparse
> collections of grandmother neurons) and against the idea of distributed
> representations.
>
> We showed their conclusion to be incoherent.  It was deeply implausible,
> given the empirical data they reported.
>
> Furthermore, we used my molecular framework (the same one that was outlined
> in the consciousness paper) to see how that would explain the same data.  It
> turns out that this much more sophisticated model was very consistent with
> the data (indeed, it is the only one I know of that can explain the results
> they got).
>
> You can find our paper at www.susaro.com/publications.
>
>
>
> Richard Loosemore
>
>
> Loosemore, R.P.W. & Harley, T.A. (in press). Brains and Minds:  On the
> Usefulness of Localisation Data to Cognitive Psychology. In M. Bunzl & S.J.
> Hanson (Eds.), Foundations of Functional Neuroimaging. Cambridge, MA: MIT
> Press.
>
> Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C. & Fried, I. (2005).
> Invariant visual representation by single-neurons in the human brain.
> Nature, 435, 1102-1107.
>



-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Trent Waddington wrote:

On Fri, Nov 21, 2008 at 11:02 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Since such luminaries as Jerry Fodor have said much the same thing, I think
I stand in fairly solid company.


Wow, you said Fodor without being critical of his work.  Is that legal?

Trent


Arrrggghhh... you noticed!  :-(

I was hoping nobody would catch me out on that one.

Okay, so Fodor and I disagree about everything else.

But that's not the point :-).  He's a Heavy, so if he is on my side on 
this one issue, its okay to quote him.  (That's my story and I'm 
sticking to it.)






Richard Loosemore





Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Vladimir Nesov wrote:

On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

The main problem is that if you interpret spike timing to be playing the
role that you (and they) imply above, then you are committing yourself to a
whole raft of assumptions about how knowledge is generally represented and
processed.  However, there are *huge* problems with that set of implicit
assumptions  not to put too fine a point on it, those implicit
assumptions are equivalent to the worst, most backward kind of cognitive
theory imaginable.  A theory that is 30 or 40 years out of date.



Could you give some references to be specific in what you mean?
Examples of what you consider outdated cognitive theory and better
cognitive theory.



Well, you could start with the question of what the neurons are supposed 
to represent, if the spikes are coding (e.g.) bayesian contingencies. 
Are the neurons the same as concepts/symbols?  Are groups of neurons 
redundantly coding for concepts/symbols?


One or other of these possibilities is usually assumed by default, but 
this leads to glaring inconsistencies in the interpretation of 
neuroscience data, as well as begging all of the old questions about how 
"grandmother cells" are supposed to do their job.  As I said above, 
cognitive scientists already came to the conclusion, 30 or 40 years ago, 
that it made no sense to stick to a simple identification of one neuron 
per concept.  And yet many neuroscientists are *implicitly* resurrecting 
this broken idea, without addressing the faults that were previously 
found in it.  (In case you are not familiar with the faults, they 
include the vulnerability of neurons, the lack of connectivity between 
arbitrary neurons, the problem of assigning neurons to concepts, the 
encoding of variables, relationships and negative facts .. ).
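One of those faults — capacity and vulnerability — can be made concrete with a
back-of-the-envelope sketch (the numbers are hypothetical, purely to
illustrate the combinatorial point):

```python
from math import comb

n = 100  # neurons in a (hypothetical) population

# Strict "grandmother cell" code: one neuron per concept.
# Capacity is n, and losing a single neuron erases a concept outright.
localist_capacity = n

# Sparse distributed code: a concept is a set of k co-active neurons.
# Capacity is C(n, k), and losing one neuron merely degrades the k/n
# fraction of patterns that contained it (each keeps k - 1 active units).
k = 5
distributed_capacity = comb(n, k)

print(localist_capacity, distributed_capacity)  # → 100 75287520
```

This is only the textbook capacity/robustness argument; the other faults
listed above (variables, relationships, negative facts) are separate problems.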


For example, in Loosemore & Harley (in press) you can find an analysis 
of a paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which 
the latter try to claim they have evidence in favor of grandmother 
neurons (or sparse collections of grandmother neurons) and against the 
idea of distributed representations.


We showed their conclusion to be incoherent.  It was deeply implausible, 
given the empirical data they reported.


Furthermore, we used my molecular framework (the same one that was 
outlined in the consciousness paper) to see how that would explain the 
same data.  It turns out that this much more sophisticated model was 
very consistent with the data (indeed, it is the only one I know of that 
can explain the results they got).


You can find our paper at www.susaro.com/publications.



Richard Loosemore


Loosemore, R.P.W. & Harley, T.A. (in press). Brains and Minds:  On the 
Usefulness of Localisation Data to Cognitive Psychology. In M. Bunzl & 
S.J. Hanson (Eds.), Foundations of Functional Neuroimaging. Cambridge, 
MA: MIT Press.


Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C. & Fried, I. (2005). 
Invariant visual representation by single-neurons in the human brain. 
Nature, 435, 1102-1107.






Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Trent Waddington
On Fri, Nov 21, 2008 at 11:02 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Since such luminaries as Jerry Fodor have said much the same thing, I think
> I stand in fairly solid company.

Wow, you said Fodor without being critical of his work.  Is that legal?

Trent




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,


The main problem is that if you interpret spike timing to be playing the
role that you (and they) imply above, then you are committing yourself to a
whole raft of assumptions about how knowledge is generally represented and
processed.  However, there are *huge* problems with that set of implicit
assumptions  not to put too fine a point on it, those implicit
assumptions are equivalent to the worst, most backward kind of cognitive
theory imaginable.  A theory that is 30 or 40 years out of date.

The gung-ho neuroscientists seem blissfully unaware of this fact because
 they do not know enough cognitive science.

Richard Loosemore



I don't think this is the reason.  There are plenty of neuroscientists
out there
who know plenty of cognitive science.

I think many neuroscientists just hold different theoretical
presuppositions than
you, for reasons other than ignorance of cog sci data.

Interdisciplinary cog sci has been around a long time now as you know ... it's
not as though cognitive neuroscientists are unaware of its data and ideas...


I disagree.

Trevor Harley wrote one very influential paper on the subject, and he 
and I wrote a second paper in which we took a random sampling of 
neuroscience papers and analyzed them carefully.  We found it trivially 
easy to gather data to illustrate our point.  And, no, even though I 
used my own framework as a point of reference, this was not crucial to 
the argument, merely a way of bringing the argument into sharp focus.


So I am basing my conclusion on gathering actual evidence and publishing 
a paper about it.


Since such luminaries as Jerry Fodor have said much the same thing, I 
think I stand in fairly solid company.





Richard Loosemore




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> The main problem is that if you interpret spike timing to be playing the
> role that you (and they) imply above, then you are committing yourself to a
> whole raft of assumptions about how knowledge is generally represented and
> processed.  However, there are *huge* problems with that set of implicit
> assumptions  not to put too fine a point on it, those implicit
> assumptions are equivalent to the worst, most backward kind of cognitive
> theory imaginable.  A theory that is 30 or 40 years out of date.
>

Could you give some references to be specific in what you mean?
Examples of what you consider outdated cognitive theory and better
cognitive theory.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 2:03 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
>> The main problem is that if you interpret spike timing to be playing the
>> role that you (and they) imply above, then you are committing yourself to a
>> whole raft of assumptions about how knowledge is generally represented and
>> processed.  However, there are *huge* problems with that set of implicit
>> assumptions  not to put too fine a point on it, those implicit
>> assumptions are equivalent to the worst, most backward kind of cognitive
>> theory imaginable.  A theory that is 30 or 40 years out of date.
>>
>
> Could you give some references to be specific in what you mean?
> Examples of what you consider outdated cognitive theory and better
> cognitive theory.
>

(Assuming you didn't mean mainstream cognitive science generally, as a
framework from which to look at the problem.)

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Steve Richfield wrote:

Richard,
 
Broad agreement, with one comment from the end of your posting...
 
On 11/20/08, *Richard Loosemore* <[EMAIL PROTECTED] 
> wrote:


Another, closely related thing that they do is talk about low level
issues without realizing just how disconnected those are from where
the real story (probably) lies.  Thus, Modha emphasizes the
importance of "spike timing" as opposed to average firing rate.

 
There are plenty of experiments that show that consecutive 
closely-spaced pulses result when something goes "off scale", probably 
the equivalent to computing Bayesian probabilities > 100%, somewhat akin 
to the "overflow" light on early analog computers. These closely-spaced 
pulses have a MUCH larger post-synaptic effect than the same number of 
regularly spaced pulses. However, as far as I know, this only occurs 
during anomalous situations - maybe when something really new happens, 
that might trigger learning?
 
IMHO, it is simply not possible to play this game without having a close 
friend with years of experience poking mammalian neurons. This stuff is 
simply NOT in the literature.


He may well be right that the pattern or the timing is more
important, but IMO he is doing the equivalent of saying "Let's talk
about the best way to design an algorithm to control an airport.
 First problem to solve:  should we use Emitter-Coupled Logic in the
transistors that are in our computers that will be running the
algorithms."

 
Still, even with my above comments, your conclusion is still correct.


The main problem is that if you interpret spike timing to be playing 
the role that you (and they) imply above, then you are committing 
yourself to a whole raft of assumptions about how knowledge is generally 
represented and processed.  However, there are *huge* problems with that 
set of implicit assumptions  not to put too fine a point on it, 
those implicit assumptions are equivalent to the worst, most backward 
kind of cognitive theory imaginable.  A theory that is 30 or 40 years 
out of date.


The gung-ho neuroscientists seem blissfully unaware of this fact because 
 they do not know enough cognitive science.




Richard Loosemore




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Ben Goertzel
Richard,

> The main problem is that if you interpret spike timing to be playing the
> role that you (and they) imply above, then you are committing yourself to a
> whole raft of assumptions about how knowledge is generally represented and
> processed.  However, there are *huge* problems with that set of implicit
> assumptions  not to put too fine a point on it, those implicit
> assumptions are equivalent to the worst, most backward kind of cognitive
> theory imaginable.  A theory that is 30 or 40 years out of date.
>
> The gung-ho neuroscientists seem blissfully unaware of this fact because
>  they do not know enough cognitive science.
>
> Richard Loosemore


I don't think this is the reason.  There are plenty of neuroscientists
out there
who know plenty of cognitive science.

I think many neuroscientists just hold different theoretical
presuppositions than
you, for reasons other than ignorance of cog sci data.

Interdisciplinary cog sci has been around a long time now as you know ... it's
not as though cognitive neuroscientists are unaware of its data and ideas...

-- Ben G




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Steve Richfield
Richard,

Broad agreement, with one comment from the end of your posting...

On 11/20/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Another, closely related thing that they do is talk about low level issues
> without realizing just how disconnected those are from where the real story
> (probably) lies.  Thus, Modha emphasizes the importance of "spike timing"
> as opposed to average firing rate.


There are plenty of experiments that show that consecutive closely-spaced
pulses result when something goes "off scale", probably the equivalent to
computing Bayesian probabilities > 100%, somewhat akin to the "overflow"
light on early analog computers. These closely-spaced pulses have a MUCH
larger post-synaptic effect than the same number of regularly spaced pulses.
However, as far as I know, this only occurs during anomalous situations -
maybe when something really new happens, that might trigger learning?
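The distinction between average firing rate and spike timing can be made
concrete with a toy example (synthetic spike times and an assumed burst
threshold, not real data): two trains with identical mean rate, only one of
which contains the kind of closely-spaced burst described above.

```python
# Two synthetic spike trains (times in ms) with the SAME average rate.
regular = [0, 100, 200, 300, 400]   # evenly spaced spikes
bursty  = [0, 5, 10, 15, 400]       # a 4-spike burst, then silence

def mean_rate(spikes, window_ms=400):
    """Average firing rate in Hz over the window."""
    return len(spikes) / (window_ms / 1000)

def has_burst(spikes, max_isi_ms=20):
    """True if any inter-spike interval is at or below the threshold."""
    return any(b - a <= max_isi_ms for a, b in zip(spikes, spikes[1:]))

print(mean_rate(regular), mean_rate(bursty))  # → 12.5 12.5
print(has_burst(regular), has_burst(bursty))  # → False True
```

A pure rate code cannot distinguish the two trains; any mechanism sensitive to
the burst (such as the larger post-synaptic effect mentioned above) can.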

IMHO, it is simply not possible to play this game without having a close
friend with years of experience poking mammalian neurons. This stuff is
simply NOT in the literature.

He may well be right that the pattern or the timing is more important, but
> IMO he is doing the equivalent of saying "Let's talk about the best way to
> design an algorithm to control an airport.  First problem to solve:  should
> we use Emitter-Coupled Logic in the transistors that are in our computers
> that will be running the algorithms."


Still, even with my above comments, your conclusion is still correct.

Steve Richfield





Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Pei Wang wrote:

Derek,

I have no doubt that their proposal contains interesting ideas and
will produce interesting and valuable results --- most AI projects do,
though the results and the values are often not what they targeted (or
they claimed to be targeting) initially.

"Biologically inspired approaches" are attractive, partly because they
have existing proof for the mechanism to work. However, we need to
remember that "inspired" by a working solution is one thing, and to
treat that solution as the best way to achieve a goal is another.
Furthermore, the difficult part in these approaches is to separate the
aspect of the biological mechanism/process that should be duplicated
from the aspects that shouldn't.


I share your concerns about this project, although I might have a 
slightly different set of reasons for being doubtful.


I watched part of one of the workshops that Modha chaired, on Cognitive 
Computing, and it gave me the same feeling that neuroscience gatherings 
always give me:  a lot of talk about neural hardware, punctuated by 
sudden, out-of-the-blue statements about "cognitive" ideas that seem 
completely unrelated to the ocean of neural talk that comes before and 
after.


There is a *depressingly* long history of people doing this - and not 
just in neuroscience, but in many branches of engineering, in physics, 
in computer science, etc.  There are people out there who know that the 
mind is the new frontier, and they want to be in the party.  They also 
know that the cognitive scientists (in the broad sense) are probably the 
folks who are at the center of the party (in the sense of having most 
comprehensive knowledge).  So these people do what they do best, but add 
in a sprinkling of technical terms and (to be fair) some actual 
knowledge of some chunks of cognitive science.


Problem is, that to a cognitive scientist what they are doing is 
amateurish.


Another, closely related thing that they do is talk about low level 
issues without realizing just how disconnected those are from where the 
real story (probably) lies.  Thus, Modha emphasizes the importance of 
"spike timing" as opposed to average firing rate.  He may well be right 
that the pattern or the timing is more important, but IMO he is doing 
the equivalent of saying "Let's talk about the best way to design an 
algorithm to control an airport.  First problem to solve:  should we use 
Emitter-Coupled Logic in the transistors that are in our computers that 
will be running the algorithms."






Richard Loosemore





Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Pei Wang
Derek,

I have no doubt that their proposal contains interesting ideas and
will produce interesting and valuable results --- most AI projects do,
though the results and the values are often not what they targeted (or
they claimed to be targeting) initially.

"Biologically inspired approaches" are attractive, partly because they
have existing proof for the mechanism to work. However, we need to
remember that "inspired" by a working solution is one thing, and to
treat that solution as the best way to achieve a goal is another.
Furthermore, the difficult part in these approaches is to separate the
aspect of the biological mechanism/process that should be duplicated
from the aspects that shouldn't.

Yes, maybe I should market NARS as a theory of the brain, just a very
high-level one. ;-)

Pei

On Thu, Nov 20, 2008 at 10:06 AM, Derek Zahn <[EMAIL PROTECTED]> wrote:
> Pei Wang:
>
>> --- I have problems with each of these assumptions and beliefs, though
>> I don't think anyone can convince someone who just got a big grant
>> that they are moving in a wrong direction. ;-)
>
> With his other posts about the Singularity Summit and his invention of the
> word "Synaptronics", Modha certainly seems to be a kindred spirit to many on
> this list.
>
> I think what he's trying to do with this project (to the extent I understand
> it) seems like a reasonably promising approach (not really to AGI as such,
> but experimenting with soft computing substrates is kind of a cool
> enterprise to me).  Let a thousand flowers bloom.
>
> However, when he says things on his blog like "In my opinion, there are
> three reasons why the time is now ripe to begin to draw inspiration from
> structure, dynamics, function, and behavior of the brain for developing
> novel computing architectures and cognitive systems." -- I despair again.
>
> Dr. Wang, if you want to get some funding maybe you should start promoting
> NARS as a theory of the brain :)
>




RE: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Derek Zahn

Pei Wang:
> --- I have problems with each of these assumptions and beliefs, though
> I don't think anyone can convince someone who just got a big grant
> that they are moving in a wrong direction. ;-)
With his other posts about the Singularity Summit and his invention of the word 
"Synaptronics", Modha certainly seems to be a kindred spirit to many on this 
list.
 
I think what he's trying to do with this project (to the extent I understand 
it) seems like a reasonably promising approach (not really to AGI as such, but 
experimenting with soft computing substrates is kind of a cool enterprise to 
me).  Let a thousand flowers bloom.
 
However, when he says things on his blog like "In my opinion, there are three 
reasons why the time is now ripe to begin to draw inspiration from structure, 
dynamics, function, and behavior of the brain for developing novel computing 
architectures and cognitive systems." -- I despair again.
 
Dr. Wang, if you want to get some funding maybe you should start promoting NARS 
as a theory of the brain :)
 




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Pei Wang
The basic assumptions behind the project, from the webpage of its team
lead at http://www.modha.org/ :

"The mind arises from the wetware of the brain. Thus, it would seem
that reverse engineering the computational function of the brain is
perhaps the cheapest and quickest way to engineer computers that mimic
the robustness and versatility of the mind.

"Cognitive computing, seeks to engineer holistic intelligent machines
that neatly tie together all of the pieces. Cognitive computing seeks
to uncover the core micro and macro circuits of the brain underlying a
wide variety of abilities. So, it aims to proceeds in algorithm-first,
problems-later fashion.

"I believe that spiking computation is a key to achieving this vision."

--- I have problems with each of these assumptions and beliefs, though
I don't think anyone can convince someone who just got a big grant
that they are moving in a wrong direction. ;-)

Pei

On Thu, Nov 20, 2008 at 8:29 AM, Rafael C.P. <[EMAIL PROTECTED]> wrote:
> http://bits.blogs.nytimes.com/2008/11/20/hunting-for-a-brainy-computer/
>
> ===[ Rafael C.P. ]===

