Vladimir Nesov wrote:
Referencing your own work is obviously not what I was asking for.
Still, something more substantial than "neuron is not a concept", as
an example of "cognitive theory"?
I don't understand your objection here: I referenced my own work
because I specifically described several answers to your question that
were written down in that paper. And I brought one of them out and
summarized it for you. Why would that be "obviously not what I was
asking for"? I am confused.
That paper was partly about my own theory, but partly about the general
problem of neuroscience models making naive assumptions about cognitive
theories in general.
And why do you say that you want something more substantial than "neuron
is not a concept" .... that is an extremely serious issue. Why do you
dismiss it as insubstantial?
Lastly, I did not say that the neuroscientists picked old, broken
theories AND that they could have picked a better, not-broken theory
.... I only said that they have gone back to old theories that are known
to be broken. Whether anyone has a good replacement yet is not
relevant: it does not alter the fact that they are using broken
theories. The neuron = concept 'theory' is extremely broken: it is so
broken, that when neuroscientists talk about Bayesian contingencies
being calculated or encoded by spike timing mechanisms, that claim is
incoherent.
If you really insist on another example, take one of the other ones that
I mentioned in the paper: the naive identification of attentional
limitations with a literal "bottleneck" in processing.
I may as well just quote you the entire passage that we wrote on the
matter. (There are no references to the basic facts about dual-task
studies, it is true. Is it really necessary for me to dig those up, or
do you know them already?):
QUOTE from Loosemore & Harley-------------------
Dux, Ivanoff, Asplund and Marois (2006) describe a study in which
participants were asked to carry out two tasks that were too hard to
perform simultaneously. In these circumstances, we would expect (from a
wide range of previous cognitive psychological studies) that the tasks
would be serially queued, and that this would show up in reaction time
data. Some theories of this effect interpret it as a consequence of a
modality-independent “central bottleneck” in task performance.
Dux et al. used time-resolved fMRI to show that activity in a particular
brain area—the posterior lateral prefrontal cortex (pLPFC)—was
consistent with the queuing behavior that would be expected if this
place were the locus of the bottleneck responsible for the brain’s
failure to execute the tasks simultaneously. They also showed that the
strength of the response in the pLPFC seemed to be a function of the
difficulty of one of the competing tasks, when, in a separate
experiment, participants were required to do that task alone. The
conclusion drawn by Dux et al. is that this brain imaging data tells us
the location of the bottleneck: it’s in the pLPFC. So this study aspires
to be Level 2, perhaps even Level 3: telling us the absolute location of
an important psychological process, perhaps telling us how it relates to
other psychological processes.
Rather than immediately address the question of whether the pLPFC really
is the bottleneck, we would first like to ask whether such a thing as
“the bottleneck” exists at all. Should the psychological theory of a
bottleneck be taken so literally that we can start looking for it in the
brain? And if we have doubts, could imaging data help us to decide that
we are justified in taking the idea of a bottleneck literally?
What is a “Bottleneck”?
Let’s start with a simple interpretation of the bottleneck idea. We
start with mainstream ideas about cognition, leaving aside our new
framework for the moment. There are tasks to be done by the cognitive
system, and each task is some kind of package of information that goes
to a place in the system and gets itself executed. This leads to a clean
theoretical picture: the task is a package moving around the system, and
there is a particular place where it can be executed. As a general rule,
the “place” has room for more than one package (perhaps), but only if
the packages are small, or if the packages have been compiled to make
them automatic. In this study, though, the packages (tasks) are so big
that there is only room for one at a time.
The difference between this only-room-for-one-package idea and its main
rival within conventional cognitive psychology is that the rival theory
would allow multiple packages to be executed simultaneously, but with a
slowdown in execution speed. Unfortunately for this rival theory,
psychology experiments have indicated that no effort is initially
expended on a task that arrives later, until the first task is
completed. Hence, the bottleneck theory is accepted as the best
description of what happens in dual-task studies.
Theory as Metaphor
This pattern of theorizing—first a candidate mechanism, then a rival
mechanism that is noticeably different, then some experiments to tell us
which is better—is the bread and butter of cognitive science. However,
it is one thing to decide between two candidate mechanisms that are
sketched in the vaguest of terms (with just enough specificity to allow
the two candidates to be distinguished), and quite another to make a
categorical statement about the precise nature of the mechanism. To be blunt, very
few cognitive psychologists would intend the idea of packages drifting
through a system and encountering places where there is only room for
one, to be taken that literally.
On a scale from “metaphor” at one end to “mechanism blueprint” at the
other, the idea of a bottleneck is surely nearer to the metaphor end.
How many cognitive theorists would say that they are trying to pin down
the mechanisms of cognition so precisely that every one of the
subsidiary assumptions involved in a theory is supposed to be taken
exactly as they come? In the case of the bottleneck theory, for
instance, the task packages look suspiciously like symbols being
processed by a symbol system, in old-fashioned symbolic-cognition style:
but does that mean that connectionist implementations are being
explicitly ruled out by the theory? Does the theory buy into all of the
explicit representation issues involved in symbol processing, where the
semantics of a task package is entirely contained within the package
itself, rather than distributed in the surrounding machinery? These and
many other questions are begged by the idea of task packages moving
around a system and encountering a bottleneck, but would theorists who
align themselves with the bottleneck theory want to say that all of
these other aspects must be taken literally?
We think not. In fact, it seems more reasonable to suppose that the
present state of cognitive psychology involves the search for
metaphor-like ideas that are described as if they were true mechanisms,
but which should not be taken literally by anyone, and especially not by
anyone with a brain imaging device who wants to locate those mechanisms
in the brain.
ENDQUOTE-----------------
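As an aside, the behavioral signature that separates the serial-queuing (bottleneck) account from the capacity-sharing rival can be made concrete with a toy simulation. This is only an illustrative sketch: the function name and all stage durations are hypothetical values I have made up for the example, not parameters fitted to Dux et al.'s or anyone else's data.

```python
# Toy sketch of the "central bottleneck" account of dual-task costs.
# Each task is modeled as three serial stages: perceptual (pre),
# central, and response (post). Only the central stage is assumed to
# queue: task 2's central stage cannot start until task 1's is done.
# All durations are in ms and are illustrative, not empirical.

def rt2_bottleneck(soa, pre=100, central1=300, central2=300, post=100):
    """Reaction time to the second task, measured from its own onset,
    when its central stage must queue behind task 1's central stage.
    soa = stimulus onset asynchrony between task 1 and task 2."""
    t1_central_done = pre + central1                 # bottleneck frees up here
    t2_central_start = max(soa + pre, t1_central_done)
    return t2_central_start + central2 + post - soa  # relative to task 2 onset

# RT2 falls with a slope of -1 as SOA grows, then plateaus at
# pre + central2 + post once task 1 has cleared the bottleneck --
# the classic psychological-refractory-period pattern. A pure
# capacity-sharing model would instead predict graded slowing of
# both tasks at every SOA, with no hard plateau.
for soa in (0, 100, 200, 300, 400, 500):
    print(soa, rt2_bottleneck(soa))
```

The point of the sketch is only that the two rival stories make different reaction-time predictions, which is why the dual-task experiments could decide between them at all, while leaving every implementation detail (symbols vs. connectionist machinery, where the queue "is") completely open.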
Richard Loosemore
On Fri, Nov 21, 2008 at 4:35 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Vladimir Nesov wrote:
Could you give some references to be specific in what you mean?
Examples of what you consider outdated cognitive theory and better
cognitive theory.
Well, you could start with the question of what the neurons are supposed to
represent, if the spikes are coding (e.g.) Bayesian contingencies. Are the
neurons the same as concepts/symbols? Are groups of neurons redundantly
coding for concepts/symbols?
One or other of these possibilities is usually assumed by default, but this
leads to glaring inconsistencies in the interpretation of neuroscience data,
as well as begging all of the old questions about how "grandmother cells"
are supposed to do their job. As I said above, cognitive scientists already
came to the conclusion, 30 or 40 years ago, that it made no sense to stick
to a simple identification of one neuron per concept. And yet many
neuroscientists are *implicitly* resurrecting this broken idea, without
addressing the faults that were previously found in it. (In case you are
not familiar with the faults, they include the vulnerability of neurons, the
lack of connectivity between arbitrary neurons, the problem of assigning
neurons to concepts, the encoding of variables, relationships and negative
facts ...... ).
For example, in Loosemore & Harley (in press) you can find an analysis of a
paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
try to claim they have evidence in favor of grandmother neurons (or sparse
collections of grandmother neurons) and against the idea of distributed
representations.
We showed their conclusion to be incoherent. It was deeply implausible,
given the empirical data they reported.
Furthermore, we used my molecular framework (the same one that was outlined
in the consciousness paper) to see how that would explain the same data. It
turns out that this much more sophisticated model was very consistent with
the data (indeed, it is the only one I know of that can explain the results
they got).
You can find our paper at www.susaro.com/publications.
Richard Loosemore
Loosemore, R.P.W. & Harley, T.A. (in press). Brains and Minds: On the
Usefulness of Localisation Data to Cognitive Psychology. In M. Bunzl & S.J.
Hanson (Eds.), Foundations of Functional Neuroimaging. Cambridge, MA: MIT
Press.
Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C. & Fried, I. (2005).
Invariant visual representation by single neurons in the human brain.
Nature, 435, 1102-1107.