Edward W. Porter wrote:
[snip]
There is a very interesting paper at
http://www.ics.uci.edu/~granger/RHGenginesJ1s.pdf that I have referred
to before on this list. It states that the cortico-thalamic feedback loop
functions to serialize the brain's activated feature set, so as to
broadcast the currently activated features to other areas of the brain
in what is in effect a serial grammar, and that associations are learned
across the multiple time delays between the concepts sequentially
broadcast in such statements, which I presume would operate at a gamma
wave frequency of about 30 to 40 concept broadcasts a second. So it
might be possible that learning could operate with the time delays
necessary for correlated activations of nodes A and B to be detected
through multi-hop connections. It is clear that short-term (and even
long-term) memory lets us detect correlations that are not within a
50th of a second of each other.
Edward
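
[For concreteness, here is one way the mechanism described in the quoted
paragraph could be sketched in code. This is purely an illustration of the
idea of learning associations across the delays between sequentially
broadcast concepts, not Granger's actual model; the broadcast rate, decay
constant, learning rate, and function names are all hypothetical.]

```python
# Hypothetical sketch: associating concepts broadcast one per gamma cycle
# (~25-33 ms apart), using a decaying eligibility trace so that nodes
# activated several broadcasts apart can still become associated.
# Illustration only -- not Granger's model; all parameters are assumptions.

from collections import defaultdict

GAMMA_HZ = 35                 # assumed broadcast rate (~30-40 concepts/s)
STEP_MS = 1000.0 / GAMMA_HZ   # roughly 29 ms per broadcast
TRACE_DECAY = 0.6             # how quickly an earlier concept's trace fades
LEARNING_RATE = 0.1

def learn_associations(broadcast_sequence):
    """Strengthen links between each broadcast concept and the decaying
    traces of concepts broadcast in the preceding few gamma cycles."""
    weights = defaultdict(float)   # (earlier, later) -> association strength
    traces = defaultdict(float)    # concept -> current eligibility trace

    for concept in broadcast_sequence:
        # Associate the current concept with every still-active trace.
        for earlier, trace in traces.items():
            if earlier != concept:
                weights[(earlier, concept)] += LEARNING_RATE * trace
        # Decay all traces by one gamma cycle, then refresh the current one.
        for key in list(traces):
            traces[key] *= TRACE_DECAY
        traces[concept] = 1.0
    return weights

if __name__ == "__main__":
    # 'A' and 'B' are broadcast two cycles (~57 ms) apart, yet still
    # acquire an association because A's trace has not fully decayed.
    seq = ["A", "X", "B", "A", "Y", "B"]
    w = learn_associations(seq)
    print(f"one broadcast every {STEP_MS:.0f} ms")
    print("A->B strength:", round(w[("A", "B")], 3))
```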
If I were you, I would not get too excited about this paper, nor others
of this sort (see, e.g. Granger's other general brain-engineering paper
at http://www.dartmouth.edu/~rhg/pubs/RHGai50.pdf).
This kind of research comes pretty close to something that deserves to
be called "bogus neuroscience" -- very dense publication, full of
neuroanatomic detail, with occasional assertions that a particular
circuit or brain structure corresponds to a cognitive function. Only
problem: the statements about neuroanatomy are at the [Experienced
Researcher] level, while the statements about cognitive functions are at
the [First Year Psychology Student Who Took One Class In Cog Psy And
Thinks They Know Everything] level.
The statements about cognitive functions are embarrassing in their naivete.
Apart from anything else, no recognition whatsoever is given to the issues
that crop up when you assume a system works simply by building feature
recognizers. How does it cope with the instance/generic
distinction? How does it allow top-down processes to operate in the
recognition process? How are relationships between instances encoded?
How are relationships abstracted? How does position-independent
recognition occur? What about the main issue that usually devastates
any behaviorist-type proposal: patterns to be associated with other
patterns are first extracted from the input by some (invisible,
unacknowledged) preprocessor, but when the nature of this preprocessor
is examined carefully, it turns out that its job is far, far more
intelligent than the supposed association engine to which it delivers
its goods?
To be sure, this guy Granger may have answers (good, convincing answers
backed up by experiments and simulations) to all of these questions and
problems. In that case, he would be streets ahead of everyone else and
destined to save the world.
But if you look at his papers, he shows no sign that he is even aware
that these issues exist. For every 1,000 words of neuroscience, there
are two sentences of cognitive function assertions. And they are just
that: assertions. If this kind of stuff were submitted as a student
essay in a Cognitive Psychology course, it would come back with "WHY???"
written next to each of the cognitive function statements.
If he had actually built a complete simulation of his theory, and if
that simulation actually took raw input, discovered hierarchies of
concepts, handled multiple instances without missing a beat, finessed
all the other issues, and did all of this without inserting a
preprocessor that cheated by getting the programmer to do all the
important work, I'd be the first to eat my words.
But he hasn't. And neither has Stephen Grossberg. And neither has John
Taylor. And neither has Christof Koch.
If you want to read a thorough analysis of several other examples of
this kind of spurious neuroscience, let me know and I will happily send
a pre-release copy of a paper I recently finished:
Loosemore, R.P.W. & Harley, T.A. "Brains and Minds: On the Usefulness
of Localisation Data to Cognitive Psychology". To appear in M.Bunzl &
S.J.Hanson (Eds.), Philosophical Foundations of fMRI. Cambridge, MA: MIT
Press.
Richard Loosemore