Josh> The other idea in OI worth noting is Mountcastle's Principle,
Josh> that all of the cortex seems to be doing the same thing. Hawkins
Josh> gets credit for pointing it out, but of course it was a
Josh> published observation of Mountcastle in the first place. My AI
Josh> architecture is influenced by the observation, although it's not
Josh> quite as useful as it might seem. (E.g. it would be satisfied by
Josh> saying each column is a general-purpose processor.)

A year or so ago, Hawkins came to Princeton to speak and I met him.
Shortly thereafter I sent him an email (which he never bothered to
answer) that addresses the above Mountcastle observation (as well as
some other stuff). I'll reproduce it below:

Jeff--

Concept learning has been extensively studied by the
CS community. The gist of the result is: you get
generalization (aka "invariant representations") by
extracting a compact representation consistent with the
data, and finding such a representation is NP-hard.
Your picture assumes this NP-hard problem is finessed,
without saying how.

Look at it this way:
exploiting structure is basically about having the right algorithm--
e.g. the right abstraction hierarchy.
You agree the hierarchical network in Visual Cortex (for example,
the Van Essen diagram) is wired in genetically.
This makes sense for a lot of reasons, but one is that
evolution had available vastly more computational
resources than Visual Cortex has in real time, so it could do a
much better job of solving hard computational problems.
And there's no reason to believe some fine structure isn't
programmed in too; things are "uniform" only in a gross way.

You're basically asserting that what's wired in is unimportant,
but this neglects the computational learning theory,
which indicates that it matters a great deal.
Your main reason for optimism is that cortex has a roughly
uniform architecture, at a certain level of focus. But if 
you set out to build a cognitive supercomputer tomorrow, 
you'd probably build something like a huge MIMD machine
with lots of nodes, and it would have very regular architecture.
It takes a certain size unit to support general computation 
(e.g. a MIMD node), and it makes sense to replicate them,
especially if you want to run radically different algorithms
on problems you don't even know about at hardware design time.
That doesn't mean it would all run "the same algorithm". In fact,
since it would take lots of computation to figure out what algorithms
it should run, it would rewrite its algorithms as it went, and
not uniformly. It would start with the best algorithms you (or
evolution) could design for this process (loaded into memory
registers, not visible at the level of focus at which the
hardware looked "uniform"), which would already be specialized
to the structure of the world, and use those to update its
algorithms as it saw more data, possibly in a very complex way.
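
A crude sketch of what I mean, in toy code (my framing, not a
claim about real cortex): the nodes are physically identical, but
each is loaded with, and can overwrite, its own program.

  class Node:
      # Identical "hardware" for every node; the program lives in
      # local memory, invisible at the level where nodes look
      # uniform.
      def __init__(self, program):
          self.program = program

      def step(self, data):
          # Each program returns a result plus its own successor,
          # so a node can rewrite its algorithm as data comes in.
          result, self.program = self.program(data)
          return result

  def edge_detect(data):
      return [b - a for a, b in zip(data, data[1:])], edge_detect

  def adapt(data):
      mean = sum(data) / len(data)
      def scaled(d, m=mean):        # self-modification: the node
          return [x / m for x in d], scaled   # specializes itself
      return mean, scaled

  nodes = [Node(edge_detect), Node(adapt)]   # uniform hardware,
  for node in nodes:                         # non-uniform programs
      print(node.step([1.0, 2.0, 4.0]))
      print(node.step([1.0, 2.0, 4.0]))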

It's highly natural from both a computational (as above) and an
evolutionary/developmental perspective for cortex to
be uniform at a certain level of focus. 
That's the way development works: it finds gene
circuits to build something useful, then replicates them,
but adds variations. Your arms and your legs are sketched by
the same genetic circuitry that sketches all animal segments, 
but then modified by later genetic tweaks, such as fingers
and toes (themselves iterated segments). Why not the same 
for visual and auditory cortex?
What would prevent the evolution of programmed 
specializations in cortical regions?

The learning theory tells us the computational problems
are basically not soluble in real time, but might be soluble
if the right specializations were provided.
The really hard part of learning is getting started,
e.g. discovering the initial inductive biases, and evolution had been 
working on that problem for many aeons before cortex was even 
invented. If inductive biases were discovered and programmed in,
you won't find them easy to shortcut.
You said yourself that your approach would be hopeless if you didn't
start with a topographic map. But why should a topographic map
be the only bias built in? In my view, both Computer Science
and Biology predict there will be critical and much more sophisticated
biases built in that are not so immediately apparent. And
there will be different ones for various aspects of cognition.
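
To put a number on the kind of saving even that one bias buys
(toy arithmetic of mine, not anything from your book):

  # With n inputs and n outputs, unconstrained wiring allows n*n
  # candidate connections; a topographic map with a local window
  # of width w allows only about n*w of them.
  n, w = 10_000, 9
  dense = n * n          # any unit may connect to any unit
  topo = n * w           # each unit sees only its w neighbors
  print(dense, topo, dense // topo)   # -> 100000000 90000 1111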
 
This doesn't necessarily mean there's nothing to your ideas. As I
read your book, what I saw was a theory of how the
brain operates, not a theory of how it gets there or how it learns.
For example, your book had no theory I understood of how a column
decides what it is classifying, e.g. what dimensions are used to
compute invariant representations. I think you even acknowledged
this was a problem yet to be solved. Your talk seemed to indicate
you've begun to develop one since the book. I'm skeptical about
that, but as a theory of brain operation (the local structure of
the communications, the nature of the communications in the
finished product, etc.), there could be something quite
interesting in your theory.

It might seem from the above that I think machine cognition is
impossible, since it took the massive computational resources 
of evolution to achieve it. Indeed, that was my original, if
reluctant, conclusion. But I've been thinking hard about the
problem, and I'm now hopeful I've found a way around it.

--Eric 
