Hmmm... well, although I learned mathematics via perceiving books and
spoken words and so forth, I really don't think this perspective is a
very useful one for understanding how my mind proves theorems.
I'm aware of Lakoff and Nunez's arguments that abstract math is
grounded in concrete physical perceptions and actions, and I agree
this is true to an extent -- but I also think they overstate the case
a bit. Cognition originated in perception and action but also has
some unique properties.
Visual perception is particularly hierarchical, and I think Hawkins
has modeled it interestingly, but I don't fully buy the visual cortex
as a paradigm case for the structure and dynamics of cortex in
general...
As for the "prediction" paradigm, it is true that any aspect of mental
activity can be modeled as a prediction problem, but it doesn't follow
that this is always the most useful perspective. And different kinds
of prediction may be very different in terms of the underlying
structures and dynamics they require.  Predicting which action
sequence is likely to yield a goal is quite different from predicting
which percept is likely to appear next as the eye moves...  The
structures and dynamics useful for visual prediction are not
necessarily going to be useful for predicting action consequences,
theorem-proving trajectories, etc.
So, Hawkins' "memory prediction framework," if taken generally, is hard
to argue against, but it is also just a restatement of a lot of familiar
ideas.  On the other hand, his specific implementation of the memory
prediction framework is very visual-cortex-bound, IMO
-- Ben G
On 6/15/06, arnoud <[EMAIL PROTECTED]> wrote:
Hi,
I think/suspect that Hawkins' theory is that every (useful) concept, no matter
how abstract, is rooted in spatiotemporal pattern recognition, and that
therefore there is no real distinction between spatiotemporal pattern
recognition and cognition.
In his theory every concept is a prediction that the next sequence of sense
data input will fall in a certain class/category. And what that class is is
formulated as a sequence of concepts one level lower (concept activations),
or as a family of similar sequences of such lower-level concepts.
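That idea can be made concrete with a toy sketch (purely my own illustration,
not Numenta's actual HTM algorithm; the concept names and sequences are made
up): a node stores, per concept, a family of known sequences of lower-level
concept activations, and "applying" a concept amounts to predicting that the
incoming sequence continues one of those stored sequences.

```python
# Toy illustration, NOT Numenta's HTM implementation: a concept at one
# node is a family of sequences of lower-level concept activations, and
# recognizing/applying it is a prediction about how the input continues.

class Node:
    def __init__(self):
        # concept name -> set of known sequences (tuples of lower-level
        # concept names)
        self.concepts = {}

    def learn(self, name, sequence):
        self.concepts.setdefault(name, set()).add(tuple(sequence))

    def recognize(self, observed):
        """Concepts whose known sequences start with `observed`."""
        observed = tuple(observed)
        return sorted(
            name for name, seqs in self.concepts.items()
            if any(s[:len(observed)] == observed for s in seqs)
        )

    def predict(self, name, observed):
        """Predicted next lower-level activations if `name` applies."""
        observed = tuple(observed)
        return sorted(
            s[len(observed)] for s in self.concepts.get(name, ())
            if s[:len(observed)] == observed and len(s) > len(observed)
        )

node = Node()
node.learn("dog", ["fur", "bark"])
node.learn("dog", ["fur", "wag"])
node.learn("cat", ["fur", "meow"])

print(node.recognize(["fur"]))       # ['cat', 'dog']
print(node.predict("dog", ["fur"]))  # ['bark', 'wag']
```

The point of the sketch is only that recognition and prediction coincide
here: a concept that never constrains what comes next would never be
recognized, echoing the "concepts are predictions" reading of Hawkins.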
For me, thinking about concepts (abstract knowledge) in a purely causal way --
judging the usefulness of a concept purely on the basis of its predictive power
('If this concept is applicable in this situation, what does it tell me about
what will happen next?') -- helps me to get ideas about how automatic concept
formation could take place. Otherwise, I'm completely in the dark.
But is it a good move to think about cognition as only prediction? Or is there
more to cognition? All the problems we have to solve have the form: I want to
achieve situation S1 and I'm now in situation S0; will action/plan A get me
from S0 to S1? On all time scales and levels of abstraction this is the form
of the problems we have to solve. Concepts that do not help in solving those
problems are not knowledge.
From the general form of problems you can see that all we want from concepts
is predictions (that certain actions in certain contexts will lead to certain
results). Even very abstract concepts still have to give predictive power,
probably on large time scales; on short time scales we want/need detailed
predictions and therefore need concrete concepts.
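A minimal sketch of that problem form (my own illustration; the states,
actions, and transition model are hypothetical): if concepts amount to learned
predictions of the form "this action in this context leads to this result,"
then checking a plan is just chaining those predictions from S0 and seeing
whether we land in S1.

```python
# Toy illustration with made-up states/actions: a plan A is judged
# purely by the predictions a learned transition model makes, i.e.
# "will action/plan A get me from S0 to S1?"

# Learned predictions: (state, action) -> predicted next state
model = {
    ("home", "walk"): "station",
    ("station", "board_train"): "city",
    ("city", "walk"): "office",
}

def plan_reaches(model, s0, plan, s1):
    """Chain predictions along `plan`; succeed iff we end in `s1`."""
    state = s0
    for action in plan:
        if (state, action) not in model:
            return False  # no prediction available: the plan is unusable
        state = model[(state, action)]
    return state == s1

print(plan_reaches(model, "home", ["walk", "board_train", "walk"], "office"))  # True
print(plan_reaches(model, "home", ["walk", "walk"], "office"))                 # False
```

Note that a concept with no entry in the model contributes nothing to plan
evaluation, which is the sense in which "concepts that do not help in solving
those problems are not knowledge."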
All predictions eventually will have to be verified at the level of
perception. Because, what is it that is predicted? That a (next) sequence of
perceptions (formulated/represented at some level of abstraction) will fall
into a certain category. And therefore it is plausible that all cognition is
spatiotemporal pattern recognition.
If this doesn't seem to be the case, that is because some concepts are so
abstract that they don't seem to be tied to perception anymore. It is obvious
that they are (directly) tied to more concrete concepts (being
defined/described in terms of them), but those concepts can also still be very
abstract. And so abstract concepts can seem to depend only on other abstract
concepts, and together lead a life of their own, not tied to/determined by
perception/sensation. However, if you could trace all the dependencies of any
concept, you would end up at the perception level.
Arnoud
On Thursday 15 June 2006 14:20,
Ben Goertzel wrote:
> Hi,
>
> I have read Hawkins' paper carefully and I enjoyed it.
>
> As for the generality of applicability of HTM, here is my opinion..
>
> The specific manifestation of hierarchical pattern recognition that
> Hawkins describes is only applicable to spatiotemporal pattern
> recognition, and involves some unique ideas Hawkins has introduced...
>
> But, of course, the general concept of hierarchical pattern
> recognition is more broadly applicable and Hawkins was far from the
> first to introduce it ...
>
> And, IMO, Hawkins' architecture does not shed much insight on how to
> make hierarchical pattern recognition work in a *cognition* context...
>
> ben
>
> On 6/15/06, Anneke Siemons <[EMAIL PROTECTED]> wrote:
> > > On Jun 14, 2006, at 1:00 PM, arnoud wrote:
> > >> In Hawkins' architecture (HTM) there is an abstraction mechanism,
> > >> that can
> > >> filter out (irrelevant) details from input data and discover the
> > >> invariants
> > >> (=the abstractions) in temporal sequences of input data.
> > >
> > > This is easy and obvious (which is why it is very common), but filtering on
> > > the front-end is not really a generalizable solution because it
> > > presumes knowledge of the environment it is learning from. An
> > > example of a fully generalizable filter would be an entropy garbage
> > > collector -- ex post facto and domain adaptive by nature. Lossy in
> > > ways not immediately predictable, but that should not be too much of
> > > a problem for a decent representation system.
> >
> > I think there are two points at which you misunderstand the HTM
> > architecture. First, the filtering out of details doesn't only happen
> > at the front end: the HTM is a hierarchical architecture where
> > filtering/abstraction takes place at each node. Second, what to filter
> > out is not explicitly programmed into the system; it has to figure out
> > by itself which details are irrelevant and which patterns are essential
> > (invariants).
> >
> > To learn more about HTM you could read this white paper:
> > http://www.numenta.com/Numenta_HTM_Concepts.pdf
> >
> > Arnoud
> >
> >
> > -------
> > To unsubscribe, change your address, or temporarily deactivate your
> > subscription, please go to
> > http://v2.listbox.com/member/[EMAIL PROTECTED]
>