Ed Porter said:
You imply you have been able to accomplish a somewhat
similar implicit representation of bindings in a much higher
dimensional and presumably large semantic space.  Unfortunately I was
unable to understand from your description how you claimed to have
accomplished this. 
 
---------------------------------
I never implied that I have been able to accomplish a somewhat similar implicit 
representation of bindings in a much higher dimensional and presumably large 
semantic space.

I clearly stated:

"I have often talked about the
use of multi-level complex methods and I see some similarity to the
ideas that they discussed to my ideas."
-and,
"The complex groupings of
objects that I have in mind would have been derived using different
methods of analysis and combination and when a group of them is called
from an input analysis their use should tend to narrow the objects that
might be expected given the detection by the feature detectors.
Although I haven't expressed myself very clearly, this is very similar
to what Riesenhuber and Poggio were suggesting that their methods would
be capable of. So, yes,I think some similar methods can be used in NLP."

I clearly used the expression "in mind" just to avoid the kind of  
misunderstanding that you made. I never made the exaggerated "claim" that I had 
accomplished it.


The difference between having an idea "in mind" and having "claimed to have 
accomplished" a goal (a goal that most participants in the group would 
acknowledge is elusive) should be obvious and easy to understand.


I am not claiming that I have a method that would work across all of semantic 
space.  I would be happy to claim that I do have a theory which I believe 
should show some limited extensibility in semantic space beyond other current 
theories.  However, I will not know for sure until I test it, and right now 
that looks like it would be years off.


I would be happy to continue the dialog if it can be conducted in a less
confrontational and more genial manner than it has been during the past week.

Jim Bromer




Jim,
 
In the Riesenhuber and Poggio paper, the bindings that were handled
implicitly involved spatial relationships, such as an observed roughly
horizontal line substantially touching an observed roughly vertical
line at their respective ends, even though there might be other
horizontal and vertical lines not having this relationship in the input
pixel space.  The model achieves such implicit bindings by having enough
separate models to detect, by direct mapping, such a touching
relationship between a horizontal and a vertical line at each
of many different locations in the visual input space.
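 
To make that mechanism concrete, here is a rough Python sketch of the kind of
thing I mean (my own toy illustration, not code from the paper): a conjunction
detector is replicated at every location of a small grid and fires only where a
horizontal-line feature and a vertical-line feature co-occur, so the pairing of
a particular horizontal with a particular vertical is bound implicitly by
shared position rather than by any explicit link.

import numpy as np

# Toy illustration of implicit binding by position-specific conjunction
# detectors (my sketch, not Riesenhuber and Poggio's model).  Assume two
# feature maps over the same pixel grid: h_map[y, x] = 1 where a roughly
# horizontal segment ends, v_map[y, x] = 1 where a roughly vertical segment
# begins.  One "corner" detector is replicated at every location, so which
# horizontal goes with which vertical is never represented explicitly; it is
# implied by the detector's position.

def corner_responses(h_map, v_map):
    """Map that is 1 wherever a horizontal end and a vertical end coincide."""
    assert h_map.shape == v_map.shape
    out = np.zeros_like(h_map)
    height, width = h_map.shape
    for y in range(height):
        for x in range(width):
            # One replicated detector per location: fires only when both
            # features are present at (roughly) the same place.
            out[y, x] = 1 if (h_map[y, x] and v_map[y, x]) else 0
    return out

# A horizontal end and a vertical end coincide at (2, 3); an unrelated
# vertical at (7, 1) does not create a spurious binding.
h = np.zeros((10, 10)); h[2, 3] = 1
v = np.zeros((10, 10)); v[2, 3] = 1; v[7, 1] = 1
print(np.argwhere(corner_responses(h, v)))   # -> [[2 3]]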
 
But the Poggio paper deals with a relatively small number of relationships
in a relatively small (160x160), low-dimensional (2-D) space using 23
million models.  You imply you have been able to accomplish a somewhat
similar implicit representation of bindings in a much higher
dimensional and presumably large semantic space.  Unfortunately I was
unable to understand from your description how you claimed to have
accomplished this.
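 
To put rough numbers on why I ask (my own back-of-the-envelope arithmetic; the
only figures taken from above are the 160x160 grid and the 23 million models,
and the semantic-space numbers are made up purely for illustration):

# Rough scaling arithmetic, not from the paper: if the detector count scales
# roughly as (#locations) x (#detectors per location), then moving the same
# trick into a high-dimensional semantic space multiplies the location term
# enormously.
locations_2d = 160 * 160            # pixel grid cited above
models_cited = 23_000_000           # model count cited above
per_location = models_cited / locations_2d
print(per_location)                 # roughly 900 detectors per location in 2-D

# Hypothetical semantic space with n values along each of d dimensions:
n, d = 20, 6                        # made-up numbers purely for illustration
print(per_location * n ** d)        # roughly 5.7e10 detectors at the same budget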
 
Could you please clarify your description with regard to this point?
 
Ed Porter
 
-----Original Message-----
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 1:38 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE 
BINDING PROBLEM"?
 
I started reading a Riesenhuber and Poggio paper and there are some
similarities to ideas that I have considered, although my ideas were
explicitly developed for computer programs that would use symbolic
information and are not neural theories.  It is interesting that
Riesenhuber and Poggio argued that "the binding problem seems to be a
problem for only some models of object recognition."  In other words,
it seems that they are claiming that the problem disappears with their
model of neural cognition! 

The study of feature detectors in cats' eyes is old news, and I did
incorporate that information into the development of my own theories.

I have often talked about the use of multi-level complex methods, and I see
some similarity between the ideas that they discussed and my own ideas.  In my
model an input would be scanned for different features using different kinds
of analysis on the input.  A configuration of simple features would then be
derived from the scan, and these could be associated with a number of complex
groups of objects that have been previously associated with the features.
Because the complex groups of objects are complexes (in the general sense),
and would be learned by previous experience, they are not insipidly modeled on
one standard.  These complex objects are complex in that they are not all cut
from one standard.

The older implementations that used set-theoretic operations on groups were
based on object models that were very old-world and were not derived from
learning.  For example, they were non-experiential.  (I cannot remember the
term that I am looking for, but experiential is the anthropomorphic term.)
All of the groupings in old models that looked for intersections were of a few
predefined kinds, and most significantly they did not recognize that
ideologically incommensurable references could affect meaning (or effect) even
if the references were strongly associated and functionally related.

The complex groupings of objects that I have in mind would have been derived
using different methods of analysis and combination, and when a group of them
is called up by an input analysis, their use should tend to narrow the objects
that might be expected given the detections by the feature detectors.
Although I haven't expressed myself very clearly, this is very similar
to what Riesenhuber and Poggio were suggesting that their methods would
be capable of. So, yes, I think some similar methods can be used in NLP.
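
To make the narrowing step concrete, here is a toy Python sketch.  Everything
in it (the names, the groups, and the plain set intersection) is made up for
illustration; in particular the intersection is only a stand-in for the
richer, learned methods of combination I have in mind, since I just criticized
the old purely set-theoretic groupings.

from collections import defaultdict

# Toy sketch of the retrieval-and-narrowing step described above.  All data
# and names are invented for illustration.
learned_groups = {
    # Each "complex group" is a set of candidate objects that past experience
    # has associated with one or more simple features.
    "group_edges_A": {"cup", "bowl", "hat"},
    "group_texture_B": {"cup", "sponge"},
    "group_shape_C": {"cup", "bowl"},
}

# Learned index from simple features to the complex groups they evoke.
feature_to_groups = defaultdict(set, {
    "curved_edge": {"group_edges_A", "group_shape_C"},
    "matte_texture": {"group_texture_B"},
})

def narrow_candidates(detected_features):
    """Call up the groups evoked by the detected features and combine them
    (here by plain intersection) to narrow the objects that might be expected."""
    evoked = set()
    for feature in detected_features:
        evoked |= feature_to_groups[feature]
    candidates = None
    for group in evoked:
        objs = learned_groups[group]
        candidates = objs if candidates is None else candidates & objs
    return candidates or set()

print(narrow_candidates(["curved_edge", "matte_texture"]))   # -> {'cup'}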

However, my model also includes the recognition that comparing apples
and oranges is not always straightforward.  This gives you an idea of
what I mean by ideologically incommensurable associations. If I were to
give some examples, a reasonable person might simply assume that the
problems illustrated by the examples could easily be resolved with more
information, and that is true.  But the point that I am making is that
this view of ideologically incommensurable references can be helpful in
the analysis of the kinds of problems that can be expected from more
ambitious AI models.

Jim Bromer


      

