Early-stage vision involves the detection of primitive types of geometry:
edges, lines of different orientations, blobs, corners, colours, and motion
in different directions.  These seem to arise from simple self-organisation
due to the physical properties of neurons and the architecture of receptive
fields as they interact with a normal range of sensory input during the
early stages of life.  Local neural connectivity at the early stages of
vision appears to form into quasicrystalline patterns, although whether this
applies more generally throughout the cortex is unknown.  These early-stage
geometric primitives are likely the vocabulary upon which all visual
recognition is constructed.
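As a toy illustration of such a geometric primitive, an oriented-edge detector can be written as a small convolution kernel. The Sobel-like kernels below are hand-coded stand-ins for what, in cortex, would self-organise; the image and values are invented for demonstration only.

```python
# Toy sketch: oriented-edge "receptive fields" as 3x3 convolution kernels.
# Hand-coded (Sobel-like) kernels stand in for self-organised filters.

VERTICAL = [[-1, 0, 1],
            [-2, 0, 2],
            [-1, 0, 1]]

HORIZONTAL = [[-1, -2, -1],
              [ 0,  0,  0],
              [ 1,  2,  1]]

def convolve(image, kernel):
    """Apply a 3x3 kernel at every interior pixel of a 2D grid."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[dy][dx] * image[y + dy - 1][x + dx - 1]
                            for dy in range(3) for dx in range(3))
    return out

# A vertical light/dark boundary: the vertical-edge kernel responds
# strongly (4 at interior pixels), the horizontal one not at all (0).
image = [[0, 0, 1, 1]] * 4
v = convolve(image, VERTICAL)
h = convolve(image, HORIZONTAL)
```

A bank of such filters at different orientations gives exactly the kind of edge/line vocabulary described above.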



On 13/03/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:

On 3/13/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote:
> When your AGI sees "A" for the first time(s) in Helvetica and learns
> rules to recognize Helvetica A, then it only has rules for Helvetica
> A. As opposed to having rules for A in general and rules for A in
> Helvetica.
>
> When Times Roman Italic A comes along, how would this AGI recognize it
> using its Helvetica A rules?


That's an interesting point.  I think the AGI learns its first A by
extracting the most "salient" features (saliency is something I have yet to
study).  When the AGI sees a second A in a different font, the match would
be less than 100%, but *probabilistic* matching would still give a positive
result.  That's why I stress that the logic has to be
probabilistic/fuzzy/uncertain.
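A minimal sketch of that partial-matching idea, with invented feature names and weights standing in for whatever a real saliency mechanism would extract: a concept is a weighted feature set, and a new instance is accepted when enough of the concept's feature mass is present.

```python
# Sketch of probabilistic feature matching.  Feature names and weights
# are illustrative assumptions, not from any real vision system.

def match_score(concept, instance):
    """Fraction of the concept's feature weight present in the instance."""
    total = sum(concept.values())
    hit = sum(w for f, w in concept.items() if f in instance)
    return hit / total

# "A" learned from a Helvetica sample; weights stand in for saliency.
letter_a = {"apex": 3, "two_diagonals": 3, "crossbar": 3, "upright": 1}

# Times Roman Italic A: slanted (not upright), but the salient features hold.
times_italic_a = {"apex", "two_diagonals", "crossbar", "serifs", "slant"}

# Something B-like: shares the crossbar and uprightness, little else.
letter_b_like = {"crossbar", "upright", "two_bowls"}

print(match_score(letter_a, times_italic_a))  # 0.9 -- above threshold
print(match_score(letter_a, letter_b_like))   # 0.4 -- below threshold
```

The match is never exactly 100% across fonts, but a threshold (say 0.5) still yields a positive result for the italic A and a negative one for the B.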

> I'm not saying your rules approach could not work. Only that I don't
> see how it does. In my lack of understanding, I'm assuming we want an AGI
> that can learn on its own--no explicit rule coding should be required
> for these types of feats. Also, you mention a variety of As as the
> initial training set, but I believe a human could train on nothing but
> Helvetica and still recognize Times Roman Italic afterwards (with the
> expected slowdown).

The automatic learning of rules would be done by a class of algorithms
historically known as "inductive logic programming" (ILP) -- a fancy term
for the inductive learning of logic rules.  It is unfortunate that rather
few people know of this area, though it is actually one of the earliest
forms of machine learning, predating backprop and the like.  Many AI
textbooks have a chapter on logic-based learning, but it is drowned out by
newer, numerical learning methods.  (There is, though, much ongoing
research in logic-based learning.)
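To make the flavour of rule induction concrete, here is a minimal sketch in the ILP spirit: generalise a conjunctive rule by intersecting the attribute-value pairs shared by all positive examples, then check that the rule still rejects the negatives. The bird/penguin data is invented for illustration; real ILP systems search a far richer space of first-order clauses.

```python
# Minimal sketch of inductive learning of a logic rule (ILP spirit):
# generalise from positives by intersection, validate against negatives.

def induce_rule(positives, negatives):
    """Return the attribute-value pairs shared by all positive examples,
    or None if that conjunction also covers a negative example."""
    rule = dict(positives[0])
    for ex in positives[1:]:
        rule = {a: v for a, v in rule.items() if ex.get(a) == v}
    for ex in negatives:
        if all(ex.get(a) == v for a, v in rule.items()):
            return None  # over-generalised: no consistent conjunction
    return rule

positives = [
    {"has_wings": True, "flies": True, "colour": "brown"},
    {"has_wings": True, "flies": True, "colour": "black"},
]
negatives = [
    {"has_wings": True, "flies": False, "colour": "black"},  # penguin
]

rule = induce_rule(positives, negatives)
print(rule)  # {'has_wings': True, 'flies': True} -- colour dropped
```

The colour condition is generalised away because the positives disagree on it, while "flies" is retained because dropping it would admit the penguin.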

The "one-shot" learning problem is kind of hard; I guess it requires a
saliency algorithm.

> I would think the purpose of GA-based algorithms would be to explore
> potential computations either for increasing efficiency or recognizing
> previously unrecognized patterns. Ben and others can refine and
> correct my statement, if they like.

I think GA/EA can *speed up* certain learning processes, including logic
learning.  The problem with Ben's current approach is that he mixes GA/EA
into the basic system.  What I advocate is a basic representation that is
logic-based and rule-based, with GA/EA used to speed up that system
*later*.
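One way GA/EA can accelerate rule search over a logic-based representation, sketched with entirely invented details (this is not Ben's actual approach, and the conditions, data, and GA parameters are arbitrary): each genome is a bitmask selecting which candidate conditions to conjoin, and fitness is classification accuracy on labelled examples.

```python
# Sketch: a GA searching the space of conjunctive logic rules.
# Conditions, examples, and GA parameters are illustrative only.
import random

CONDITIONS = ["has_wings", "flies", "has_beak", "swims"]

def covers(genome, example):
    """Does the conjunction of selected conditions hold for the example?"""
    return all(example.get(c, False) for c, on in zip(CONDITIONS, genome) if on)

def fitness(genome, data):
    """Classification accuracy of the rule on labelled examples."""
    return sum(covers(genome, ex) == label for ex, label in data) / len(data)

def evolve(data, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in CONDITIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, data), reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for g in survivors:
            child = g[:]
            child[rng.randrange(len(child))] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda g: fitness(g, data))

data = [
    ({"has_wings": True, "flies": True, "has_beak": True}, True),
    ({"has_wings": True, "flies": False, "has_beak": True,
      "swims": True}, False),                      # penguin
    ({"flies": True}, False),                      # flies, but no wings/beak
]
best = evolve(data)
```

Here the rule *representation* stays logical (a conjunction of conditions); the GA is only a search strategy over it, which would typically converge on a perfect rule such as has_wings AND flies.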

YKY
------------------------------
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

