I'm not saying that the n-space approach wouldn't work, but I have used that
approach before and faced a problem.  It was because of that problem that I
switched to a logic-based approach.  Maybe you can solve it.

To illustrate with an example, let's say the AGI can recognize apples,
bananas, tables, chairs, the face of Einstein, etc., in the n-dimensional
feature space.  Einstein's face is then defined by a hypervolume in which
each point is an instance of Einstein's face; and you can get a caricature
of Einstein by going near the fringes of this hypervolume.  So far so good.

Now suppose you want to say: the apple is *on* the table, the banana is *on*
the chair, etc.  In logical form it would be on(table,apple), etc.  There
can be infinitely many such statements.

The problem is that this thing, "on", is not definable in n-space via
operations like AND, OR, NOT, etc.  It seems that "on" is not definable by
*any* hypersurface, so it cannot be learned by classifiers like feedforward
neural networks or SVMs.  You can define "apple on table" in n-space, which
is the set of all configurations of apples on tables; but there is no way to
define "X is on Y" as a hypervolume, and thus to make it learnable.
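To make the contrast concrete, here is a minimal sketch in Python.  The
toy features (x, y, height) and the thresholds are invented for
illustration only: the grounded statement "apple on table" is a fixed
decision region over one concatenated feature vector, whereas the relation
on(X,Y) is a function over *pairs of objects* and so needs variable
binding, which no single hypersurface over a fixed vector provides.

```python
# Toy world: each object is a point (x, y); supports also have a height.
# All features and thresholds below are made up for illustration.

def apple_on_table(apple_xy, table_xy, table_height=1.0):
    """Grounded instance: a fixed decision region over the 4-dimensional
    vector (ax, ay, tx, ty).  A feedforward classifier or SVM could in
    principle learn this region from labeled configurations."""
    ax, ay = apple_xy
    tx, ty = table_xy
    # "resting on top": horizontally aligned, vertically adjacent
    return abs(ax - tx) < 0.5 and abs(ay - (ty + table_height)) < 0.1

def on(obj_a, obj_b):
    """Abstract relation: the same test applied to *any* pair of objects
    (banana/chair, cup/shelf, ...).  It takes arguments, so it is a
    function over pairs, not a region in one fixed feature space --
    the slots are bound at call time, not wired into the vector."""
    return (abs(obj_a["x"] - obj_b["x"]) < 0.5
            and abs(obj_a["y"] - (obj_b["y"] + obj_b["height"])) < 0.1)
```

The point of the sketch: apple_on_table is one hypervolume, so it is
learnable; on() is a schema that generates a different hypervolume for
every choice of argument slots, and that schema itself is what the
classifier has no way to represent.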

This problem extends to other predicates besides on(x,y).

YY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email