YKY (Yan King Yin) wrote:
> On 3/12/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > "Natural concepts" in the mind are ones for which inductively learned
> > feature-combination-based classifiers and logical classifiers give
> > roughly the same answers...
> 1. The feature-combination-based classifiers CAN be encoded in
> probabilistic logical form.
Of course -- just as the Fundamental Theorem of Calculus and the Riesz
Representation Theorem
can be encoded in formal logic (see mizar.org).
But that doesn't mean this is the most useful representation for
practical cognition...
> 2. Inductive learning CAN be performed on such representations.
Yeah, I know, but in practice it's a very inefficient way to do
supervised categorization.
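To make the contrast concrete, here is a toy sketch (all data, features, and thresholds are invented for illustration) of the two classifier styles from the quote above giving roughly the same answers on a "natural concept":

```python
# Toy illustration (data and thresholds are made up): the same concept,
# "tall and heavy", expressed in two representational styles.

samples = [
    {"height": 190, "weight": 95},   # in the concept
    {"height": 150, "weight": 50},   # out
    {"height": 185, "weight": 90},   # in
    {"height": 160, "weight": 55},   # out
]

# 1. Logical classifier: an explicit conjunctive rule.
def logical_classifier(x):
    return x["height"] > 175 and x["weight"] > 80

# 2. Feature-combination classifier: a linear threshold unit whose
#    weights could be learned inductively (here they are set by hand).
def feature_classifier(x):
    score = 0.01 * x["height"] + 0.02 * x["weight"]
    return score > 3.3

# A "natural concept", in the sense quoted above, is one on which the
# two classifiers roughly agree.
for x in samples:
    print(logical_classifier(x), feature_classifier(x))
```

The second classifier CAN of course be re-encoded as probabilistic logic; the point at issue is that the standard inductive learners operate on the numeric form directly.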
> So it is possible to have a *unified* representation.
Yes, of course it is. This was first convincingly shown by Russell and
Whitehead, I suppose, way
back when....
The question is whether this is PRAGMATICALLY USEFUL, not whether it is
possible...
> > > 3. Give an example of a task where logical inference is inefficient? ;)
> >
> > Recognizing and classifying visual objects.
> >
> > Learning complex motor procedures, such as serving a tennis ball ... or
> > walking with complex legs like human ones, for that matter...
> >
> > Assignment of credit: figuring out which knowledge-items and processes
> > within a mind were responsible for which achievements, to what extent...
> >
> > Supervised categorization, in general. (Hence logical reasoning does
> > not feature prominently in the vast literature on sup. cat., machine
> > learning, etc.)
> 1. Recognizing and classifying visual objects -- I have thought
> extensively about how to do this in the logical setting (though the
> lowest-level representation is neural). I don't see why your
> approach ("tiny programs") would lead to a speed-up when the number of
> possible objects becomes very large -- we face the same problems
> there. The /rete/ algorithm can handle that efficiently.
> 2. Learning complex procedures -- I'm not very good in this area, but I
> think procedures can be represented in logic too.
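For concreteness, here is a minimal sketch of the condition-sharing idea behind /rete/ being appealed to in point 1. The rules and facts are hypothetical, and a real rete network also handles variable joins and incremental updates; this shows only the core trick of testing each distinct condition once rather than once per rule:

```python
# Minimal sketch of rete-style condition sharing (hypothetical rules;
# not any real rete implementation).  Each rule fires when all of its
# conditions hold.

rules = {
    "mortal":     ["human"],
    "can_reason": ["human", "awake"],
    "can_vote":   ["human", "adult"],
}

def match(facts, rules):
    # Alpha-network analogue: evaluate each distinct condition exactly
    # once, even though "human" appears in three different rules.
    conditions = {c for conds in rules.values() for c in conds}
    passed = {c for c in conditions if c in facts}
    # Fire every rule all of whose conditions passed.
    return sorted(name for name, conds in rules.items()
                  if all(c in passed for c in conds))

print(match({"human", "awake"}, rules))   # → ['can_reason', 'mortal']
```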
YES -- anything can be represented in logic. The question is whether
this is a useful representational style, in the sense that it matches
up with effective learning algorithms!!! In some domains it is, in
others not.
> 3. Assignment of credit -- perhaps this can be dealt with in the
> truth-maintenance system, which keeps track of inference chains.
> 4. Supervised categorization -- I know that inductive logic learning
> is not very popular here, but it would allow us to use a *unified*
> representation, which is the key thing here.
> Notice that if we have a unified representation, we can later work out
> more efficient algorithms within that representation, e.g. heuristics.
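For concreteness, the chain-tracking idea in point 3 might be sketched like this (a toy justification store, not any real truth-maintenance implementation; all names are invented):

```python
# Toy justification store: each derived belief records the premises it
# came from, so credit (or blame) for a conclusion can be traced back
# through the inference chain to base premises.

justifications = {}   # conclusion -> list of premises it was derived from

def derive(conclusion, premises):
    justifications[conclusion] = list(premises)

def credit(belief):
    # Walk the inference chain back to base premises.
    premises = justifications.get(belief)
    if premises is None:
        return {belief}              # a base premise: credit stops here
    out = set()
    for p in premises:
        out |= credit(p)
    return out

derive("B", ["A1", "A2"])            # B was inferred from A1 and A2
derive("C", ["B", "A3"])             # C was inferred from B and A3
print(sorted(credit("C")))           # → ['A1', 'A2', 'A3']
```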
Yeah, and what you will find is that these "more efficient algorithms"
are more efficient only if you let them work with non-logical
knowledge representations ;-p
In NM, we do in fact have a unified logical representation -- but we
also have a number of other, more specialized representations that apply
in various contexts, and that seem to be necessary to enable passably
efficient processing...
All you are offering are promises that you think you can figure out how
to do all cognitive processing efficiently using logic -- but people
have tried this for many decades without success. Dozens of brilliant
researchers have spent their careers specifically working on assignment
of credit using logic-related methods. My question is what unique
insight do you have that can solve this problem where so many others
have failed?
-- Ben G
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303