On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> > Philip Goetz gave an example of an intrusion detection system that learned
> > information that was not comprehensible to humans.  You argued that he
> > could have understood it if he tried harder.
>
> No, I gave five separate alternatives, most of which put the blame on the
> system for not being able to compress its data pattern into knowledge and
> explain it to Philip.

But Mark, as a former university professor, I can testify to the
difficulty of compressing one's knowledge into a comprehensible form for
communication to others!!

Consider the case of mathematical proof.  Given a tricky theorem to
prove, I can show students the correct approach.  But my knowledge of
**why** I take the strategy I do is a lot tougher to communicate.
Most of advanced math education is about "learning by example" -- you
show the student a bunch of proofs and hope they pick up the spirit of
"how to prove" stuff in various domains.  Explicitly articulating and
explaining knowledge about "how to prove" is hard...

The point is, humans are sometimes like these simplistic machine
learning algorithms, in that we are able to do stuff and yet **not**
able to articulate how we do it....

OTOH we do have a process of turning our implicit know-how into
declarative knowledge for communication to others.  It's just that
this process is sometimes very ineffective ... its effectiveness
varies a lot by domain, as well as according to many other factors...

So I agree that this sort of machine learning algorithm that can only
do, but not explain, is not an AGI .... but I don't agree that it
can't serve as part of an AGI.

However, one thing we have tried to do in Novamente is to couple a
declarative reasoning component with a "machine learning style"
procedural learning component, in such a way that the opaque
procedures learned by the latter can -- if the system chooses to
expend resources on doing so -- be tractably converted into the form
utilized by the former...
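
To make the idea concrete, here is a rough sketch in Python, using
scikit-learn as a stand-in -- a generic "distill the opaque model into
readable rules" trick, not Novamente's actual mechanism or
representations, and all the names and parameters below are just
illustrative:

# Illustrative only: an opaque procedural learner (an MLP) paired with
# an on-demand "explanation" step that distills its behaviour into
# readable rules via a shallow decision tree.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# 1. "Procedural" learning: an opaque model that can do, but not explain.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
opaque = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                       random_state=0).fit(X, y)

# 2. Only when the system chooses to spend resources on explanation:
#    relabel the data with the opaque model's own outputs and fit a
#    small, transparent surrogate to them (model distillation).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, opaque.predict(X))

# 3. Read the surrogate out as declarative if/then rules.
feature_names = [f"f{i}" for i in range(X.shape[1])]
print(export_text(surrogate, feature_names=feature_names))

# Fidelity check: how closely the declarative approximation tracks the
# opaque procedure it was extracted from.
print("agreement with opaque model:",
      (surrogate.predict(X) == opaque.predict(X)).mean())

The surrogate rules are only an approximation of the opaque procedure,
which is exactly the tradeoff: you spend resources to get a declarative
description that is tractable to reason about but lossy.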

-- Ben
