YKY wrote:
> I think Cyc failed mainly because their KB is not large enough to make
> useful inferences.  We need a huge KB indeed.

But if so, then doesn't this suggest the whole approach (hand-encoding
of knowledge-items) is a bad path...

Cyc's KB is already big and so many man-years were put into building it...

If we need a KB orders of magnitude larger to make that approach work,
doesn't that mean we should use another approach?

Like, er, embodied learning or NL information extraction / conversation ...
which have the potential to allow rules to be learned implicitly from
experience rather than explicitly via human hard-coding...

I don't understand why, if you think Cyc's KB is too small but their
knowledge representation is basically right, you would want
to start a Cyc-like project without billion-dollar funding.

Do you really think you're going to get random folks online to
enter lots of knowledge accurately in a logical formalism?

It takes a lot of training to accurately enter knowledge in this way.
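To give a concrete sense of why the training matters: here is a rough CycL-style sketch (notation approximate; the specific constants are illustrative, not taken from Cyc's actual KB). Even a trivial fact forces the contributor to get subtle distinctions right:

```
;; "France is a country" -- looks easy, but:
(#$isa #$France #$Country)               ;; instance-of relation
(#$genls #$Country #$GeopoliticalEntity) ;; subtype-of relation
;; Confusing #$isa (instance-of) with #$genls (subtype-of) is a
;; classic novice error, and it silently corrupts downstream
;; inference rather than producing a visible failure.
```

An untrained volunteer has no way to know which of these predicates applies, which is exactly the accuracy problem with crowdsourced formal knowledge entry.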

The small percentage of the population that has this training has
better things to do than spend large amounts of time formally
encoding knowledge... and generally these people are busy with
paying jobs...

It just doesn't seem a pragmatically feasible approach, setting aside
all my doubts about the AI viability of it (i.e., I'm not so sure that even
if you spent a billion dollars on hand-coding of rules, this would be
all that helpful for AGI, in the absence of a learning engine radically
different in nature from typical logical reasoning engines...)

--  Ben G

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/