On 4/19/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> I would favor "statistical rule induction", and indeed it might make
> sense to seed the rule inducer with some hand-crafted rules, to guide
> its learning in the right direction.
Sorry, I made a mistake -- I forget things that I have previously thought
through!  "Rule-based" is not exactly my idea.

Actually my ideal system is one that uses induction, abduction, and truth
maintenance.  It allows the AGI to start as a *tabula rasa* baby, and then
learn language incrementally.  So, web-surfing lay people can teach it.  The
only problem is that the algorithms involved are very complex...

It's misleading to call my approach rule-based, because that reminds people
of old-time expert systems.  It can be more aptly termed "a
consistency-seeking, learning system built on a logic-based KR".

If you're interested in this approach, perhaps we could collaborate.  I am
currently looking into algorithms for logic-based abduction.
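To give a flavor of what logic-based abduction involves, here is a minimal toy sketch: given propositional Horn rules and an observation, search for the smallest sets of "abducible" assumptions that entail the observation.  All atoms and rules below are invented for illustration, not drawn from any real system.

```python
# Toy propositional abduction: find minimal hypothesis sets that, together
# with the rules, entail an observation.  Atoms/rules are made-up examples.
from itertools import combinations

RULES = [  # (body, head): the body atoms jointly imply the head
    ({"rain"}, "wet_grass"),
    ({"sprinkler"}, "wet_grass"),
    ({"wet_grass"}, "wet_shoes"),
]
ABDUCIBLES = ["rain", "sprinkler"]  # atoms we are allowed to assume

def entails(facts, goal):
    """Forward-chain the Horn rules from `facts`; check if `goal` is derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return goal in known

def abduce(observation):
    """Return the smallest sets of abducibles that explain the observation."""
    for size in range(len(ABDUCIBLES) + 1):
        found = [set(c) for c in combinations(ABDUCIBLES, size)
                 if entails(c, observation)]
        if found:
            return found  # smallest explanations first
    return []

print(abduce("wet_shoes"))  # either assumption alone explains the observation
```

Real abduction over a first-order KR is of course much harder (unification, consistency checking, ranking explanations), which is exactly why the algorithms get complex.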

> I really doubt that a collaboration of web-surfing NL enthusiasts is
> going to create better rules than the linguistics community has done so
> far.

You're right.  It is unreasonable to expect web-surfing lay people to be
able to enter rules directly.  It is only reasonable to expect lay people to
talk with and teach a baby AGI, in NL.

Consider this reasoning:
1. The NL task is ultimately to map NL sentences to the KR scheme.
2. For statistical learning, a labelled corpus is much preferable to an
unlabelled one.  In other words, you'd want a corpus with NL sentences and
their KR translations side by side.
3. Lay people cannot master a real KR scheme (e.g. Cyc) because it's too
complex.  Therefore, they cannot produce *labelled* training examples.
4. A well-funded project may be able to translate a large number of NL
sentences into KR, similar to the Penn Treebank for syntax, but that takes
$$$.
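To make point 2 concrete, a labelled entry would pair each NL sentence with its KR translation, something like the sketch below.  The predicate vocabulary here is hypothetical, not taken from Cyc or any real KR scheme.

```python
# A sketch of what labelled NL-to-KR corpus entries could look like.
# The predicate names are invented for illustration only.
labelled_corpus = [
    {"nl": "John gave Mary a book.",
     "kr": "(gave :agent John :recipient Mary :object book)"},
    {"nl": "The cat is on the mat.",
     "kr": "(on :figure cat :ground mat)"},
]

def to_training_pair(entry):
    """Turn a corpus entry into the (input, target) pair a
    statistical learner would train on."""
    return entry["nl"], entry["kr"]

pairs = [to_training_pair(e) for e in labelled_corpus]
```

Producing thousands of such pairs by hand is exactly the expensive, expert-only labour described in points 3 and 4.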

It seems that the best bet is to let the AGI learn language like a baby.

> Why not just create a Web UI allowing users to enter additional rules
> for some existing grammar, such as the Link Grammar (my personal fave)
> or XTag (too complex for my taste)?  I really think few people will
> gain the needed skill and understanding to contribute.  But we did add
> some rules to the Link Grammar for a paid NLP consulting project a few
> years back.

That was my first-blush approach, but it's still impractical.  Very few
people can master both a KR scheme and the computational grammar rules.

Basic English is not all that unambiguous.  Sentences may be short, but
ambiguities from anaphora and prepositions remain.

> If you're going to restrict your AI to a special subset of English,
> then it can't read free text anyway... all you can do is chat with it.
> So why not just chat with it in Lojban, which has full expressive power,
> barely ambiguous semantics, and totally unambiguous syntax?

Lojban has its strengths, sure.  But the problem with Lojban is that so few
people can speak it.  We need a big community of "AGI babysitters". =)

YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
