First-order logic (FOL) is good for expressing simple facts like "all birds 
have wings" or "no bird has hair", but not statements like "most birds can 
fly".  To handle those you have to at least extend it with fuzzy logic, 
attaching a probability and a confidence to each statement.
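
Roughly, I mean something like the difference below.  This is just a toy 
sketch in Python; the rule format and the numbers are made up, not taken 
from any particular formalism.

# Toy sketch: a strict FOL rule versus a graded one.  The rule format
# and the numbers are made up for illustration.

# Strict FOL: "all birds have wings" is true of every bird, no exceptions.
universal_rules = [
    ("bird", "has_wings"),            # for all X, bird(X) => has(X, wings)
]

# Graded extension: "most birds can fly" needs a probability, and ideally
# a confidence in that probability, attached to the rule itself.
graded_rules = [
    # (premise, conclusion, probability, confidence)
    ("bird", "can_fly",   0.85, 0.7),
    ("bird", "has_wings", 0.99, 0.9),
]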

A second problem is how you ground the terms.  If you have "for all X, 
bird(X) => has(X, wings)", where do "bird", "wings", and "has" get their 
meanings?  The terms do not map 1-1 to English words, even though we may use
the same notation.  For example, you can talk about the wings of a building, or 
the idiom "wing it".  Most words in the dictionary list several definitions 
that depend on context.  Also, words gradually change their meaning over time.
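
To put it another way, the same surface word has to end up as different 
symbols depending on sense, and nothing in the notation itself tells you 
which one is meant.  A made-up illustration (the sense labels are invented):

# Made-up illustration: one English word, several unrelated symbols.
senses_of_wing = {
    "wing#1": "part of a bird, used for flight",   # has(bird, wing#1)
    "wing#2": "a section of a building",           # has(building, wing#2)
    "wing#3": "to improvise, as in 'wing it'",     # idiomatic, not a part at all
}
# Nothing in the notation has(X, wings) says which of these is meant;
# that mapping has to come from somewhere outside the logic.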

I think FOL represents complex ideas poorly.  Try translating what you just 
wrote into FOL and you will see what I mean.
 
-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Philip Goetz <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, November 28, 2006 5:45:51 PM
Subject: Re: [agi] Understanding Natural Language

Oops, Matt is actually making a different objection than Josh was.

> Now it seems to me that you need to understand sentences before you can 
> translate them into FOL, not the other way around. Before you can translate 
> to FOL you have to parse the sentence, and before you can parse it you have 
> to understand it, e.g.
>
> I ate pizza with pepperoni.
> I ate pizza with a fork.
>
> Using my definition of understanding, you have to recognize that "ate with a 
> fork" and "pizza with pepperoni" rank higher than "ate with pepperoni" and 
> "pizza with a fork".  A parser needs to know millions of rules like this.

Yes, this is true.  When I said "neatly", I didn't mean "easily".  I meant 
that the correct representation in predicate logic is very similar to the 
English and doesn't lose much meaning.  It was misleading of me to say that 
it's a good starting point, though, since you do have to do a lot of work to 
get those predicates.
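
To make that concrete, something has to score the competing attachments, 
whether the preferences are hand-written or learned from text.  The scores 
and function below are invented purely to illustrate the shape of the 
problem, not how any real parser does it:

# Invented-for-illustration preference scores for competing attachments.
# A real system would have to encode or learn millions of these.
attachment_score = {
    ("eat", "with", "fork"):        0.9,  # instrument reading is plausible
    ("pizza", "with", "pepperoni"): 0.9,  # topping reading is plausible
    ("eat", "with", "pepperoni"):   0.1,  # odd: pepperoni as an instrument
    ("pizza", "with", "fork"):      0.1,  # odd: a fork as a topping
}

def attach(verb, noun, prep, obj):
    """Pick whichever attachment the preference scores rank higher."""
    verb_score = attachment_score.get((verb, prep, obj), 0)
    noun_score = attachment_score.get((noun, prep, obj), 0)
    if verb_score >= noun_score:
        return ("verb-attach", verb, prep, obj)
    return ("noun-attach", noun, prep, obj)

print(attach("eat", "pizza", "with", "fork"))       # verb-attach
print(attach("eat", "pizza", "with", "pepperoni"))  # noun-attach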

A predicate representation can be very useful.  This doesn't mean that
you have to represent all of the predications that could be extracted
from a sentence.  The NLP system I'm working on does not, in fact, use
a parse tree, for essentially the reasons Matt just gave.  It doesn't
want to make commitments about grammatical structure, so instead it
just groups things into phrases, without deciding what the
dependencies are between those phrases, and then has a bunch of
different demons that scan those phrases looking for particular
predications.  As you find predications in the text, you can eliminate
certain choices of lexical or semantic category for words, and
eliminate arguments so that they can't be re-used in other
predications.  You never actually find the correct parse in our system, but 
you could if you wanted to.  It's just that we've already extracted the 
meaning we're interested in by the time we have enough information to get 
the right parse, so the parse tree isn't of
much use.  We get the predicates that we're interested in, for the
purposes at hand.  We might never have to figure out whether pepperoni
is a part or an instrument, because we don't care.
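
Very roughly, and with everything renamed, the flow looks something like the 
sketch below.  It is not our actual code; the phrase grouping and the demons 
are invented for illustration.

# Much-simplified sketch: group words into phrases, then let independent
# "demons" scan the phrases for the predications they know about.  All
# names and data structures here are invented for illustration.

sentence = "I ate pizza with a fork".split()

# Step 1: group words into phrases, without deciding how the phrases
# attach to one another.  (In reality this step is itself nontrivial.)
phrases = [sentence[0:1], sentence[1:2], sentence[2:3], sentence[3:6]]

def instrument_demon(phrases, claimed):
    """Looks for an 'ate ... with <tool>' pattern and claims the arguments."""
    preds = []
    if ["ate"] in phrases and ["with", "a", "fork"] in phrases:
        if "fork" not in claimed:
            preds.append(("instrument", "eat", "fork"))
            claimed.add("fork")   # consumed: can't be reused by another demon
    return preds

def topping_demon(phrases, claimed):
    """Looks for 'pizza with <food>'; finds nothing in this sentence."""
    return []

claimed_args = set()
predications = []
for demon in (instrument_demon, topping_demon):
    predications += demon(phrases, claimed_args)

print(predications)   # [('instrument', 'eat', 'fork')]
# No parse tree was ever built; we only pulled out the predications we wanted.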

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


