----- Original Message ----
From: Benjamin Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, January 9, 2008 4:04:58 PM
Subject: Re: [agi] Incremental Fluid Construction Grammar released

>     And how would a young child or foreigner interpret "on the Washington
> Monument" or "shit list"?  Both are physical objects and a book *could* be
> resting on them.

Sorry, my shit list is purely mental in nature ;-) ... at the moment, I
maintain a task list but not a shit list... maybe I need to get better
organized!!!

> Ben, your question is *very* disingenuous.

Who, **me** ???

> There is a tremendous amount of
> domain/real-world knowledge that is absolutely required to parse your
> sentences.  Do you have any better way of approaching the problem?
>
> I've been putting a lot of thought and work into trying to build and
> maintain precedence of knowledge structures with respect to disambiguating
> (and overriding incorrect) parsing . . . . and don't believe that it's going
> to be possible without a severe amount of knowledge . . . .
>
> What do you think?

OK...

Let's assume one is working within the scope of an AI system that
includes an NLP parser and a logical knowledge representation system,
and needs some intelligent way to map the output of the former into
the latter.

Then, in this context, there are three approaches, which may be tried
alone or in combination:

1)
Hand-code rules to map the output of the parser into a much less
ambiguous logical format (see the rough sketch after this list)

2)
Use statistical learning across a huge corpus of text to somehow infer
these rules
[I never fleshed out this approach, as it seemed implausible, but I
have to recognize its theoretical possibility]

3)
Use **embodied** learning, so that the system can statistically infer
the rules from the combination of parse trees with the logical
relationships it observes to describe situations it sees
[This is the best approach in principle, but it may require years and
years of embodied interaction for a system to learn.]
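
To make approach 1 concrete, here is a rough sketch of what one such
hand-coded mapping rule might look like.  The parse representation, the
relation names, and the is_physical_object lookup are all invented for
illustration; this is not code from Novamente, texai, or any other actual
system.

# Rough sketch of approach 1: a hand-coded rule that maps a dependency-style
# parse of "X is on Y" to an unambiguous logical relation.  All names and the
# parse format are invented for illustration.

def is_physical_object(word):
    # Stand-in for the knowledge-base lookup such a rule would really need.
    return word in {'book', 'table', 'monument'}

def map_spatial_on(parse):
    # parse for "The book is on the table" might look like:
    #   {'subj': ('is', 'book'), 'prep': ('is', 'on'), 'pobj': ('on', 'table')}
    if parse.get('prep', ('', ''))[1] != 'on':
        return None
    figure = parse['subj'][1]
    ground = parse['pobj'][1]
    # The hand-coded domain knowledge lives here: commit to the spatial
    # reading of "on" only when both arguments denote physical objects.
    if is_physical_object(figure) and is_physical_object(ground):
        return ('on_top_of', figure, ground)
    return ('on_abstract', figure, ground)   # e.g. "on my shit list"

print(map_spatial_on({'subj': ('is', 'book'),
                      'prep': ('is', 'on'),
                      'pobj': ('on', 'table')}))   # ('on_top_of', 'book', 'table')

Approach 3 would aim to induce rules of this same general shape
statistically, from pairings of parse trees with the logical relationships
the system observes to hold in the situations it sees, rather than writing
them by hand.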


Obviously, Cycorp has taken Approach 1, with only modest success.  But
I think part of the reason they have not been more successful is a
combination of a bad choice of parser with a bad choice of knowledge
representation.  They use a phrase structure grammar parser and
predicate logic, whereas I believe if one uses a dependency grammar
parser and term logic, the process becomes a lot easier.  So far as I
can tell, in texai you are replicating Cyc's choices in this regard
(phrase structure grammar + predicate logic).
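
For instance (toy notation only; these are not actual Cyc, Novamente, or
texai structures), the two choices give quite different raw material for
"The book is on the table":

# Toy illustration only -- invented notation, not actual Cyc, Novamente, or
# texai structures -- of the two representation choices for the sentence
# "The book is on the table".

# Phrase structure grammar parse feeding predicate logic (the Cyc-style route):
phrase_structure = ('S',
                    ('NP', 'the', 'book'),
                    ('VP', 'is', ('PP', 'on', ('NP', 'the', 'table'))))
predicate_logic = 'on(book1, table1)'

# Dependency grammar parse feeding term logic: the head-to-dependent links
# already line up with the arguments of the relation term, which is what
# makes the mapping rules simpler to state.
dependency_links = [('is', 'subj', 'book'), ('is', 'prep', 'on'),
                    ('on', 'pobj', 'table')]
term_logic = '((book1 * table1) --> on)'   # one NARS-style way of writing it

print(predicate_logic)
print(term_logic)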

Yes, the Texai implementation of Incremental Fluid Construction Grammar follows
the phrase structure approach, in which leaf lexical constituents are grouped
into a structure (i.e., construction) hierarchy.  Yet, because it is incremental
and thus cognitively plausible, it should scale to longer sentences better than
any non-incremental alternative.  The mapping of form to predicate logic
(RDF-style) is facilitated both by Fluid Construction Grammar (FCG) and by
Double R Grammar (DRG).  I am using the production rule engine from FCG,
enhanced to operate incrementally, and the construction theory from DRG, whose
focus is on referents and the relationships among them.  For quantifier scoping
I expect to use Minimal Recursion Semantics, which should plug into the FCG
feature structure.
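
To give the flavor of the target output, here is a rough sketch of the kind
of RDF-style structure such a parse might be mapped to.  The namespace, class
names, and property names are invented for illustration and are not the actual
texai vocabulary.

# Rough sketch only: invented namespace and vocabulary, not the texai ontology.
# The kind of RDF-style (subject, predicate, object) triples that an
# incremental parse of "The book is on the table" might ultimately produce.

EX = 'http://example.org/kb#'

triples = [
    (EX + 'book_1',      'rdf:type',    EX + 'Book'),
    (EX + 'table_1',     'rdf:type',    EX + 'Table'),
    (EX + 'situation_1', 'rdf:type',    EX + 'OnTopOfSituation'),
    (EX + 'situation_1', EX + 'theme',  EX + 'book_1'),    # the book ...
    (EX + 'situation_1', EX + 'ground', EX + 'table_1'),   # ... on the table
]

for s, p, o in triples:
    print(s, p, o)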

-Steve
 
Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

