Ben,

A very good post.

It is a more detailed, and more valuable, statement of what a guy who
worked at Thinking Machines told me in the late eighties -- that it had
become pretty obvious you can't have good NL understanding unless you can
reason from world knowledge.

Another argument for breaking the small-machine mindset.

Ed Porter

-----Original Message-----
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 02, 2007 6:56 PM
To: agi@v2.listbox.com
Subject: Re: [agi] NLP + reasoning?

Linas,

Yes, and in the first email I wrote, the one that started this thread, I
stated, more or less: "yes, I am aware that many have tried, and that it's
a swamp, and can anyone elucidate why?"  And, so far, no one has been able
to answer that question, even as they firmly assert that surely it is a
swamp.  Nor has anyone attempted to posit any mechanisms that avoid that
swamp, other than thought bubbles that state things like "starting from
a clean slate, my system will be magic".

I think the main problems you'll find with this kind of system are as
follows.

1)
The difficulties with the NLP part..

What's hard about getting an NLP parser tuned well enough to output a
whole bunch of complex logical relationships into a knowledge base, based
on interpreting English sentences?

-- Reference resolution
-- Semantic disambiguation, esp. of words besides nouns and verbs
-- Preposition disambiguation

State-of-the-art NLP tech handles these things only in a pretty limited
way; the toy sketch below illustrates the kind of ambiguity involved.
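
To make concrete what "disambiguation" means here, consider the classic
"I saw the man with the telescope."  Below is a minimal sketch in Python,
with invented predicate names (it is an illustration, not any particular
parser's output format), of how a single prepositional phrase forces a
choice between two incompatible logical forms:

# Toy illustration: one sentence, two incompatible logical forms.
# Predicate names are invented for illustration only.

sentence = "I saw the man with the telescope"

# Reading 1: "with the telescope" attaches to the verb "saw"
# (the telescope is the instrument of the seeing event).
reading_1 = [("see", "I", "man"),
             ("instrument", "see_event_1", "telescope")]

# Reading 2: "with the telescope" attaches to the noun "man"
# (the man is the one holding the telescope).
reading_2 = [("see", "I", "man"),
             ("with", "man", "telescope")]

# A parser feeding a KB must commit to one reading, but the right
# choice rests on world knowledge (telescopes are instruments for
# seeing), which is exactly the knowledge the KB is supposed to be
# accumulating in the first place.
for reading in (reading_1, reading_2):
    print(reading)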

2)
The difficulties with the reasoning part..

What's hard about tuning a logical reasoning engine to carry out effective
reasoning based on a large body of relationships extracted from natural
language?

-- Propagating uncertainty usefully thru logical reasoning steps (I think
we've solved this once with PLN, though; a toy sketch follows below)
-- Inference control!  ... choosing what inferences to do, which is only
a problem when you have a very large KB (also sketched below)
-- Contextual understanding.  Most propositions parsed out of text will
have validity only in a certain context, but the context is left implicit
and needs to be inferred.
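
To make the uncertainty-propagation point concrete, here is a deliberately
naive Python sketch.  It is NOT the PLN deduction rule -- just a strawman,
independence-based rule with made-up numbers -- but it shows why confidence
must degrade as inference chains lengthen over noisy NLP-extracted
relationships:

# Naive (strength, confidence) propagation through the deduction step
# A->B, B->C |- A->C.  A strawman, not PLN; all numbers are invented.

def deduce(ab, bc):
    (s_ab, c_ab), (s_bc, c_bc) = ab, bc
    s_ac = s_ab * s_bc          # crude independence assumption
    c_ac = c_ab * c_bc * 0.9    # each chaining step costs confidence
    return (s_ac, c_ac)

ab = (0.9, 0.8)   # e.g. "dogs are mammals", extracted from text
bc = (0.9, 0.8)   # e.g. "mammals are animals"
print(deduce(ab, bc))   # one step: strength ~0.81, confidence ~0.58

tv = ab
for _ in range(5):      # five chained steps: confidence collapses
    tv = deduce(tv, bc)
print(tv)

Doing this *usefully* -- so that long chains stay informative rather than
collapsing toward noise -- is the hard part.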

..
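
The inference control problem is also easy to state concretely.  Here is a
toy forward chainer in Python (the rule and facts are invented for
illustration): with a blind control strategy, every pass tries every pair
of facts against the rule, so a KB with millions of NLP-extracted
relationships drowns in candidate inferences long before deriving anything
useful:

from itertools import product

# Tiny KB; imagine millions of such triples extracted from text.
kb = {("isa", "dog", "mammal"),
      ("isa", "mammal", "animal"),
      ("isa", "animal", "thing")}

def forward_chain(kb, max_passes=3):
    for _ in range(max_passes):
        new = set()
        # Blind control strategy: O(N^2) rule applications per pass.
        for (r1, a, b), (r2, c, d) in product(kb, kb):
            if r1 == r2 == "isa" and b == c:
                new.add(("isa", a, d))
        if new <= kb:
            break               # nothing new derived; fixed point
        kb = kb | new
    return kb

print(sorted(forward_chain(kb)))

The real question -- which this sketch dodges entirely -- is the heuristic
that picks the few worthwhile inferences out of the quadratically many
candidates.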

The issue I see is that tasks like
-- inference control
-- contextual interpretation of propositions
-- preposition disambiguation
-- nominal reference resolution

are not just technical problems; they may well be "AGI-hard" problems in
themselves, in that they may be solvable only by a software program that
somehow embodies a fairly thorough understanding of the world to which
the NLP-derived propositions pertain.

The deceptiveness of the NLP+logic approach to AGI is that these big
issues are made to seem like small ones, because they get placed on long
lists alongside other issues that aren't so profound...

I think that many people who have started down the path you're on have
realized this fact, and have wound up spending their research careers
working on one of the AGI-hard subsidiary problems I've mentioned,
or other similar ones ;-)

As for how to avoid these problems, I have already stated my approach:
couple the sort of stuff you're doing with embodiment, in physical or
virtual worlds.  That is how humans get around these problems.  Exactly
how embodied experience helps with these problems is of course a long
story.
In the case of humans, you can consult a huge body of developmental
psychology literature.  In the case of AGI systems, it depends on your AGI
approach; we have thought this through pretty carefully in the context of
Novamente...

-- Ben G

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60648543-fe6ef9
