I wish I had written enough glue around ConceptNet to contribute
something to this discussion, because apparently some or all of Cyc's
KB is in there. Its natural-language faculties are pretty extensive,
chopping up English speech into entities, adjectives, prepositions,
subject-event pairs, and verb-subject-object-object sets. I've been
associating sets of these with the phrases they were extracted from,
so that I can perform rapid salience ranking when given a set of parts
of speech from a text to which, for instance, my bot wants to respond.
This has yielded some interesting insights... for example, there are
apparently four and a half times as many unique entities as adjectives
in conversational English, a ratio I would never have thought about
on my own.
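
If that sounds abstract, here's roughly the shape of it. This is a
minimal sketch, not my actual code; the class and method names are
made up for illustration:

    from collections import defaultdict

    # Hypothetical sketch: index each phrase by the terms (entities,
    # adjectives, etc.) extracted from it, then rank stored phrases by
    # how many extracted terms they share with a new input text.

    class SalienceIndex:
        def __init__(self):
            # term -> set of phrases that term was extracted from
            self.phrases_by_term = defaultdict(set)

        def add(self, phrase, terms):
            for term in terms:
                self.phrases_by_term[term].add(phrase)

        def rank(self, terms):
            # Score each stored phrase by its overlap with the input terms.
            scores = defaultdict(int)
            for term in terms:
                for phrase in self.phrases_by_term.get(term, ()):
                    scores[phrase] += 1
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    index = SalienceIndex()
    index.add("the quick brown fox", ["fox", "quick", "brown"])
    index.add("a lazy dog sleeps", ["dog", "lazy"])
    print(index.rank(["dog", "lazy", "fox"]))
    # [('a lazy dog sleeps', 2), ('the quick brown fox', 1)]

The real thing keys on the extracted part-of-speech structures rather
than bare strings, but the overlap-counting idea is the same.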

Having never gone into ConceptNet's common-sense or context
functionality, I can't say anything about it. But the natural-language
features are quite prominent and seem fairly strong. Certainly my bot
has accomplished some linguistic transforms that seemed like magic to
me, even if no one on IRC was impressed. ;(

Under the hood it uses something called MontyLingua. Does anyone know
anything about it? There's a site at
http://web.media.mit.edu/~hugo/montylingua/ and it's written in Python.
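
Going by that page, driving it looks roughly like the following. Take
this as a sketch under my assumptions: jist_predicates() is the method
name I remember from the docs for pulling out verb-subject-object
predicates, so check it against the actual API before relying on it.

    # Sketch only: assumes MontyLingua is importable and that
    # jist_predicates() extracts verb-subject-object predicates
    # from raw text, as I recall from the documentation.
    from MontyLingua import MontyLingua

    monty = MontyLingua()
    sentence = "The dog chased the cat around the garden."

    predicates = monty.jist_predicates(sentence)
    print(predicates)  # expect predicates along the lines of chase/dog/cat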

I set out with the idea that strong NLP today was mostly a matter of
tying together existing technologies with the right kind of string,
although I didn't expect to make intelligence. But intelligence could
be a prerequisite for NLP...


On 9/29/08, David Hart <[EMAIL PROTECTED]> wrote:
> On Tue, Sep 30, 2008 at 5:23 AM, Mike Tintner
> <[EMAIL PROTECTED]>wrote:
>
>>
>> How does Stephen or YKY or anyone else propose to "read between the
>> lines"?
>> And what are the basic "world models", "scripts", "frames" etc etc. that
>> you
>> think sufficient to apply in understanding any set of texts, even a
>> relatively specialised set?
>>
>> (Has anyone seriously *tried* understanding passages?)
>>
>
> That's a most thoughtful and germane question! The short answer is no, we're
> not ready yet to even *try* to tackle understanding passages. Reaching that
> goal is definitely on the roadmap, though, and there's a concrete plan to get
> there involving learning through vast and varied activities experienced over
> the course of many years of practically continuous residence in numerous
> virtual worlds. The plan indeed includes the continuous creation, variation
> and development of mental world-models within an OCP-based mind. Attention
> allocation and many other mind dynamics
> (CIMDynamics<http://opencog.org/wiki/Special:Search?search=CIMDynamics>)
> crucial to this world-modeling faculty must be adequately developed, tested
> and tuned as a prerequisite to begin trying to understand passages (and also
> to generate and communicate imagined world-models as a human storyteller
> would; a curious byproduct of an intelligent system that can reason about
> potential events and scenarios!)
>
> NB: help is needed on the OpenCog wiki to better document many of the
> concepts discussed here and elsewhere, e.g. *Concretely-Implemented Mind
> Dynamics* (CIMDynamics) requires a MindOntology page explaining it
> conceptually, in addition to the existing nuts-and-bolts entry in the
> OpenCogPrime section.
>
> -dave

