Extracting meaning from text correctly requires context-sensitivity,
and natural language parsers do no such reasoning on their own. An
AGI whose natural-language interface was abstracted behind some good
parser could make suppositions about the constructs it returned by
interpreting them within the most relevant contexts. But this would
only save the cognitive layer the trouble of converting grammar to
symbols and vice versa. If an AGI could acquire language naturally, I
think it would be in a better position to use it well than if it only
ever used it through an abstracted interface. An abstracted
parser/sentence-generator mechanism does seem like a sensible way to
implement internationalization, however. Although you could also
always run machine translation on the I/O of an AGI that only speaks
English...
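
To make "converting grammar to symbols" concrete, here is a minimal
sketch using NLTK (the toolkit linked below). The toy grammar and
sentence are illustrative assumptions of mine, not part of any
particular parser or AGI design:

import nltk

# A tiny context-free grammar; a real system would need vastly more
# coverage. This grammar is a made-up example.
grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> Det N
    VP -> V NP
    Det -> 'the'
    N -> 'robot' | 'ball'
    V -> 'sees'
""")

parser = nltk.ChartParser(grammar)

# The parser turns a token sequence into symbolic constructs (parse
# trees); a cognitive layer would still have to interpret those
# constructs in context.
for tree in parser.parse("the robot sees the ball".split()):
    print(tree)
# -> (S (NP (Det the) (N robot)) (VP (V sees) (NP (Det the) (N ball))))

Note the point: the parser hands back structure, not meaning.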

I can see a grammar engine as a mechanical form of grounding, like a
robot body, but only barely. I think an AGI so grounded would
eventually seek another way to communicate. At some point its
creativity and ability to communicate ideas would exceed those of its
sentence generator, and its comprehension would outgrow its parser.
I'm not sure these components belong in an AGI design. But I could be
wrong.
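
The generator side has the same ceiling, and NLTK makes it easy to
see: you can enumerate everything a toy CFG can ever say. This reuses
the made-up grammar from the sketch above, so the same caveats apply:

from nltk import CFG
from nltk.parse.generate import generate

grammar = CFG.fromstring("""
    S -> NP VP
    NP -> Det N
    VP -> V NP
    Det -> 'the'
    N -> 'robot' | 'ball'
    V -> 'sees'
""")

# Enumerate every sentence this grammar can express: exactly four.
# However inventive the mind driving it becomes, a fixed grammar
# caps what the generator can put into words.
for tokens in generate(grammar):
    print(" ".join(tokens))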


On 9/29/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Eric,
>
> Thanks for link. Flipping through quickly, it still seemed sentence-based.
>
> Here's an example of time flipping - "fast-forwarding" text - and the kind
> of jumps that the mind can make
>
> "AGI Year One. "AGI is one of the great technological challenges. We believe
> we have the basic technology - the basic modules - to meet that challenge."
> AGI Year Five. "We can reach the goal of AGI in 10 years, if we really,
> really try."
> AGI Year Ten.  "It may take longer than we thought, but we can get there..."
> AGI Year Fifteen: "It's proved a much larger problem than we ever
> imagined.."    "
>
> [n.b. I'm not trying to be historically or otherwise accurate :)]
>
> But note how your mind had no problem creating a very complex
> underlying time-jumping scenario to understand - and fill in/read
> between the lines of - that text. No current approach has the
> slightest idea how to do that, I suggest. You can't do it with a
> surface approach, simply analysing how words are used across however
> many million verbally related sentences in texts on the net.
>
>
>
>> http://video.google.ca/videoplay?docid=-7933698775159827395&ei=Z1rhSJz7CIvw-QHQyNkC&q=nltk&vt=lf
>>
>> NLTK video ;O
>>
>> On 9/29/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
>>> David,
>>>
>>> Thanks for the reply. Like so many other things, though, working
>>> out how we understand texts is central to understanding GI - and
>>> something to be done *now*. I've just started looking at it, but
>>> immediately I can see that what the mind does - how it jumps around
>>> in time and space and POV and person/subject - and flexibly applies
>>> its world/subworld models - is quite awesome.
>>>
>>> I think the word/sentence focus, BTW, is central to cognitive
>>> science *and* the embodied cog. sci. of Lakoff and co., as well as
>>> to AI/AGI.
>>>
>>> But the understanding of language understanding will only really
>>> come alive when we move the focus to passages - and to how we use
>>> language to construct a) stories, b) arguments, and c) scenes
>>> (descriptive passages). [I wonder whether there are any other major
>>> categories of language.]
>>>
>>> It also entails a switch from just a one-sided embodied POV to a
>>> two-sided
>>> embodied-embedded overview, looking at how language is embedded in the
>>> world.
>>>
>>> To focus on sentences alone is like focussing on the odd frame in a
>>> movie.
>>> You can't get the picture at all.
>>>
>>> A passage/text approach will v. quickly answer Matt's:
>>>
>>> "I mean that a more productive approach would be to try to understand why
>>> the problem is so hard."
>>>
>>>
>>>   David:
>>>
>>>     How does Stephen or YKY or anyone else propose to "read between
>>> the lines"? And what are the basic "world models", "scripts",
>>> "frames", etc. that you think sufficient to apply in understanding
>>> any set of texts, even a relatively specialised set?
>>>
>>>     (Has anyone seriously *tried* understanding passages?)
>>>
>>>   That's a most thoughtful and germane question! The short answer
>>> is no, we're not ready yet to even *try* to tackle understanding
>>> passages. Reaching that goal is definitely on the roadmap, though,
>>> and there's a concrete plan to get there involving learning through
>>> vast and varied activities experienced over the course of many
>>> years of practically continuous residence in numerous virtual
>>> worlds. The plan indeed includes the continuous creation, variation
>>> and development of mental world-models within an OCP-based mind.
>>> Attention allocation and many other mind dynamics (CIMDynamics)
>>> crucial to this world-modeling faculty must be adequately
>>> developed, tested and tuned as a prerequisite before we begin
>>> trying to understand passages (and also to generate and communicate
>>> imagined world-models as a human storyteller would; a curious
>>> byproduct of an intelligent system that can reason about potential
>>> events and scenarios!)
>>>
>>>   NB: help is needed on the OpenCog wiki to better document many of
>>> the concepts discussed here and elsewhere; e.g.
>>> Concretely-Implemented Mind Dynamics (CIMDynamics) requires a
>>> MindOntology page explaining it conceptually, in addition to the
>>> existing nuts-and-bolts entry in the OpenCogPrime section.
>>>
>>>   -dave
>>>
>>>