On Jan 10, 2008 10:03 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> All this discussion of building a grammar seems to ignore the obvious fact
> that in humans, language learning is a continuous process that does not
> require any explicit encoding of rules.  I think either your model should
> learn this way, or you need to explain why your model would be more successful
> by taking a different route.  Explicit encoding of grammars has a long history
> of failure, so your explanation should be good.  At a minimum, the explanation
> should describe how humans actually learn language and why your method is
> better.

Matt,

If you read the paper at the top of this list

http://www.novamente.net/papers/

you will see a brief summary of the reasoning behind the approach I am
taking.  It is only 8 pages long, so it should be quick to read, though
it obviously does not cover every detail in that space.

The abstract is as follows:

*****
Abstract: Current work is described wherein simplified versions of the
Novamente Cognition Engine (NCE) are being used to control virtual agents
in virtual worlds such as game engines and Second Life.  In this context,
an IRC (imitation-reinforcement-correction) methodology is being used to
teach the agents various behaviors, including simple tricks and
communicative acts.  Here we describe how this work may potentially be
exploited and extended to yield a pathway toward giving the NCE robust,
ultimately human-level natural language conversation capability.  The
pathway starts via using the current system to instruct NCE-controlled
agents in semiosis and gestural communication, and then continues via
integration of a particular sort of hybrid rule-based/statistical NLP
system (which is currently partially complete) into the NCE-based virtual
agent system, in such a way as to allow experiential adaptation of the
rules underlying the NLP system.
*****

I do not think that a viable design for an AGI needs to include a description of
human learning (of language or anything else).  No one understands exactly
how the human brain works yet, but that doesn't mean we can't potentially
have success with non-brain-emulating AGI approaches.

My favorite theorists of human language are Richard Hudson (see his 2007
book Language Networks) and Tomasello (see his book Constructing a
Language).  I actually believe my approach to language in AGI is quite
close to their ideas.  But I don't have time/space to justify this statement in
an email.

-- Ben

