> > > I don't know what you mean by incrementally updateable,
> > > but if you look up the literature on language learning, you will find
> > > that learning various sorts of relatively simple grammars from
> > > examples, or even, if memory serves, examples and queries, is NP-hard.
> > > Try looking for Dana Angluin's papers back in the 80's.

> > No, a thousand times no.  (Oh, why do we have to fight the same battles
> > over and over again?)
> >
> > These proofs depend on assumptions about what "learning" is, and those
> > assumptions involve a type of learning that is stupider than stupid.

> I don't think the proofs depend on any special assumptions about the
> nature of learning.
>
> Rather, the points to be noted are:
>
> 1) these are theorems about learning general grammars in a certain
> class, as n (some measure of grammar size) goes to infinity
>
> 2) NP-hardness is a statement about the worst-case time complexity of
> learning grammars of size n in that class
>
> So the reason these results are not cognitively interesting is:
>
> 1) real language learning is about learning specific grammars of
> finite size, not parametrized classes of grammars as n goes to
> infinity
>
> 2) even if you want to talk about learning over parametrized classes,
> real learning is about average-case rather than worst-case complexity
> (where the average is taken over some appropriate probability
> distribution over inputs)
>
> -- Ben G
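
To make the complexity point concrete, here is a minimal Python sketch
(illustrative only; the function names and the toy sample are mine, not
anything from Angluin's papers) of the kind of search problem these
hardness results are about: finding the smallest finite automaton
consistent with a set of labeled example strings.  The worst case of
this search is exponential, which is what the NP-hardness results
capture; on a small, structured sample it terminates immediately, which
is the fixed-size, average-case situation that matters cognitively.

    from itertools import product

    def consistent(k, delta, accept, sample):
        # delta[2*s + c] is the next state from state s on input bit c.
        for string, label in sample:
            s = 0
            for ch in string:
                s = delta[2 * s + int(ch)]
            if accept[s] != label:
                return False
        return True

    def smallest_dfa(sample):
        # Brute-force the smallest DFA over alphabet {0, 1} consistent
        # with the sample.  For k states there are k^(2k) * 2^k machines
        # to try -- exponential in the worst case -- yet for a typical
        # small target language the search is over almost instantly.
        k = 1
        while True:
            for delta in product(range(k), repeat=2 * k):
                for accept in product((False, True), repeat=k):
                    if consistent(k, delta, accept, sample):
                        return k, delta, accept
            k += 1

    # Toy sample: strings with an even number of 1s.
    sample = [("", True), ("1", False), ("11", True),
              ("101", True), ("10", False), ("0110", True)]
    print(smallest_dfa(sample)[0])   # prints 2

The theorems say that as the target size n grows, there exist samples
that force any such learner into exponential search; they say nothing
about how the search behaves on the fixed, structured grammars a
learner actually encounters.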


Any learning mechanism that could do modest analogy building across
domains, and that had the benefit of primitives involving concepts like
"on", "in", "through", "manipulate", "during", "before" (etc., etc.),
would probably be able to do the grammar learning; and in any case, the
proofs are completely incapable of representing the capabilities of
such learning mechanisms.

Such NP-hardness arguments have been (to coin a phrase) debunked every
which way from Sunday. ;-)


Richard Loosemore
