Sorry for my delay in responding... too busy to keep up with most
of this; just got some downtime and am scanning various messages:

>> > I don't know what you mean by incrementally updateable, but if
>> > you look up the literature on language learning, you will find
>> > that learning various sorts of relatively simple grammars from
>> > examples, or even if memory serves examples and queries, is
>> > NP-hard.  Try looking for Dana Angluin's papers back in the 80's.
>> 
>> No, a thousand times no.  (Oh, why do we have to fight the same
>> battles over and over again?)
>> 
>> These proofs depend on assumptions about what "learning" is, and
>> those assumptions involve a type of learning that is stupider than
>> stupid.

Ben> I don't think the proofs depend on any special assumptions about
Ben> the nature of learning.

Ben> Rather, the points to be noted are:

Ben> 1) these are theorems about the learning of general grammars in a
Ben> certain class, as n (some measure of grammar size) goes to
Ben> infinity

Ben> 2) NP-hard is about worst-case time complexity of learning
Ben> grammars in that class, of size n

These comments are of course true of any NP-hardness result.
They are reasons why the NP-hardness result does not *prove* (even
if P!=NP) that the problem is insuperable.

However, the way to bet is generally that the problem is actually
hard. Ch. 11 of WIT? gives some arguments why.

If you don't believe that, you shouldn't rely on encryption.
Encryption has all the above weaknesses in spades, and moreover
it's not even proved secure given P!=NP; that requires additional
assumptions (such as the existence of one-way functions).

Also, in addition to the hardness results, linguists have put
considerable effort into modelling natural grammars and have not
succeeded, which is further evidence that the problem is hard.
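
To make the kind of result being cited concrete: one representative
formulation from that literature is finding a smallest DFA consistent
with a set of labeled example strings, which is known to be NP-hard
(Gold's and Angluin's papers from around that period, if memory
serves). The toy Python sketch below is purely illustrative and mine,
not anyone's actual learner; brute force is fine for tiny automata,
but the search space explodes with automaton size, which is where
the hardness bites.

from itertools import product

def accepts(delta, accepting, s):
    # delta maps (state, symbol) -> next state; the start state is 0
    q = 0
    for c in s:
        q = delta[(q, c)]
    return q in accepting

def smallest_consistent_dfa(examples, alphabet, max_states=3):
    # examples: list of (string, label) pairs; try sizes 1..max_states
    for n in range(1, max_states + 1):
        keys = [(q, c) for q in range(n) for c in alphabet]
        for targets in product(range(n), repeat=len(keys)):
            delta = dict(zip(keys, targets))
            for bits in product([False, True], repeat=n):
                accepting = {q for q in range(n) if bits[q]}
                if all(accepts(delta, accepting, s) == label
                       for s, label in examples):
                    return n, delta, accepting
    return None

# e.g. "strings over {a,b} with an even number of a's"
examples = [("", True), ("a", False), ("aa", True), ("ab", False),
            ("ba", False), ("bab", False), ("aba", True), ("b", True)]
print(smallest_consistent_dfa(examples, alphabet="ab"))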


Ben> So the reason these results are not cognitively interesting is:

Ben> 1) real language learning is about learning specific grammars of
Ben> finite size, not parametrized classes of grammars as n goes to
Ben> infinity

Ben> 2) even if you want to talk about learning over parametrized
Ben> classes, real learning is about average-case rather than
Ben> worst-case complexity, anyway (where the average is over some
Ben> appropriate probability distribution)

Ben> -- Ben G


>> Any learning mechanism that had the ability to do modest analogy
>> building across domains, and which had the benefit of primitives
>> involving concepts like "on", "in", "through", "manipulate",
>> "during", "before" (etc etc) would probably be able to do the
>> grammar learning, and in any case, the proofs are completely
>> incapable of representing the capabilities of such learning
>> mechanisms.
>> 
>> Such ideas have been (to coin a phrase) debunked every which way
>> from Sunday. ;-)
>> 
>> 
>> Richard Loosemore

