Richard> Eric Baum wrote:
>>>>> I don't think the proofs depend on any special assumptions about
>>>>> the nature of learning.
>>>> 
>>>> I beg to differ.  IIRC the sense of "learning" they require is
>>>> induction over example sentences.  They exclude the use of real
>>>> world knowledge, in spite of the fact that such knowledge (or at
>>>> least <primitives involved in the development of real world
>>>> knowledge>) is posited to play a significant role in the learning
>>>> of grammar in humans.  As such, these proofs say nothing
>>>> whatsoever about the learning of NL grammars.
>>>> 
>> I fully agree the proofs don't take into account such stuff.  And I
>> believe such stuff is critical. Thus I've never claimed language
>> learning was proved hard; I've just suggested evolution could have
>> encrypted it.
>> 
>> The point I began with was, if there are lots of different locally
>> optimal codings for thought, it may be hard to figure out which one
>> is programmed into the mind, and thus language learning could be a
>> hard additional problem on top of producing an AGI. The AGI has to
>> understand what the word "foobar" means, and thus it has to have
>> (or build) a code module meaning "foobar" that it can invoke with
>> this word. If it has a different set of modules, it might be sunk
>> in communication.
>> 
>> My sense about grammars for natural language is that there are
>> lots of different equally valid grammars that could be used to
>> communicate.  For example, there are the grammars of English and of
>> Swahili. One isn't better than the other. And there is a wide
>> variety of other kinds of grammars that might be just as good, that
>> aren't even used in natural language, because evolution chose one
>> convention at random.  Figuring out what that convention is, is
>> hard; at least linguists have tried hard to do it and failed.  And
>> this grammar stuff is pretty much on top of the meanings of the
>> words. It serves to disambiguate, for example for error correction
>> in understanding. But you could communicate pretty well in pidgin,
>> without it, so long as you understand the meanings of the words.
>> 
>> The grammar learning results (as well as the experience of
>> linguists, who've tried very hard to build a model for natural
>> grammar) are, I think, indicative that this problem is hard, and it
>> seems that this problem is superimposed on top of the real world
>> knowledge aspect.

Richard> Eric,

Richard> Thank you, I think you have focussed down on the exact nature
Richard> of the claim.

Richard> My reply could start from a couple of different places in
Richard> your above text (all equivalent), but the one that brings out
Richard> the point best is this:

>> And there is a wide variety of other kinds of grammars that might
>> be just as good, that aren't even used in natural language, because
>> evolution chose one convention at random.
Richard>
Richard> ^^^^^^

Richard> This is precisely where I think the false assumption is
Richard> buried.  When I say that grammar learning can be dependent on
Richard> real world knowledge, I mean specifically that there are
Richard> certain conceptual primitives involved in the basic design of
Richard> a concept-learning system.  We all share these primitives,
Richard> and [my claim is that] our language learning mechanisms start
Richard> from those things.  Because both I and a native Swahili
Richard> speaker use languages whose grammars are founded on common
Richard> conceptual primitives, our grammars are more alike than we
Richard> imagine.

Richard> Not only that, but if the Swahili speaker and I suddenly
Richard> met and tried to discover each other's languages, we would be
Richard> able to do so, eventually, because our conceptual primitives
Richard> are the same and our learning mechanisms are so similar.

Richard> Finally, I would argue that most cognitive systems, if they
Richard> are to be successful in negotiating this same 3-D universe,
Richard> would do best to have much the same conceptual primitives
Richard> that we do.  This is much harder to argue, but it could be
Richard> done.

Richard> As a result of this, evolution would not by any means have
Richard> been making random choices of languages to implement.  It
Richard> remains to be seen just how constrained the choices are, but
Richard> there is at least a prima facie case to be made (the one I
Richard> just sketched) that evolution was extremely constrained in
Richard> her choices.

Richard> In the face of these ideas, your argument that evolution
Richard> essentially made a random choice from a quasi-infinite space
Richard> of possibilities needs a great deal more to back it up.  The
Richard> grammar-from-conceptual-primitives idea is so plausible that
Richard> the burden is on you to give a powerful reason for rejecting
Richard> it.

Richard> Correct me if I am wrong, but I see no argument from you on
Richard> this specific point (maybe there is one in your book... but
Richard> in that case, why say without qualification, as if it was
Richard> obvious, that evolution made a random selection?).

Richard> Unless you can destroy the grammar-from-conceptual-primitives
Richard> idea, surely these arguments about hardness of learning have
Richard> to be rejected?


The argument, in very brief, is the following. Evolution found a
very compact program that does the right thing. (This is my
hypothesis, not claimed to be proved, but lots of reasons to believe
it are given in WIT?.) Finding such programs is NP-hard. The same
arguments indicate that you don't need to find the global optimum,
the shortest, best program, for it to work, and there's no reason to
believe evolution did. You just need to find a sufficiently good one
(which is still typically NP-hard).
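
To make the search problem concrete, here is a toy sketch (just an
illustration in Python, not part of the formal argument; the choice of
Boolean formulas over three variables as stand-ins for "programs" is
arbitrary). It looks for the most compact formula consistent with a
small truth table by exhaustive enumeration, and the candidate counts
it prints at each size show how quickly that search blows up, which is
why in practice you settle for a sufficiently good hypothesis rather
than the provably shortest one.

from itertools import product

VARS = ['a', 'b', 'c']

def formulas(size):
    # Yield every formula tree with exactly `size` nodes, built from
    # the variables a, b, c and the operators NOT, AND, OR.
    if size == 1:
        for v in VARS:
            yield v
        return
    for sub in formulas(size - 1):
        yield ('not', sub)
    for left_size in range(1, size - 1):
        for left in formulas(left_size):
            for right in formulas(size - 1 - left_size):
                yield ('and', left, right)
                yield ('or', left, right)

def evaluate(f, env):
    if isinstance(f, str):
        return env[f]
    if f[0] == 'not':
        return not evaluate(f[1], env)
    if f[0] == 'and':
        return evaluate(f[1], env) and evaluate(f[2], env)
    return evaluate(f[1], env) or evaluate(f[2], env)

def shortest_consistent(target):
    # Exhaustive search, smallest formulas first; the number of
    # candidates grows exponentially with size.
    rows = [dict(zip(VARS, bits)) for bits in product([False, True], repeat=3)]
    size = 1
    while True:
        candidates = list(formulas(size))
        print("size %d: %d candidate formulas" % (size, len(candidates)))
        for f in candidates:
            if all(evaluate(f, r) == target(r) for r in rows):
                return f
        size += 1

# Toy "data": the truth table of a AND (b OR c).
print(shortest_consistent(lambda r: r['a'] and (r['b'] or r['c'])))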

Lots of experience with analogous problems (and various
theoretical arguments) shows that there are usually lots
(in fact, exponentially many) of locally
optimal solutions that don't look like each other in detail.
For example, consider domain structure in crystals. That's a case
where there is a single global optimum, but you don't actually
find it. If you run the process twice, you will find different domain
structures. Cases such as spin glasses are likely to be even worse.
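
Here is a minimal sketch of that phenomenon (again just a toy, in
Python: a spin-glass-style energy with random couplings, minimized by
greedy single-spin flips from many random starting points; the sizes
and seeds are arbitrary). Different runs settle into different,
mutually dissimilar local minima.

import random

N = 40
random.seed(0)

# Random symmetric couplings; the energy is E(s) = -sum_{i<j} J[i][j]*s[i]*s[j].
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        J[i][j] = J[j][i] = random.gauss(0, 1)

def local_field(s, k):
    return sum(J[k][j] * s[j] for j in range(N) if j != k)

def greedy_descent(s):
    # Keep flipping single spins as long as some flip lowers the energy.
    # Flipping spin k changes the energy by 2 * s[k] * local_field(s, k).
    improved = True
    while improved:
        improved = False
        for k in range(N):
            if s[k] * local_field(s, k) < 0:
                s[k] = -s[k]
                improved = True
    return s

minima = set()
for run in range(50):
    start = [random.choice([-1, 1]) for _ in range(N)]
    m = tuple(greedy_descent(start))
    # A configuration and its global spin-flip have the same energy; count once.
    minima.add(min(m, tuple(-x for x in m)))

print("%d distinct local minima found in 50 greedy runs" % len(minima))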
 
Evolution picked one conceptual structure, but there are likely
to be many that are just as good. Communication, however, may
well depend on having a very similar conceptual structure.

Also, in addition to getting the conceptual structure right,
I expect that grammar involves lots of other choices that are
essentially just notational choices, purely arbitrary, sitting on top
of the actual computational modules and concerned only with
parsing communication streams between different individuals.
Yes, English speakers and Swahili speakers have all these other
choices in common, because they are essentially evolved into the
genome. But that does not mean that these choices are in any way
determined, even assuming you get the conceptual structure the same.
This stuff could be purely notational.
It's this stuff that the hardness-of-grammar-learning results
pertain to most, and this is what Chomsky and Pinker mean when they
talk about an inborn language instinct. This literature does ignore
semantics, but that's because (at least in large part) it
believes there's a notational ambiguity. Since clearly there could
be such a notational ambiguity, to believe there isn't one, you have
to posit a reason why it wouldn't arise. Evolution just short-circuits
this by choosing a notation, but figuring out what that notation is
can be a hard problem, since determining a grammar from examples
is hard.
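
To see the kind of ambiguity I mean in miniature, here is one more toy
sketch (my own construction; the sample strings and the two grammars
are purely illustrative). Two different grammars both account for the
same observed sentences, yet define different languages, so the
examples alone cannot tell you which convention the speaker's grammar
actually uses.

def lang_nested(max_len):
    # Grammar 1: S -> 'ab' | 'a' S 'b'   (the language a^n b^n, up to max_len).
    return {"a" * n + "b" * n for n in range(1, max_len // 2 + 1)}

def lang_flat(max_len):
    # Grammar 2: S -> A B, A -> 'a' | 'aa', B -> 'b' | 'bb' (a finite language).
    return {a + b for a in ("a", "aa") for b in ("b", "bb") if len(a + b) <= max_len}

observed = {"ab", "aabb"}          # the sentences the learner happens to see
g1, g2 = lang_nested(8), lang_flat(8)

print(observed <= g1 and observed <= g2)   # True: both grammars fit the data
print(sorted(g1 ^ g2))                     # but they disagree on unseen strings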


Richard> Richard Loosemore

