--- "Dr. Matthias Heger" <[EMAIL PROTECTED]> wrote:

> >>>>>>>>>>>> Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote
> 
> Actually that's only true in artificial languages.  Children learn
> words with semantic content like "ball" and "milk" before they learn
> function words like "the" and "of", in spite of their higher
> frequency.
> 
> <<<<<<<<<<<<
> 
> Before they learn the words and their meanings, they have to learn to
> recognize the sounds of the words. And even if they use words like
> "with", "of", and "the" later, they must be able to separate these
> function words and relation words from object words before they learn
> any word. But separating words means classifying words, and that means
> knowledge of grammar to a certain degree.

Lexical segmentation is learned before semantics, but the rest of
grammar is learned afterward.  Babies learn to segment continuous
speech into words at 7-10 months [1].  This is before they learn their
first word, but it is detectable because babies preferentially turn
their heads toward segmentable speech.

It is also possible to guess word divisions in text without spaces
given only statistical knowledge of letter n-grams [2], as in the
sketch below.
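
Here is a toy sketch of the idea in Java (a minimal illustration of
the principle, not the code behind [2]).  It counts letter bigrams in
a training corpus with the spaces stripped out, then guesses a word
boundary in a new spaceless string wherever the letter-to-letter
transition probability dips to a local minimum:

import java.util.HashMap;
import java.util.Map;

public class SegmentGuess {
    public static void main(String[] args) {
        // Train letter bigram counts on a corpus with spaces removed.
        String corpus = ("the cat sat on the mat and the dog ate the "
                + "food and the cat ate the fish on the mat")
                .replace(" ", "");
        Map<String, Integer> bigrams = new HashMap<>();
        Map<Character, Integer> unigrams = new HashMap<>();
        for (int i = 0; i + 1 < corpus.length(); i++) {
            unigrams.merge(corpus.charAt(i), 1, Integer::sum);
            bigrams.merge(corpus.substring(i, i + 2), 1, Integer::sum);
        }

        // Smoothed estimate of P(next letter | this letter) at each
        // position of a spaceless test string.
        String test = "thedogatethefish";
        double[] p = new double[test.length() - 1];
        for (int i = 0; i < p.length; i++) {
            int b = bigrams.getOrDefault(test.substring(i, i + 2), 0);
            int u = unigrams.getOrDefault(test.charAt(i), 0);
            p[i] = (b + 0.5) / (u + 1.0);
        }

        // Guess a word boundary at each local minimum of p.
        StringBuilder out = new StringBuilder();
        out.append(test.charAt(0));
        for (int i = 1; i < test.length(); i++) {
            if (i - 1 > 0 && i < p.length
                    && p[i - 1] < p[i - 2] && p[i - 1] < p[i]) {
                out.append(' ');
            }
            out.append(test.charAt(i));
        }
        // Prints a guessed segmentation; on text this close to the
        // training data the dips tend to fall at true word breaks.
        System.out.println(out);
    }
}

A real model would use longer n-grams and search over all candidate
segmentations rather than this one-pass dip heuristic, but the point
stands: no dictionary is needed to find plausible word boundaries.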

Natural language has a structure that makes it easy to learn
incrementally from examples with a sufficiently powerful neural
network.  It must, because any feature that learners cannot pick up
will disappear from the language.


> >>>>>>> Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote
> Techniques for parsing artificial languages fail for natural
> languages
> because the parse depends on the meanings of the words, as in the
> following example:
> 
> - I ate pizza with pepperoni.
> - I ate pizza with a fork.
> - I ate pizza with a friend.
> <<<<<<<<<<<<<<<<<<<<
> 
> In the early days of AI, the O-O paradigm was not as sophisticated as
> it is today. The phenomenon in your example is well known in the O-O
> paradigm and is modeled by overloaded functions, which means that
> objects may have several functions with the same name but with
> different signatures.
> 
> eat(Food f)
> eat(Food f, List<SideDish> l)
> eat(Food f, List<Tool> l)
> eat(Food f, List<People> l)
> ...
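
For concreteness, the overloading idea might look like this in Java
(hypothetical classes restating the quoted signatures; note that in
real Java the quoted List<SideDish>/List<Tool>/List<People> variants
would collide after generic type erasure, so this sketch uses distinct
parameter types instead):

class Food {}
class SideDish {}
class Tool {}
class Person {}

class Eater {
    // Overloading: one name, several signatures.  The compiler picks
    // a meaning from the static types of the arguments, at compile
    // time rather than from context.
    void eat(Food f) {}
    void eat(Food f, SideDish topping) {}  // pizza with pepperoni
    void eat(Food f, Tool utensil) {}      // pizza with a fork
    void eat(Food f, Person companion) {}  // pizza with a friend
}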

This type of knowledge representation has been tried, and it leads to
a morass of rules and no insight into how children learn grammar.  We
do not know how many grammar rules there are, but the number probably
exceeds the size of our vocabulary, given how long grammar takes to
learn.

> I think it is clear that there are representations like classes,
> objects, relations between objects, and attributes of objects.
> 
> But the crucial questions are:
> How did we, and how do we, build our O-O models?
> How did the brain create abstract concepts like "ball" and "milk"?
> How do we find classes, objects, and relations?

We need to understand how children learn grammar without any concept of
what a noun or a verb is.  Also, how do people learn hierarchical
relationships before they learn what a hierarchy is?

1. Jusczyk, Peter W. (1996), "Investigations of the word segmentation
abilities of infants", 4th Intl. Conf. on Spoken Language Processing
(ICSLP), Vol. 3, pp. 1561-1564.

2. http://cs.fit.edu/~mmahoney/dissertation/lex1.html


-- Matt Mahoney, [EMAIL PROTECTED]
