>>>>>>>>>>>> Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote

Actually that's only true in artificial languages.  Children learn
words with semantic content like "ball" and "milk" before they learn
function words like "the" and "of", in spite of their higher frequency.

<<<<<<<<<<<<

Before children learn words and their meanings, they first have to learn to
recognize the sounds of those words. And even if they use words like "with",
"of", and "the" later, they must be able to separate these function words and
relation words from object words before they learn any word.
But separating words means classifying words, and that requires knowledge of
grammar to a certain degree.




>>>>>>> Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote
Techniques for parsing artificial languages fail for natural languages
because the parse depends on the meanings of the words, as in the
following example:

- I ate pizza with pepperoni.
- I ate pizza with a fork.
- I ate pizza with a friend.
<<<<<<<<<<<<<<<<<<<<

In the days of early AI, the O-O paradigm was not as sophisticated as it is
today. The phenomenon in your example is well known in the O-O paradigm and is
modeled by overloaded methods, which means that
objects may have several methods with the same name but with different
signatures:

eat(Food f)
eat(Food f, List<SideDish> sides)
eat(Food f, List<Tool> tools)
eat(Food f, List<Person> companions)
...

Maybe this example is oversimplified, but I think it shows that
the O-O paradigm is powerful enough to model very complex domains.
In fact, nearly every piece of software developed today uses the O-O paradigm
with great success. And the domains are manifold: from banking processes
through motor control in cars to simulations of black holes and the big bang.
We can do it all with O-O based models.
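To make the sketch above concrete, here is a minimal, runnable version of the idea. The class names (Food, SideDish, Tool, Person) are invented for this illustration; the point is only that the compiler resolves one method name `eat` to three different implementations from the static types of the arguments, just as the object of "with" selects the reading of the sentence:

```java
// Toy illustration of overload resolution as a stand-in for
// "parse depends on word meaning". All class names are made up.
public class PizzaExample {
    record Food(String name) {}
    record SideDish(String name) {}
    record Tool(String name) {}
    record Person(String name) {}

    // Same name, three signatures: the argument's type picks the overload.
    static String eat(Food f, SideDish s) { return "topping: " + s.name(); }
    static String eat(Food f, Tool t)     { return "instrument: " + t.name(); }
    static String eat(Food f, Person p)   { return "companion: " + p.name(); }

    public static void main(String[] args) {
        Food pizza = new Food("pizza");
        System.out.println(eat(pizza, new SideDish("pepperoni")));
        System.out.println(eat(pizza, new Tool("fork")));
        System.out.println(eat(pizza, new Person("a friend")));
    }
}
```

Of course this dodges the hard part of the original problem: the compiler is handed the types, whereas a language learner must first discover that "pepperoni", "fork", and "friend" belong to different categories at all.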



>>>>>>>>>>>>>>>>> Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote

> (Matthias Heger wrote)
> But it is a matter of fact that we use an O-O like model in the
> top-levels of our world.
> You can see this also from language grammar. Subjects, objects,
> predicates, adjectives have their counterparts in the O-O paradigm.

This is the false path of AI that so many have followed.  It seems so
obvious that high level knowledge has a compact representation like
Loves(John, Mary) that is easily represented on a 1960's era computer. 
We can just fill in the low level knowledge later.  This is completely
backwards from the way people learn.  ---
  Learning is from simple to complex, training one layer at a
time.  
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

I do not think that we should try to write a ready-to-use O-O model of the
world for AGI. Instead, I think we agree that there are underlying layers of
models which are not O-O like and which we do not yet understand, but which
are necessary to understand how our brain creates O-O like models of the
world.


I think it is clear that there are representations like classes, objects,
relations between objects, and attributes of objects.

But the crucial questions are:
How did we build, and how do we still build, our O-O models?
How did the brain create abstract concepts like "ball" and "milk"?
How do we find classes, objects, and relations?



>>>>>>>>>> Matt Mahoney [mailto:[EMAIL PROTECTED]  wrote
We shy away from a neural approach because a
straightforward brain-sized neural network simulation would require
10^15 bits of memory and 10^16 operations per second.  We note long
term memory has 10^9 bits of complexity [1], so surely we can do
better. But so far we have not, nor have we any explanation why it
takes a million synapses to store one bit of information.

<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

I think one reason for the apparent waste of memory is fault tolerance.
Neurons die continually throughout life, so there is a need for a lot of
redundancy in the brain.

The second reason is that the brain is very strong at associations. If some
patterns are active in the brain (or, in O-O language, we could say classes),
then further patterns (classes) become active, and so on. So the physical
representation of these patterns extends over many neurons. You cannot
precisely locate the representation of a simple class, because the
representations superpose each other across many neurons. I think the ability
to find associations between patterns costs the brain a lot of resources. But
of course this ability is one of the most fruitful ones in human-like
intelligence and seems to be necessary for creativity.
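Both points (redundancy that survives dead neurons, and patterns superposed across shared weights) can be seen in a toy associative memory. The sketch below is my own illustration, a tiny Hopfield-style network, not a claim about how the brain actually does it: every weight carries a mixture of all stored patterns, yet a noisy cue still recalls the right one, even after a fraction of the "synapses" is silenced.

```java
import java.util.Arrays;
import java.util.Random;

// Toy Hopfield-style associative memory. Every weight superposes ALL
// stored patterns, so no single pattern can be located in the weights,
// and recall survives losing a fraction of them (fault tolerance).
public class HopfieldSketch {
    final int n;
    final double[][] w;

    HopfieldSketch(int n) { this.n = n; this.w = new double[n][n]; }

    // Hebbian storage: add the pattern's outer product onto the weights.
    void store(int[] p) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j) w[i][j] += p[i] * p[j];
    }

    // Sequential threshold updates; settles into the nearest stored pattern.
    int[] recall(int[] cue, int sweeps) {
        int[] s = cue.clone();
        for (int t = 0; t < sweeps; t++)
            for (int i = 0; i < n; i++) {
                double h = 0;
                for (int j = 0; j < n; j++) h += w[i][j] * s[j];
                s[i] = h >= 0 ? 1 : -1;
            }
        return s;
    }

    // Crude lesion: silence a random fraction of the weights.
    void lesion(double fraction, Random rnd) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (rnd.nextDouble() < fraction) w[i][j] = 0;
    }

    // Store 3 random patterns in 64 units, corrupt 8 bits of a cue,
    // and check recall before and after cutting 20% of the weights.
    static boolean[] demo() {
        Random rnd = new Random(42);
        int n = 64;
        HopfieldSketch net = new HopfieldSketch(n);
        int[][] patterns = new int[3][n];
        for (int[] p : patterns)
            for (int i = 0; i < n; i++) p[i] = rnd.nextBoolean() ? 1 : -1;
        for (int[] p : patterns) net.store(p);

        int[] cue = patterns[0].clone();
        for (int i = 0; i < 8; i++) cue[i] = -cue[i];
        boolean cleanOk = Arrays.equals(net.recall(cue, 5), patterns[0]);
        net.lesion(0.20, rnd);
        boolean lesionedOk = Arrays.equals(net.recall(cue, 5), patterns[0]);
        return new boolean[] { cleanOk, lesionedOk };
    }

    public static void main(String[] args) {
        boolean[] ok = demo();
        System.out.println("recall with intact weights:  " + ok[0]);
        System.out.println("recall with 20% weights cut: " + ok[1]);
    }
}
```

The redundancy is what makes the lesion survivable, which fits the argument that the brain's "million synapses per bit" is partly the price of fault tolerance and association, not pure waste.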



