I think it is even more complicated. The flow of signals in the brain does
not move only from low levels to high levels; the modules communicate in
both directions. And as far as I know, there is already evidence for this
from cognitive science.

If you want to recognize objects in pictures, you need to find the edges or
boundaries. But the other direction works too: if you already know the
object, because someone tells you what is in the picture or because you use
other knowledge about the picture, then it is easier for you to detect the
edges of that object.
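
To make the two directions concrete, here is a minimal sketch in Python
(all names and thresholds are my own illustration, not taken from any
particular vision system): a bottom-up edge detector whose acceptance
threshold is lowered wherever a top-down object hypothesis predicts a
boundary.

# Hypothetical sketch of bidirectional processing: a top-down object
# hypothesis lowers the evidence threshold for bottom-up edge detection.
# All names and numbers are illustrative assumptions, not a real system.

def detect_edges(gradient_map, predicted_boundary, base_threshold=0.6):
    """Return the set of pixels accepted as edges.

    gradient_map:       dict mapping pixel -> bottom-up edge strength in [0, 1]
    predicted_boundary: set of pixels where a known object hypothesis
                        expects a boundary (the top-down signal)
    """
    edges = set()
    for pixel, strength in gradient_map.items():
        # Weak evidence is enough where the object model already expects an edge.
        threshold = base_threshold * (0.5 if pixel in predicted_boundary else 1.0)
        if strength >= threshold:
            edges.add(pixel)
    return edges

grad = {(0, 0): 0.4, (0, 1): 0.7, (1, 0): 0.3}
print(detect_edges(grad, predicted_boundary={(0, 0)}))  # (0, 0) passes despite weak evidence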

A thought experiment is a good idea.
Let's say we have a robot in the garden and ask it:
How many apples are on the tree?

The robot is assumed to be experienced, i.e. it should have a sufficient
world model to understand and answer the question.

I make this assumption at this point because first we have to answer the
question of where we want to go. In the following, I describe a
hypothetical process in the robot's brain. Note that I assume the robot
has learned most of this process (classes, interactions of objects) from
past experience. But of course, some classes and information flows it must
have had from its first day on.

OK. The robot receives the sound wave, and its low-level modules try to
recognize known patterns in this wave.

First it recognizes a voice pattern.

This triggers a voice object, which in turn triggers different objects:
for example, a speech object, an information object, a person object, and
perhaps many other objects.


The person object analyzes the sound wave only to determine who is
speaking. The speech object only tries to figure out which language is
spoken. But here there is already a trick. The person object detects that
the voice comes from the person Matt, and the person object has the value
"English" in its attribute "language". The objects inform each other in
parallel about their values, so the speech object receives the value
"English" from the person object. This makes it easier for the speech
object to recognize the language, because it can use a good hypothesis and
will activate certain English tester objects. All these objects perform
their own analysis and use information about the results of other objects.
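
A minimal sketch of this parallel exchange of hypotheses (class, attribute
and value names are my own illustration, not a proposal for a concrete
architecture): the person object publishes its "language" attribute, and
the speech object uses it as a prior before running its own analysis.

# Illustrative sketch: objects analyze the same input in parallel and
# share their attribute values; all names and values are assumptions.

class Blackboard:
    """Very small shared store through which the objects inform each other."""
    def __init__(self):
        self.values = {}
    def publish(self, key, value):
        self.values[key] = value
    def read(self, key):
        return self.values.get(key)

class PersonObject:
    def __init__(self):
        self.attributes = {}
    def analyze(self, sound_wave, board):
        # Pretend voice analysis: the wave matches the stored voice of Matt.
        self.attributes["speaker"] = "Matt"
        self.attributes["language"] = "English"  # known attribute of Matt
        board.publish("language", self.attributes["language"])

class SpeechObject:
    def __init__(self):
        self.language = None
    def analyze(self, sound_wave, board):
        # Use the value published by the person object as a hypothesis.
        hint = board.read("language")
        candidates = [hint] if hint else ["English", "German", "French"]
        # Only the "tester objects" for the hinted language get activated.
        self.language = candidates[0]

board = Blackboard()
person, speech = PersonObject(), SpeechObject()
person.analyze(sound_wave=None, board=board)
speech.analyze(sound_wave=None, board=board)
print(speech.language)  # -> English, obtained via the person object's hint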

After a short time, certain important objects are active:

A question object of the type "quantity question".
Word objects of different grammar types with the values:
How
Many
APPLES
APPLIES
Are
On
The
Tree

There is something special about the words APPLES and APPLIES.
They have the same position attribute value (= third word in the question),
and each has a probability value of 50%.
This means that the robot is not quite sure whether the third word was
APPLES or APPLIES.
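
One simple way to represent this ambiguity (again only an illustrative
sketch with invented names): two word objects that share the same position
attribute and split the probability mass for that position.

# Sketch: two competing word hypotheses for the same position in the
# question. The probability values express the robot's uncertainty.

class WordObject:
    def __init__(self, value, grammar_type, position, probability):
        self.value = value
        self.grammar_type = grammar_type
        self.position = position        # index of the word in the question
        self.probability = probability  # confidence in this hypothesis

apples  = WordObject("APPLES",  "noun", position=3, probability=0.5)
applies = WordObject("APPLIES", "verb", position=3, probability=0.5)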

The question object is already a higher-level object. It does not use the
sound wave as its input but the set of active word objects.

The question object contains a subject object, which itself contains a
GrammarSubject object and a GivenHints object. It has to decide whether
the subject is APPLES or TREE.
The robot knows from past experience that subjects of quantity questions
are in the plural. For any attribute of any object there is a setter
method with a learnable validate function. So the subject object accepts
only the word APPLES for its GrammarSubject object.

This fact also increases the probability value of the word APPLES and
decreases the probability for APPLIES.
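
Here is a sketch of what such a setter with a learnable validate function
could look like (the plural check merely stands in for whatever rule the
robot would actually have learned; all names are invented): the subject
object only accepts plural nouns, and accepting or rejecting a word feeds
back into that word's probability.

# Sketch: every attribute has a setter guarded by a learnable validate
# function. The learned rule here is "subjects of quantity questions are
# plural nouns". Names and numbers are illustrative assumptions.

from types import SimpleNamespace

def learned_subject_validator(word):
    # Crude stand-in for a learned plural-noun test.
    return word.grammar_type == "noun" and word.value.endswith("S")

class SubjectObject:
    def __init__(self, validator):
        self.validator = validator
        self.grammar_subject = None

    def set_grammar_subject(self, word):
        if self.validator(word):
            self.grammar_subject = word
            # Top-down feedback: being accepted as the subject makes this
            # word hypothesis more likely.
            word.probability = min(1.0, word.probability + 0.3)
            return True
        # Being rejected makes the competing hypothesis less likely.
        word.probability = max(0.0, word.probability - 0.3)
        return False

applies = SimpleNamespace(value="APPLIES", grammar_type="verb", probability=0.5)
apples  = SimpleNamespace(value="APPLES",  grammar_type="noun", probability=0.5)

subject = SubjectObject(learned_subject_validator)
for candidate in (applies, apples):
    subject.set_grammar_subject(candidate)

print(subject.grammar_subject.value)            # APPLES
print(apples.probability, applies.probability)  # APPLES up, APPLIES down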

Finally, the robot has the complete question object, which activates a
goal object: Answer the question!

This was just the low level. At this point the robot must understand what
it really has to do.

It knows from experience that it gets a reward if it answers the active
question object whenever a corresponding goal object is active.

The answer to a quantity question must be a number.
The number is the result of a counting process that corresponds to the
subject of the quantity question.
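
As a minimal sketch of that last step (again with purely illustrative
names): the goal derived from a quantity question is a counting task over
the objects named by the question's subject.

# Sketch: answering a quantity question means running a counting process
# over the objects named by the question's subject. Names are assumptions.

def answer_quantity_question(subject_word, perceived_objects):
    """Count the perceived objects that match the subject of the question."""
    target_class = subject_word.rstrip("S").lower()  # "APPLES" -> "apple"
    return sum(1 for obj in perceived_objects if obj == target_class)

# The robot's visual modules would fill this list while scanning the tree.
seen = ["apple", "apple", "leaf", "apple", "branch"]
print(answer_quantity_question("APPLES", seen))  # -> 3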

OK. We are now at one of the medium levels of AGI. And I already wonder
how our robot is supposed to have learned the low level I have described
so far. I stop here because everything becomes too complex at this point.

But these thought experiments are absolutely necessary if we want to
create AGI.





-----Original Message-----
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Saturday, 3 May 2008 01:27
To: agi@v2.listbox.com
Subject: [agi] Re: AW: Language learning

--- "Dr. Matthias Heger" <[EMAIL PROTECTED]> wrote:
> So the medium layers of AGI will be the most difficult layers.

I think if you try to integrate a structured or O-O knowledge base at
the top and a signal processing or neural perceptual/motor system at
the bottom, then you are right.  We can do a thought experiment to
estimate its cost.  Put a human in the middle and ask how much effort
or knowledge is required.  An example would be translating a low-level
natural language question to a high level query in SQL or Cycl or
whatever formal language the KB uses.

I think you can see that for a formal representation of common sense
knowledge, the skill required for this interface is at a higher level
than the knowledge actually represented at the top level.  If
this knowledge was stored in the human brain, then it could be
retrieved faster, and by someone who had no special skills in
understanding a formal language.

But there are still some applications where this design makes sense. 
One example would be a calculator.  At the low level, you have a
question like "how many square inches in a third of an acre?"  The
middle level converts this to an equation and punches the numbers into
the top level calculator.  This is preferable to the human doing the
arithmetic.  A database would be another example.

Where it doesn't make sense is when the top level is doing something
that humans are already good at.  It would make more sense to figure
out how humans learn and represent common sense instead of guessing. 
We can do experiments in cognitive psychology.  What can people learn?
remember? perceive?


-- Matt Mahoney, [EMAIL PROTECTED]

