Hi,

So my guess is that focusing on the practical level for building an AGI
system is sufficient, and it's easier than focusing on very abstract
levels. When you have a system that can e.g. play soccer, tie shoelaces,
build fences, throw objects to hit other objects, walk through a terrain
to a spot, cooperate with other systems in achieving these practical goals...

I agree that the human brain achieved the physical-world-focused
capabilities you mention in a way that then made it relatively easy
for it to learn more abstract capabilities.

HOWEVER, it doesn't follow that any computer system that can achieve
these physical-world capabilities will also be able to learn more
abstract capabilities relatively easily...

In creating AI systems, there is a great tendency to overfit one's
system to the test tasks at hand.

Thus, for instance, humans may well use a wide range of cognitive
abilities in playing soccer -- but essentially all the current work on
robot soccer is useless for AGI, because it approaches soccer in a
manner that has nothing to do with general intelligence.

I agree that achieving capabilities like playing soccer and building
fences **within an AI system that is obviously extensible to encompass
more abstract capabilities** is a huge step toward AGI.  But achieving
these capabilities within an AI system, per se, means nothing about
progress toward AGI.

So if someone has an AI architecture, and says it will play soccer and
build fences, but cannot explain to me how/why this architecture will
generalize to handle more abstract and interesting capabilities --
then I will not be that excited about their work.

Thus, I believe that it makes sense to talk about more abstract
cognitive capabilities at the AGI design stage, not just about these
"primitive" capabilities.

Of course, I also feel that focusing solely on abstract capabilities
like language and logic, to the exclusion of more primitive
sensorimotor-focused capabilities, has caused a lot of harm in AI.  So
far, systems that can perform moderately interesting tricks in the
former domains (proving simple theorems, parsing simple sentences,
etc.) have also been useless in terms of moving us in an AGI
direction.  This was the classic error of GOFAI systems.  However, in
reaction to this error, I feel that a bunch of contemporary
researchers (especially in the cognitive robotics domain) have fallen
into the opposite error and begun creating systems that are overfit to
specific sensorimotor-focused tasks....

-- Ben
