On Dec 26 Ben Goertzel said:

>> One basic problem is what's known as "symbol grounding".  This
>> means that an AI system can't handle semantics, language-based
>> cognition or even advanced syntax if it doesn't understand the
>> relationships between its linguistic tokens and patterns in the
>> nonlinguistic world.

I guess I'm still having trouble with the concept of grounding.  Suppose
I teach/encode a bot with 99% of the knowledge about hydrogen using
facts and information available in books and on the web.  It is now an
idiot savant: it knows all about hydrogen and nothing about anything
else, and it is not grounded.  But suppose I then examine the knowledge
learned about hydrogen for other mentioned topics like gases, elements,
water, atoms, etc., and teach/encode 99% of the knowledge on those
topics to the bot.  The bot is still an idiot savant, but less so; isn't
it better grounded?  A certain amount of grounding, I think, has
occurred by providing knowledge of related concepts.

If we repeat this process again, we may say the program is an idiot
savant in chemistry.

Each time we repeat the process, are we not grounding the previous
knowledge further?  The bot can now reason and respond to questions
about more than just hydrogen; it now has an English representation of
the relationships between hydrogen and other related concepts in the
physical world.
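Just to make the process I'm describing concrete, here is a rough Python
sketch of the expansion loop I have in mind.  The tiny TOY_SOURCE table
and the function name are only illustrative stand-ins for whatever
book/web extraction a real bot would use, not any particular system's
API:

# Rough sketch only: expand the bot's knowledge outward from a seed topic
# by repeatedly teaching it the related topics mentioned in what it has
# already learned.  TOY_SOURCE stands in for the real books/web sources.

from collections import deque

TOY_SOURCE = {
    "hydrogen": (["hydrogen is a colorless gas", "hydrogen is an element"],
                 ["gases", "elements"]),
    "gases":    (["gases expand to fill their container"], ["elements"]),
    "elements": (["elements are made of atoms"], ["atoms"]),
    "atoms":    (["atoms contain protons and electrons"], []),
}

def ground_by_expansion(seed, max_rounds=3):
    knowledge = {}                      # topic -> facts learned so far
    frontier = deque([(seed, 0)])
    while frontier:
        topic, depth = frontier.popleft()
        if topic in knowledge or depth > max_rounds or topic not in TOY_SOURCE:
            continue
        facts, related = TOY_SOURCE[topic]
        knowledge[topic] = facts        # the bot "learns" this topic
        for r in related:               # each pass widens the semantic net
            frontier.append((r, depth + 1))
    return knowledge

# ground_by_expansion("hydrogen") comes back with facts about hydrogen,
# gases, elements and atoms, i.e. the "idiot savant in chemistry" stage.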

If we were to teach someone such as Helen Keller, with very limited
sensory inputs, would we not be attempting to do the same thing?

Humans, of course, do not learn in this exhaustive manner.  We get a
shotgun bombardment of knowledge from all types of media on all manner
of subjects, and we pursue additional knowledge about the things that
interest us.  The more detailed our knowledge in any given area, the
greater we say our expertise is.  Initially we will be better grounded
than a bot, because as children we learn a little bit about a whole lot
of things, so anything new we learn we attempt to tie into our semantic
network.

When I think, I think in English.  Yes, at some level below my conscious
awareness these English thoughts are electrochemically encoded, but
consciously I reason and remember in my native tongue, or I retrieve a
sensory image, multimedia if you will.

If someone tells me that "A kinipsa is a terrible plorid", I attempt to
determine what a kinipsa and a plorid are so that I may ground this
concept and interconnect it correctly within my existing semantic
network.  If a bot is taught to pursue new knowledge and ground the
unknown terms within its existing semantic net by putting the goals
"Find out what a plorid is" and "Find out what a kinipsa is" on its list
of short-term goals, then it will ask questions and seek to ground
itself as a human would!
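A very small sketch of what I mean by putting those goals on the bot's
agenda, with made-up names (known_terms, short_term_goals) for whatever
the real semantic net and goal list look like:

import re

# Sketch: any word in the input that the bot's semantic net does not know
# becomes a "Find out what a ... is" entry on its short-term goal list.
def queue_grounding_goals(sentence, known_terms, short_term_goals):
    for word in re.findall(r"[a-z]+", sentence.lower()):
        if word not in known_terms:
            goal = "Find out what a %s is" % word
            if goal not in short_term_goals:
                short_term_goals.append(goal)
    return short_term_goals

known = {"a", "is", "terrible"}
goals = queue_grounding_goals("A kinipsa is a terrible plorid", known, [])
# goals -> ["Find out what a kinipsa is", "Find out what a plorid is"]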

I will agree that today's bots are not grounded, because they are idiot
savants and lack the broad-based, high-level knowledge with which to
ground any given fact or concept.  But if I am correct in my thinking,
this is the same problem that Helen Keller's teacher faced: teaching
Helen one concept at a time until she had enough simple information or
knowledge to build more complex knowledge and concepts upon.

When a child learns to speak he does not have a large dictionary to draw
on to tell him that "mice" is the plural of "mouse".  No rule will tell
him that.  He has to learn it.  He will say "mouses" and someone will
correct him.  It gets added to his NLP database as an exception to the
rule.  A human has limited storage, so a rule learned by generalizing
from experience is a shortcut to learning and remembering all the plural
forms of nouns.  In an AGI we can give the intelligence certain learning
advantages, such as these dictionaries and lists of synonym sets, which
do not take that much storage in the computer.
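Something like the following is what I have in mind: a general
pluralization rule plus an exception table that grows whenever the bot
gets corrected.  The rule and the starting exceptions here are just
illustrative:

# Sketch: crude general pluralization rule plus a learned exception table.
PLURAL_EXCEPTIONS = {"mouse": "mice", "child": "children", "foot": "feet"}

def pluralize(noun):
    if noun in PLURAL_EXCEPTIONS:              # learned exception wins
        return PLURAL_EXCEPTIONS[noun]
    if noun.endswith(("s", "x", "ch", "sh")):
        return noun + "es"                     # crude general rule
    return noun + "s"

def learn_correction(noun, corrected_plural):
    # called when someone corrects the bot, e.g. "mouses" -> "mice"
    PLURAL_EXCEPTIONS[noun] = corrected_plural

# pluralize("mouse") -> "mice"; pluralize("box") -> "boxes"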

I also think that children do not deal with syntax.  They have heard a
statement similar to what they want to express and have it stored as a
template in their minds.  I think we cut and paste what we are trying to
say into what we think is the correct template, and then read it back to
ourselves to see if it sounds like other things we have heard and seems
to make sense.  For people who have to learn a foreign language as
adults this is difficult, because they tend to think in their first
language and commingle the templates from the original and the new
language.  But because we do not parse what we hear and read strictly by
the laws of syntax, we have little trouble understanding many of these
ungrammatical utterances.
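A toy version of that cut-and-paste idea, with invented templates and
slot names, might look like this:

# Sketch: stored sentence patterns with slots, filled in from what the
# speaker wants to say.  The templates and slots are invented examples.
TEMPLATES = [
    "I want {thing}",
    "Can I have {thing} please",
    "{person} gave me {thing}",
]

def say(template_index, **slots):
    return TEMPLATES[template_index].format(**slots)

# say(1, thing="a cookie")                     -> "Can I have a cookie please"
# say(2, person="Grandma", thing="two mouses") -> "Grandma gave me two mouses"
# The "read it back" check would then compare the result against sentences
# the child has heard before.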
 
 


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of [EMAIL PROTECTED]
Sent: Thursday, December 26, 2002 11:03 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] Early Apps.



On 26 Dec 2002 at 10:32, Gary Miller wrote:

> On Dec. 26 Alan Grimes said:
> 
> >> According to my rule of thumb,
> >> "If it has a natural language database it is wrong", 
>  
> Alan, I can see, based on the current generation of bot technology,
> why one would feel this way.
> 
> I can also see people having the view that biological systems learn 
> from scratch so that AI systems should be able to also.
> 
> Neither of these arguments is particularly persuasive, though, based
> on what I've developed to date.
> 
> Do you have other arguments against a NLP knowledge based approach 
> that you could share with me.

One basic problem is what's known as "symbol grounding".  This means
that an AI system can't handle semantics, language-based cognition or
even advanced syntax if it doesn't understand the relationships between
its linguistic tokens and patterns in the nonlinguistic world.

However, this problem doesn't totally rule out use of a linguistic DB.
One could imagine 
supplying a system with a linguistic DB and having it learn groundings
for the words and 
structures in the DB...

Another problem is what I call the "knowledge richness" problem.

The basic idea here is that if a system learns something through
experience, it is then likely to know that something in an adaptable,
adjustable way, because it knows not only the thing itself but also a
bunch of other things in the neighborhood of that thing, various useful
components and superstructures of the thing, etc.  It knows these other
related things as side effects of the learning process.

On the other hand, if a system learns something through reading out of a
DB, it doesn't 
have this surround of related things to draw on, so it will be far less
able to adapt and build 
on that thing it's learned...

My view is that a linguistic DB is not necessarily the kiss of death for
an AGI system -- but I 
don't think you can build an AGI system that has a DB as its *primary
source* of linguistic 
knowledge.  If an AGI system uses a linguistic DB as one among many
sources of linguistic 
information -- and the others are mostly experience-based -- then it may
still work, and the 
linguistic DB may potentially accelerate aspects of its learning..

Ben G 
