The crux of the problem is this: what should
be the fundamental elements used for knowledge representation? Should they
be statements in predicate or term logic, maybe with the addition of
probabilities and confidence? Should they be neural-net-type learned
functional mappings? Or should they be some kind of modeling system that
can replicate the three-dimensional, temporal physical world (like a global
weather-modeling system)? These are just some of the options, but isn't
this choice the foundation for creating real understanding
in AI?
Several people wrote:

James: The attribution below should be Jef, but
I will respond as well.
Original quote:
> But the computer still doesn't understand the sentence, because it
> doesn't know what cats, mats and the act of sitting _are_. (The best
> test of such understanding is not language - it's having the
> computer draw an animation of the action.)
Russell, I agree, but
it might be clearer if we point out that humans don't understand the
world either. We just process these symbols within a more encompassing
context. - Jef
Me, James: "Understand" is probably a red-flag
word, for computers and humans alike. We have no good judge of what
is understood, and I try not to use that term generally, as it devolves into
vague psycho-talk and nothing concrete.
But basically, a
computer can do one of two things to "show" that it has "understood"
something:
1. Show its internal representation. You said "cat";
I know that a cat is a mammal that is blah, and blah, and does blah; some cats I
know are blah.
2. Act upon the information. If "Bring me the cat"
is followed by the robot bringing you the cat, it obviously "understands"
what you mean.
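The first kind of "showing" could be sketched very roughly in code. This is a hypothetical toy, not any real AI system: the `frames` dictionary and its slots are illustrative assumptions standing in for a richer frame memory.

```python
# A minimal sketch of "show its internal representation": given a word,
# dump the facts the system holds about it. The frame contents here are
# invented placeholders for a real frame system's knowledge.
frames = {
    "cat": {
        "is_a": "mammal",
        "can": ["sit", "purr", "hunt"],
        "known_instances": ["Whiskers", "Tom"],
    },
}

def show_representation(word):
    """Return a list of facts the system 'knows' about word, or None."""
    frame = frames.get(word)
    if frame is None:
        return None
    facts = [f"{word} is_a {frame['is_a']}"]
    facts += [f"{word} can {act}" for act in frame["can"]]
    facts += [f"known instance: {name}" for name in frame["known_instances"]]
    return facts

for fact in show_representation("cat"):
    print(fact)
```

A system that can produce nothing for a word has, in this narrow sense, nothing to "show" for it.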
I believe a very rich frame system of memory would
be the start of a fairly good grasp of what something "means" and allow
some basic "understanding".
At the basest level a "cat" can only mean a
certain few things; maybe the WordNet ontology could be used for filtering that
out. Then, depending on context and usage, we can possibly narrow it down,
and use the frames for some basic pattern matching to narrow it to the
one. And if it can't be narrowed successfully, something else should
happen: either model both (or multiple) objects/processes internally, or
get outside intervention where available. We should remember that there are
almost always humans around, and in my opinion they SHOULD be used. If
they are standing by the robot, they can be quizzed directly; if it is
not an immediate decision to be made, ask them via email or a phone call or
something, and try to learn the information given so that next time it will not
have to ask.

EX: "Bring me the cat." Confusion in the AI,
seeing 4 cats in front of it. AI: "Which cat do you want?"
Resolve the ambiguity through the interface.
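The ask-and-learn loop above can be sketched as follows. This is a hedged illustration under stated assumptions: the candidate list and the `ask_human` callback are made-up stand-ins, not a real robot or WordNet API.

```python
# Sketch of the disambiguation flow: if more than one referent survives
# filtering, ask a human through whatever interface is available, then
# cache the answer so the same question need not be asked next time.
learned_preferences = {}  # request -> previously resolved referent

def resolve(request, candidates, ask_human):
    if request in learned_preferences:
        return learned_preferences[request]   # learned from a past answer
    if len(candidates) == 1:
        return candidates[0]                  # no ambiguity to resolve
    # Ambiguous: get outside intervention where available.
    choice = ask_human(f"Which one do you mean? {candidates}")
    learned_preferences[request] = choice     # learn for next time
    return choice

cats = ["black cat", "tabby cat", "white cat", "calico cat"]
answers = iter(["tabby cat"])                 # human answers exactly once
first = resolve("bring me the cat", cats, lambda q: next(answers))
second = resolve("bring me the cat", cats, lambda q: next(answers))
print(first, second)  # second call is served from the cache, no question asked
```

The point is the caching: outside intervention is a scarce resource, so each answer should reduce future ambiguity.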
James Ratcliff

Eric Baum wrote:
James> Jef Allbright <[EMAIL PROTECTED]> wrote:
James> Russell Wallace wrote:
>> Syntactic ambiguity isn't the problem. The reason computers don't
>> understand English is nothing to do with syntax; it's because they
>> don't understand the world.
>>
>> It's easy to parse "The cat sat on the mat" into
>>   sit
>>     cat
>>     on
>>       mat
>>     past
>>
>> But the computer still doesn't understand the sentence, because it
>> doesn't know what cats, mats and the act of sitting _are_. (The
>> best test of such understanding is not language - it's having the
>> computer draw an animation of the action.)
James> Russell, I agree, but it might be clearer if we point out that
James> humans don't understand the world either. We just process these
James> symbols within a more encompassing context.
James, I would like to know what you mean by "understand". In my view,
what humans do is the example we have of understanding; the word should
be defined so as to have a reasonably precise meaning, and to include
the observed phenomenon.

You apparently have something else in mind by understanding.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303