Edward W. Porter wrote:
This is in response to Josh Storrs' Monday, October 15, 2007 3:02 PM
post and Richard Loosemore’s Mon 10/15/2007 1:57 PM post.
I misunderstood you, Josh. I thought you were saying semantics could
be a type of grounding. It appears you were saying that grounding
requires direct experience, but that grounding is only one (although
perhaps the best) possible way of providing semantic meaning. Am I correct?
If I may interject: a lot of confusion in this field occurs when the
term "semantics" is introduced in a way that implies that it has a clear
meaning [sic]. Some in the AI community do indeed talk about
"semantics" as if the definition is sharply defined, but the more you
probe it, the more problems surface, until eventually you can get to the
point where you are chasing your own tail.
So if someone tries to talk about what the grounding problem is by
defining it in terms of semantics, I start to wonder what they're
putting on their cornflakes in the morning. The trivial senses of
"semantics" don't apply, and the deeper senses are so vague that they
are almost synonymous with grounding.
Moving on to what you say below, your comment about AI systems that have
been cloned is, I think, exactly correct. If something gets grounded
symbols as a result of the right kind of interaction with the world,
there is nothing to stop another system from also having grounded
symbols, provided it takes its knowledge structures AND knowledge
acquisition mechanisms from the first system. Just because System 2 did
not acquire its own knowledge from its own personal experience would not
be good grounds [sorry] for saying it is not grounded.
Richard Loosemore
I would tend to differ with the notion that grounding relates only to
what you directly experience. (Of course, it appears to be a
definitional issue, so there is probably no theoretical right or
wrong.) I consider what I read, hear in lectures, and see in videos
about science or other abstract fields such as patent law to be
experience, even though the operative content in such experiences is
derived second-, third-, fourth-, or more-hand.
In Richard Loosemore’s above-mentioned informative post he implied
that, according to Harnad, a system that can interpret its own symbols
is grounded. I think this is more important to my concept of grounding
than where the information that lets the system perform such important
interpretation comes from. To me the important distinction is whether
we are dealing with relatively naked symbols, or with symbols that
have many relations with other symbols and patterns (something like
those Pei Wang was talking about) that let the system use the symbols
in an intelligent way.
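To make that distinction concrete, here is a hypothetical sketch (not taken from any of the systems discussed, and the class and relation names are my own invention): a "naked" symbol has no connections at all, while an embedded symbol participates in relations with other symbols, which is what gives a system something to work with when interpreting it.

```python
# Hypothetical sketch: a "naked" symbol versus a symbol embedded in a
# web of relations to other symbols, in the spirit of the distinction
# drawn above. All names here are illustrative assumptions.

class Symbol:
    def __init__(self, name):
        self.name = name
        self.relations = {}  # relation label -> set of related Symbols

    def relate(self, label, other):
        self.relations.setdefault(label, set()).add(other)

    def can_interpret(self):
        # Crude stand-in for interpretability: a symbol is usable here
        # only if it participates in at least one relation.
        return bool(self.relations)

naked = Symbol("gavagai")  # no connections at all

dog = Symbol("dog")
animal = Symbol("animal")
barks = Symbol("barks")
dog.relate("is_a", animal)
dog.relate("can", barks)

print(naked.can_interpret())  # False: nothing to use it with
print(dog.can_interpret())    # True: usable via its relations
```

On this toy view, what matters is the relational structure the symbol sits in, not how that structure got there.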
Usually for such relations and patterns to be useful in a world, they
have to have come directly or indirectly from experience of that world.
But again, it is not clear to me that they have to come first-hand.
Presumably, if the AGI equivalents of personal computers are being
mass produced by the millions 10 to 20 years from now, and if they
come out of the box with significant world knowledge that has been
copied into their non-volatile memory bit-for-bit from world knowledge
that came directly from the experience of many learning machines and
indirectly from massive sophisticated NL readings of large bodies of
text and visual recognition of large image and video databases, then I
would consider most of the symbols in such a brand-new personal AGI to
be grounded -- even though they have not been derived from any
experience of the particular personal AGI itself -- so long as they
had meaning to the personal AGI itself.
It seems ridiculous to say that one could have two identical large
knowledge bases of experiential knowledge, each containing millions of
identically interconnected symbols and patterns, in two AGIs having
identical hardware, and claim that the symbols in one were grounded
but those in the other were not, because of the purely historical
distinction that the sensing to learn such knowledge was performed on
only one of the two identical systems.
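A minimal sketch of that argument, under my own illustrative assumptions (the function names and the tiny knowledge-base format are invented for the example): one system builds its knowledge from simulated first-hand "experience," the other receives a bit-for-bit copy, and no test applied to the knowledge alone can tell them apart.

```python
# Hypothetical sketch: two "AGIs" whose knowledge bases are identical,
# one built from (simulated) first-hand experience and the other
# copied at the factory. Only their histories differ.

import copy

def learn_from_experience(observations):
    """Stand-in for a learning machine: build subject -> relation ->
    objects maps from (subject, relation, object) observations."""
    kb = {}
    for subj, rel, obj in observations:
        kb.setdefault(subj, {}).setdefault(rel, set()).add(obj)
    return kb

experience = [("fire", "causes", "heat"), ("heat", "melts", "ice")]

system1_kb = learn_from_experience(experience)  # learned first-hand
system2_kb = copy.deepcopy(system1_kb)          # copied bit-for-bit

print(system1_kb == system2_kb)  # True: indistinguishable by content
```

Any interpretation the first system can perform with its knowledge, the second can perform identically; the only difference between them is historical.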
-----
This list is sponsored by AGIRI: http://www.agiri.org/email