Gary Miller wrote:
***
I guess I'm still having trouble with the concept of grounding.  If I
teach/encode a bot with 99% of the knowledge about hydrogen using facts
and information available in books and on the web, it is now an idiot
savant: it knows all about hydrogen and nothing about anything else, and
it is not grounded.  But suppose I then examine the knowledge learned
about hydrogen for other mentioned topics like gases, elements, water,
atoms, etc., and teach/encode 99% of the knowledge on these topics to the
bot.  The bot is still an idiot savant, but less so -- isn't it better
grounded?  A certain amount of grounding, I think, has occurred by
providing knowledge of related concepts.

If we repeat this process again, we may say the program is an idiot
savant in chemistry.

...

I will agree that today's bots are not grounded, because they are idiot
savants and lack the broad-based, high-level knowledge with which to
ground any given fact or concept.  But if I am correct in my thinking,
this is the same problem Helen Keller's teacher faced in teaching Helen
one concept at a time, until Helen had enough simple information or
knowledge to build more complex knowledge and concepts upon.
***

What you're describing is the "Expert System" approach to AI, closely
related to the "common sense" approach to AI.

Cycorp takes this point of view, and so have a whole lot of other AI
projects in the last few decades...

I certainly believe there's some truth to it.  If you encoded a chemistry
textbook in formal logic, fed it into an AI system, and let the AI system do
a lot of probabilistic reasoning and associating on the information, then
you'd have a lot of speculative uncertain "intuitive" knowledge generated in
the system, complementing the "hard" knowledge that was explicitly encoded.
If you encoded a physics textbook and a bio textbook as well, you could have
the system generate uncertain, intuitive cross-domain knowledge in the same
way.
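
Just to make the idea concrete, here is a toy sketch in Python -- nothing
like Novamente's actual machinery; the relations and confidence numbers are
invented for illustration -- of how hand-encoded "hard" facts could seed
speculative, lower-confidence cross-domain associations:

from itertools import product

# "Hard" knowledge, encoded by hand: (subject, relation, object, confidence).
# These triples and numbers are made up for the example.
facts = [
    ("hydrogen", "is_a", "element", 1.0),         # chemistry textbook
    ("element", "composed_of", "atoms", 1.0),
    ("water", "contains", "hydrogen", 1.0),
    ("cell", "contains", "water", 1.0),           # biology textbook
    ("combustion", "consumes", "hydrogen", 0.9),  # physics/chemistry overlap
]

def derive(facts):
    """Chain pairs of facts into new, lower-confidence 'intuitive' links.

    The rule is deliberately crude: if A relates to B and B relates to C,
    guess that A is weakly associated with C, with confidence equal to the
    product of the two source confidences, discounted by 0.5.
    """
    derived = []
    for (a, r1, b, p1), (b2, r2, c, p2) in product(facts, facts):
        if b == b2 and a != c:
            derived.append((a, "associated_with", c, round(0.5 * p1 * p2, 3)))
    return derived

for triple in derive(facts):
    print(triple)
# e.g. ('water', 'associated_with', 'element', 0.5) -- speculative, not encoded

Real probabilistic reasoning would of course use far subtler inference rules
and truth-value handling; the point is only that the derived links are
uncertain guesses layered on top of the explicitly encoded knowledge.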

In fact, we are doing something like this in Novamente now, for a
bioinformatics application.  We're feeding in information from a dozen
different bio databases and letting the system reason on the integrated
knowledge....  right now we're at the "feeding in" stage.

Unlike some anti-symbolic-AI extremists, I think this sort of thing can be
*useful* for AGI.  But I think it can only be a part of the picture;
experience-based learning, I think, is a lot more essential....

I don't think that a pragmatically-achievable amount of formally-encoded
knowledge is going to be enough to allow a computer system to think deeply
and creatively about any domain -- even a technical, scientific domain.
What's missing, among other things, is the intricate interlinking between
declarative and procedural knowledge.  When humans learn a domain, we learn
not only facts but also techniques for thinking, problem-solving,
experimenting and presenting information ... and we learn these in such a
way that they're all mixed up with the facts....  In theory, I believe, all
this stuff could be formalized -- but the formalization isn't pragmatically
possible, because we humans don't explicitly know the techniques we
use for thinking, problem-solving, etc. etc.  In large part, we do them
tacitly, and we learn them tacitly...

When we learn a new domain declaratively, we start off by transferring some
of our tacit knowledge from other domains to that new domain.  Then, we
gradually develop new tacit knowledge of that domain, based on experience
working in the domain...

I think that this tacit knowledge (lots of uncertain knowledge, mixing
declarative & procedural) has got to be there as a foundation, for a system
to really deploy factual knowledge in a creative & fluent way...


***
 I think we cut and paste what we are trying to
say into what we think is the correct template and then read it back to
ourselves to see if it sounds like other things we have heard and seems
to make sense.
***

I think this is a good description of one among many processes involved in
language generation...

I also think there's some more complex unconscious inference going on than
your statement implies.  It's not a matter of "cutting and pasting into a
template"; it's a matter of recursively applying a bunch of syntactic
rules that build up complex linguistic forms from simpler ones.  The
syntactic buildup process has parallels to the thought-buildup process, and
the two sometimes proceed in synchrony, which is one of the reasons
formulating thoughts in language can help clarify them.
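
To make the contrast with simple template-filling concrete, here is a toy
recursive expansion over a handful of invented syntactic rules, in Python.
The grammar and vocabulary are obviously made up, and a real generation
system would be driven by inference rather than random choice, but it shows
complex forms being built up recursively from simpler ones:

import random

# A tiny, invented context-free grammar: each symbol maps to a list of
# possible expansions (lists of child symbols or terminal words).
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "Adj": [["grounded"], ["symbolic"]],
    "N":   [["bot"], ["concept"], ["fact"]],
    "V":   [["learns"], ["encodes"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively expanding one of its rule bodies."""
    if symbol not in GRAMMAR:          # terminal word: bottom of the recursion
        return [symbol]
    body = random.choice(GRAMMAR[symbol])
    words = []
    for child in body:
        words.extend(generate(child))  # same buildup, one level down
    return words

print(" ".join(generate()))   # e.g. "the bot encodes a grounded concept"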

I dealt with some of these issues -- on a conceptual, not an
implementational, level -- in a chapter of my book "From Complexity to
Creativity", entitled "Fractals and Sentence Production":

http://www.goertzel.org/books/complex/ch9.html

If I were to rewrite that chapter now, it would have a lot of stuff on
probabilistic inference & unification grammars -- richer and better details,
enhanced by the particular math underlying Novamente and a few years' more
experience playing with computational linguistics ... but it would have the
same theme.
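
For the unification-grammar side, a bare-bones sketch of feature-structure
unification -- my own simplification for illustration, not the particular
math underlying Novamente -- looks like this:

def unify(fs1, fs2):
    """Merge two feature structures (nested dicts); return None on a clash."""
    result = dict(fs1)
    for key, val2 in fs2.items():
        if key not in result:
            result[key] = val2
        elif isinstance(result[key], dict) and isinstance(val2, dict):
            sub = unify(result[key], val2)
            if sub is None:
                return None            # nested clash
            result[key] = sub
        elif result[key] != val2:
            return None                # atomic clash, e.g. sg vs. pl
    return result

noun = {"cat": "N", "agr": {"num": "sg", "per": 3}}
verb = {"agr": {"num": "sg"}}                   # verb constrains agreement
print(unify(noun, verb))                        # compatible: structures merge
print(unify(noun, {"agr": {"num": "pl"}}))      # None: singular vs. plural

One could imagine a probabilistic variant that attaches weights to
alternative structures instead of failing outright, but the same
merge-or-clash logic is the core operation.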

-- Ben
