Hi,

My concern about G0 is that the problem of integrating first order logic or
structured symbolic knowledge with language and sensory/motor data is
unsolved, even when augmented with weighted connections to represent
probability and/or confidence (e.g. fuzzy logic, Bayesian systems,
connectionist systems).

Interestingly, this is what Ari Heljakka and I have been working on
quite recently, in a Novamente context.

In the context of sensations and actions in the AGISim 3D simulation
world, I can assure you that integrating probabilistic term logic with
sensorimotor data and the output of a language parser actually does
work... though it is certainly complicated.

People have been working on weighted
graphs in various forms for over 20 years.  If there were an easy solution,
we would have found it by now.  I did not see any proposed solution in G0.

There is a solution in Novamente.  But indeed, it is not that easy.

In terms of interfacing with sensorimotor data, there is a "pattern
mining" layer that looks for patterns in sensorimotor data and exports
these patterns as predicates into the knowledge base, for reasoning to
act upon.  It is guided in its search for patterns by cognitive
knowledge.  It doesn't do anything that logical inference couldn't do,
but it is designed to look for simple sorts of repetitive
spatiotemporal patterns scalably and quickly.
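To make the idea concrete, here is a minimal sketch of what such a pattern-mining layer might do in its simplest form: scan a sensorimotor event stream for repeated short subsequences and export the frequent ones as predicates for the knowledge base. All names here (the event labels, the `FollowedBy` predicate) are hypothetical illustrations, not the actual Novamente implementation:

```python
from collections import Counter

def mine_spatiotemporal_patterns(events, window=2, min_count=2):
    """Slide a fixed-size window over a sensorimotor event stream,
    count repeated event subsequences, and export the frequent ones
    as predicate strings for a knowledge base to reason over."""
    counts = Counter()
    for i in range(len(events) - window + 1):
        counts[tuple(events[i:i + window])] += 1
    # Each sufficiently frequent pattern becomes a candidate predicate.
    return {
        f"FollowedBy({', '.join(pattern)})": n
        for pattern, n in counts.items()
        if n >= min_count
    }

# Hypothetical event stream from a simulated agent:
stream = ["see_ball", "reach", "grab", "see_ball", "reach", "grab", "drop"]
predicates = mine_spatiotemporal_patterns(stream)
# predicates == {"FollowedBy(see_ball, reach)": 2, "FollowedBy(reach, grab)": 2}
```

A real system would of course mine far richer spatiotemporal structure and would bias the search using cognitive knowledge, as described above; the point is just that the output is predicates that inference can act on directly.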

In terms of generating actions, there is a "predicate schematization"
component that uses some heuristics to translate logical knowledge
about actions and consequences into executable programs for carrying
out actions.
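A toy version of predicate schematization might look like the following: backward-chain through declarative rules of the form "executing action A when precondition P holds achieves effect E" to produce an executable action sequence. The rules and action names are invented for illustration; this is a sketch of the general idea, not the actual heuristics used:

```python
# Each rule: (precondition, action, effect), read as
# "executing `action` when `precondition` holds achieves `effect`".
RULES = [
    ("near_ball", "grab_ball", "holding_ball"),
    ("sees_ball", "walk_to_ball", "near_ball"),
]

def schematize(goal, facts, rules):
    """Backward-chain from a goal predicate to a sequence of
    primitive actions, given currently true facts."""
    if goal in facts:
        return []  # goal already satisfied, nothing to do
    for pre, action, effect in rules:
        if effect == goal:
            sub_plan = schematize(pre, facts, rules)
            if sub_plan is not None:
                return sub_plan + [action]
    return None  # no rule chain achieves the goal

plan = schematize("holding_ball", {"sees_ball"}, RULES)
# plan == ["walk_to_ball", "grab_ball"]
```

Turning probabilistic, uncertain logical knowledge into reliable procedures is messier than this deterministic toy suggests, but the shape of the translation is the same.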

None of this is tremendously easy but none of it is horribly
complicated either.  Honestly, this stuff is not as hard as the
problem of effectively controlling complex inference chains.
Integrating probability theory into logic is not that easy either
(three colleagues and I are about done with a 300 page book on this
subject), but the really hard thing is using probabilistic data
gathered from studying the past history of cognitions to help guide
future inference chains.  I.e., probability-driven inference control.
This is tough, though I think I know how to handle it.  Right now the
inferences Novamente is doing in an AGISim-control context are not
all that terribly complex (playing fetch, finding objects, etc.), so
inference control is manageable, but we will need to implement some of
our more sophisticated experience-based inference control designs to
allow it to do dramatically cleverer things...
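The flavor of probability-driven inference control can be sketched very simply: track how often each inference rule has led to a useful conclusion in past cognitions, and prefer rules with higher estimated success probability. This is a deliberately crude caricature (real inference control must condition on context, not just global rule statistics), and the rule names are only placeholders:

```python
class InferenceController:
    """Choose the next inference rule by its Laplace-smoothed past
    success rate, so that control is driven by experience rather
    than by a fixed rule ordering."""

    def __init__(self, rules):
        # stats[rule] = [successes, trials]; Laplace prior of 1/2.
        self.stats = {r: [1, 2] for r in rules}

    def choose(self):
        # Pick the rule with the highest estimated success probability.
        return max(self.stats, key=lambda r: self.stats[r][0] / self.stats[r][1])

    def record(self, rule, success):
        s, t = self.stats[rule]
        self.stats[rule] = [s + int(success), t + 1]

ctl = InferenceController(["deduction", "induction", "abduction"])
for _ in range(10):
    ctl.record("deduction", True)  # deduction keeps paying off
# ctl.choose() now returns "deduction"
```

The hard part, as noted above, is not this bookkeeping but learning which statistics of past inference actually predict the usefulness of a step in a new, long inference chain.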

First order logic is powerful, but that does not mean it is correct.  I
think it is an oversimplification, and we are discarding something essential
for the sake of computational efficiency.  The fact that you can represent
Kicks(x,y) means that you can represent nonsense statements like "ball kicks
boy".  This is not how people think.  A person reading such a statement will
probably reverse the order of the words because it makes more sense that
way.  How would a symbolic system do that?

Of course, it is very easy to logically represent the knowledge that
to kick in the literal sense, one must have feet or legs...
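For instance, such a selectional restriction is a one-line precondition check in any symbolic framework; a statement violating it is simply rejected rather than asserted. The ontology below is a made-up toy, just to show the mechanism:

```python
# Hypothetical mini-ontology: entities that have legs, and so can
# literally kick.
HAS_LEGS = {"boy", "girl", "dog"}

def kicks(agent, patient):
    """Assert Kicks(agent, patient) only if the agent satisfies the
    precondition of literal kicking: having legs (or feet)."""
    if agent not in HAS_LEGS:
        raise ValueError(f"{agent} cannot kick: it has no legs")
    return f"Kicks({agent}, {patient})"

kicks("boy", "ball")   # accepted
# kicks("ball", "boy") raises ValueError: the nonsense reading is blocked
```

A system could use the same check the other way around: when an input violates the constraint, try the permuted reading "Kicks(boy, ball)" instead, which is roughly what a human reader does.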

I think AGI will be solved when we do two things.  First, we must understand
what is going on in the human brain.  Second, we must build a system with
enough hardware to simulate it properly.

Clearly that is one route to AGI, but you have not argued that it is
the only route, nor even the best one.

Brain emulation is for wimps, I say!!!  ;-)

Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
