On 6/11/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
I'll try to answer this and Mike Tintner's question at the same time. The
typical GOFAI engine over the past decades has had a layer structure
something like this:

Problem-specific assertions
Inference engine/database
Lisp

on top of the machine and OS. Now it turns out that this is plenty to
build a
system that can configure VAX computers or do NLP at the level of "Why did
you put the red block on the blue one?" or "What is the capital of the
largest country in North America?"
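
As a rough illustration (in Python rather than Lisp, with facts and a rule I
just made up -- this is not any real system's code), the stack looks like:
problem-specific assertions and rules on top, a generic inference engine in
the middle, and the base language underneath:

# Layer 3 (base language): Python standing in for Lisp.

# Layer 2 (generic inference engine): naive forward chaining over triples.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Layer 1 (problem-specific assertions and rules), in the spirit of the
# capital-of-the-largest-country example above.
facts = {("canada", "largest_country_in", "north_america"),
         ("ottawa", "capital_of", "canada")}
rules = [
    (frozenset({("canada", "largest_country_in", "north_america"),
                ("ottawa", "capital_of", "canada")}),
     ("ottawa", "answer_to", "capital_of_largest_na_country")),
]

print(forward_chain(facts, rules))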

The problem is that this leaves your "symbols" as atomic tokens in a
logic-like environment, whose meaning is determined entirely from above,
i.e.
solely by virtue of their placement in expressions (or equivalently, links
to
other symbols in a "semantic network").
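
To make that concrete (a toy of my own, with made-up relation names): a
token's entire "content" is its edges to other tokens, and nothing else:

# Minimal sketch of meaning-by-placement: "block1" is an atomic token whose
# only content is its links to other atomic tokens.
semantic_net = {
    ("block1", "is_a"):     "block",
    ("block1", "color_of"): "red",
    ("block2", "is_a"):     "block",
    ("block2", "color_of"): "blue",
    ("block1", "on"):       "block2",
}

def facts_about(token):
    """Everything the system 'knows' about a token is its outgoing links."""
    return {rel: obj for (subj, rel), obj in semantic_net.items() if subj == token}

print(facts_about("block1"))
# {'is_a': 'block', 'color_of': 'red', 'on': 'block2'}
# Rename the token consistently everywhere and nothing observable changes;
# the symbol has no meaning below the level of these expressions.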

These formulations of a top layer were largely built on introspection, as was
logic (and the Turing machine!). So chances are that a reasonable top layer
could be built like that -- but the underpinnings are something a lot more
capable than token-expression pattern matching. There's a big gap between the
top layer(s) as found in AI programs and the bottom layers as found in
existing programming systems. This is what I call "Formalist Float" in the
book.

It's not that any existing level is wrong, but there aren't enough of them,
so the higher ones aren't being built on the right primitives in current
systems. Word-level concepts in the mind are much more elastic and plastic
than logic tokens.
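
One way to see that contrast (again a toy of mine, with invented feature
names): a logic token either matches or it doesn't, while a concept held as
graded features can match partially and stretch to cover new cases:

# Crisp token: identity is all there is.
print("mug" == "cup")          # False -- no notion of "close enough"

# Graded alternative: concepts as weighted features; overlap is continuous.
cup = {"container": 1.0, "handle": 0.9, "drink_from": 1.0, "ceramic": 0.6}
mug = {"container": 1.0, "handle": 1.0, "drink_from": 1.0, "ceramic": 0.9}

def similarity(a, b):
    keys = set(a) | set(b)
    shared = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    total = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return shared / total

print(round(similarity(cup, mug), 2))   # ~0.9: elastic, not all-or-nothing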

You can build a factory where everything is made top-down, constructed
with
full attention to all its details. But if you try to build a farm that
way,
you'll do a huge amount of work and not get much -- your crops and
livestock
have to grow for themselves (and it's still a huge amount of work!).

I think that the intermediate levels in the brain are built of robotic body
controllers, mechanisms with a flavor much like cybernetics, simply because
that's what evolution had to work with. That's my working assumption in my
experiments, anyway.

Hi Josh,

You haven't explained how your "layered" approach works, but I think you
correctly exposed the problem of representation with logic-tokens.  My
solution to this is not exactly layers; I see it as a difference between
"organic" and "inorganic" knowledge bases.  In Cyc, for example, the facts
you enter into the KB stay exactly as you entered them, with the logic-tokens
you chose.  This is what I call "inorganic".  In an "organic" KB the facts
are *assimilated* into the KB via truth maintenance (an old-fashioned term),
belief revision, "cognitive dissonance resolution", etc.  I think that
mechanism is at the core of AGI.

YKY
