I'll try to answer this and Mike Tintner's question at the same time. The 
typical GOFAI engine over the past decades has had a layer structure 
something like this:

Problem-specific assertions
Inference engine/database
Lisp

on top of the machine and OS. Now it turns out that this is plenty to build a 
system that can configure VAX computers (as R1/XCON did) or do NLP at the 
level of "Why did you put the red block on the blue one?" or "What is the 
capital of the largest country in North America?"

The problem is that this leaves your "symbols" as atomic tokens in a 
logic-like environment, whose meaning is determined entirely from above, i.e. 
solely by virtue of their placement in expressions (or equivalently, links to 
other symbols in a "semantic network").
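
To make that concrete, here's a toy sketch of the kind of engine I mean 
(in Python rather than Lisp, with all the predicates and facts invented 
for the example -- this is not any particular system). Every symbol is an 
opaque token, and the only "meaning" a token has is the set of assertions 
it appears in:

    # A toy GOFAI-style fact base. Every "symbol" is an opaque string
    # token; nothing below this level knows what the tokens denote.
    # Predicates and facts are invented for illustration.
    facts = {
        ("in-region", "canada", "north-america"),
        ("in-region", "usa", "north-america"),
        ("larger-than", "canada", "usa"),
        ("capital-of", "canada", "ottawa"),
        ("capital-of", "usa", "washington"),
    }

    def query(pattern):
        """Match a pattern against every fact; '?'-prefixed tokens are
        variables. Returns one binding dict per matching fact."""
        matches = []
        for fact in facts:
            if len(fact) != len(pattern):
                continue
            binding = {}
            for p, f in zip(pattern, fact):
                if p.startswith("?"):
                    if binding.setdefault(p, f) != f:
                        break           # variable already bound elsewhere
                elif p != f:
                    break               # constant token doesn't match
            else:
                matches.append(binding)
        return matches

    # "What is the capital of the largest country in North America?"
    candidates = [b["?c"] for b in query(("in-region", "?c", "north-america"))]
    largest = next(c for c in candidates
                   if not any(("larger-than", o, c) in facts
                              for o in candidates))
    print(query(("capital-of", largest, "?cap")))   # [{'?cap': 'ottawa'}]

Rename every token consistently and the program behaves identically -- 
which is exactly the sense in which the meaning is determined "from 
above," by placement in expressions, and by nothing else.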

These formulations of a top layer were largely built on introspection, as was 
logic (and the Turing machine!). So chances are that a reasonable top layer 
could be built like that -- but the underpinnings need to be something a lot 
more capable than token-expression pattern matching. There's a big gap between 
the top layer(s) as found in AI programs and the bottom layers as found in 
existing programming systems. This is what I call "Formalist Float" in the 
book.

It's not that any existing level is wrong, but there aren't enough of them, so 
in current systems the higher ones aren't built on the right primitives. 
Word-level concepts in the mind are much more elastic and plastic than logic 
tokens.

You can build a factory where everything is made top-down, constructed with 
full attention to all its details. But if you try to build a farm that way, 
you'll do a huge amount of work and not get much -- your crops and livestock 
have to grow for themselves (and it's still a huge amount of work!).

I think that the intermediate levels in the brain are built of robotic body 
controllers, mechanisms with a flavor much like cybernetics, simply because 
that's what evolution had to work with. That's my working assumption in my 
experiments, anyway.
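
To give a feel for what I mean by a cybernetics-flavored controller, 
here's a minimal sketch (gains, dynamics, and the 500-step horizon all 
invented for illustration): a servo loop that continuously corrects a 
joint toward a setpoint, with no symbolic tokens anywhere in it.

    # A toy feedback controller of the kind cybernetics studied:
    # continuous error-driven correction. Gains and dynamics are
    # invented for illustration.
    def servo(setpoint, position, velocity, kp=4.0, kd=4.0):
        """Proportional-derivative torque command toward the setpoint."""
        error = setpoint - position
        return kp * error - kd * velocity

    # Simulate a unit-inertia joint relaxing toward 1.0 radian.
    pos, vel, dt = 0.0, 0.0, 0.01
    for _ in range(500):
        torque = servo(1.0, pos, vel)
        vel += torque * dt
        pos += vel * dt
    print(round(pos, 3))    # settles near 1.0

All the "knowledge" here lives in the gains and the wiring of the loop, 
not in expressions to be matched; that's the kind of primitive I suspect 
the higher levels are actually built on.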

Josh


On Monday 11 June 2007 04:41:13 am Joshua Fox wrote:
> Josh,
> 
> Your point about layering makes perfect sense.
> 
> I just ordered your book, but, impatient as I am, could I ask a question
> about this, though I've asked a similar question before: Why have the
> elite of intelligent and open-minded leading AI researchers not attempted a
> multi-layered approach?
> 
> Joshua
