It seems that your 'layered hierarchy' approach is very similar to Rod
Brooks' subsumption architecture. This has been used to good effect in
generating natural behaviours in robotics, but has not been very useful
in developing higher-level cognition.

Or maybe you are suggesting something else?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Alan Grimes
Sent: 17 January 2003 08:09
To: [EMAIL PROTECTED]
Subject: [agi] Unmasking Intelligence.

om 

I seem to have fallen into the list-ecological niche of good discussion
starter. In that capacity I write the following. 

I attended my first session of CS480: Introduction to Artificial
Intelligence this morning, and it got me thinking about something
that has started to bug me...

What if one of the techniques already in use were the real solution, but
we just don't know it 'cuz it has never been integrated into a system
that would behave in a way we could recognise?

In studying neuroscience I have learned that human intelligence has
evolved as a layer on top of lower behavior generators.

Instead of a simple model like: 

SENSES >> BRAIN >> BEHAVIOR...

we instead have a hierarchy of systems stacked on top of each other: 


  Higher areas of the neocortex
            /\  \/
 Primitive areas of the neocortex
            /\  \/
        Hippocampus
            /\  \/
          Midbrain
            /\  \/
         Brain stem
            /\  \/
SENSES >> SPINAL CORD >> BEHAVIOR

The book that I have been reading uses the term "modulate" to
describe this process. It says that each system in the stack above
"modulates" the layers above and below it. In this context, to modulate
means to add information/complexity to the signal.

This creates a problem for the AI researcher: it is not at all clear
what the top layers do, because their "signal" is obscured by the
functions of the lower centers...
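
As a purely illustrative sketch of the idea (in Python; the layer
names, fields, and behavior here are my own invention, not a claim
about the brain), each layer takes the signal accumulated so far and
adds its own information to it before behavior is emitted:

  class Layer:
      def modulate(self, signal):
          # default: pass the signal through unchanged
          return signal

  class SpinalReflexes(Layer):
      def modulate(self, signal):
          # lowest layer: crude, fast stimulus -> response mapping
          return {**signal, "reflex": signal.get("stimulus")}

  class Cortex(Layer):
      def modulate(self, signal):
          # higher layer: adds context/goals on top of what is already there
          return {**signal, "plan": "approach" if signal.get("reflex") else "wait"}

  def behave(stimulus, layers):
      signal = {"stimulus": stimulus}
      for layer in layers:            # lower layers run first, higher layers modulate
          signal = layer.modulate(signal)
      return signal                   # the "behavior" is the fully modulated signal

  print(behave("light touch", [SpinalReflexes(), Cortex()]))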

Hopefully recent work in growing cortical tissues on silicon will help
elucidate what the heck is going on! =P 


The other side of this problem is equally interesting.

Let's say that we were perfectly successful in creating an artificial
mind-matrix and in training this mind-matrix in numerous radically
different programming paradigms. 

Let's say it knew Smalltalk, Pascal, Lisp, Prolog, assembler, and Forth.
Let's say that it also has a general knowledge of math, computer
organization, and algorithmics. 

We intend to direct this matrix to do either of the following: 

1. Optimize a program in language A.
2. Translate a program in language A into a specific language A' from
the list above. 


This is an example of a motivation problem. We need a way to motivate
the system to do the translation even though there is no way to
specifically instruct the matrix to do so.

What is needed is something akin to the lower levels in the above
diagram: a way to organize the behavior of the matrix in a goal-oriented
fashion.
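
A hedged sketch of what such a lower, goal-organizing level might look
like: a goal queue that keeps nudging the matrix until each goal reports
satisfied. The Goal/MotivationLayer names and the matrix.attempt()
interface are entirely hypothetical stand-ins, not a proposal for how
the matrix itself works:

  from dataclasses import dataclass, field

  @dataclass
  class Goal:
      description: str            # e.g. "translate program P from Pascal to Lisp"
      satisfied: bool = False

  @dataclass
  class MotivationLayer:
      goals: list = field(default_factory=list)

      def push(self, goal):
          self.goals.append(goal)

      def drive(self, matrix):
          # keep nudging the cognitive matrix until every goal reports satisfied
          for goal in self.goals:
              while not goal.satisfied:
                  goal.satisfied = matrix.attempt(goal.description)

  class StubMatrix:
      # stand-in for the trained mind-matrix; it just pretends to succeed
      def attempt(self, description):
          print("working on:", description)
          return True

  layer = MotivationLayer()
  layer.push(Goal("translate program P from Pascal to Lisp"))
  layer.drive(StubMatrix())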

The term I have given this process of writing programs to trigger events
in a cognitive matrix is "cybernetic programming", where the program is a
set of concepts rather than a specific program as in traditional
programming paradigms. Cybernetic programs can use fragments of a
cognitive matrix as well, as we see in the limbic association areas of
the brain such as the insular cortex, the cingulate gyrus, and the
parahippocampal gyrus.

In my thinking about this subject I have come up with the following
principles of cybernetic programming (and mind-organization in general),
which probably aren't of any use. =P

SYMMETRY: All output channels are associated with at least one
input/feedback mechanism.

SEMANTIC RELATIVITY: The primary semantic foundation of the system is
the input and output systems. (Almost everything is expressed in terms
of input and output at some level.)

TEMPORALITY: Both inputs and outputs have a semantically significant
temporal component. 
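
As a minimal data-structure sketch of these three principles (the field
names and the timestamping choice are mine, not anything from the
notes):

  import time
  from dataclasses import dataclass, field

  @dataclass
  class Channel:
      name: str
      history: list = field(default_factory=list)   # TEMPORALITY: time-stamped traffic

      def record(self, value):
          self.history.append((time.time(), value))

  @dataclass
  class IOPair:
      # SYMMETRY: every output channel is paired with at least one feedback/input channel
      output: Channel
      feedback: Channel

  # SEMANTIC RELATIVITY: whatever "meaning" the system has is grounded in
  # these input/output channels rather than in free-floating internal symbols.
  motor = IOPair(output=Channel("motor.command"), feedback=Channel("motor.proprioception"))
  motor.output.record("extend arm")
  motor.feedback.record("arm at 30 degrees")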

Recently, I've added another observation to my folder full of
handwritten notes: 

CHANNEL INDEPENDENCE PROBLEM: The naive implementation of abstraction
(semantic relativity), such as in Forth, tends to be strongly bound to
an exact or close pattern match on a specific input channel. The brain
is clearly more flexible than this, so there must be a way to express
abstractions as an independent relation that can be applied to any
input or output channel. (This is the heart of "pattern recognition",
etc.)

-- 
Linux programmers: the only people in the world who know how to make
Microsoft programmers look competent.
http://users.rcn.com/alangrimes/
