Hi,

My own approach is to design a cognitive architecture that has "elements"
that look somewhat like neurons in some respects, but which have some
properties that make it easy for them to combine in such a way as to
represent abstract knowledge.

This sounds nice ... but what are these properties?

The simplest way to understand that is to
see them as atoms that can make transient links to one another:  the
overall design of the system is rather like a molecular soup in which
concepts (elements) can combine to represent complex, structured ideas,
and in which the ways that they combine (the bonds) are mediated by other
concepts that represent relationships.  There are simple mechanisms that
allow the system to elaborate these representations in various ways,
according to context, so for example a new concept can form from an
existing simple concept by allowing one or more of the relationships in the
structure to become slippable:  a bachelor is an unmarried man (simple
definition), but then if someone were to stretch the concept to breaking
point by referring to a woman as a bachelor, my existing bachelor element
is allowed to unpack itself (= retrieve the original, extended set of
elements and relationships out of which it was formed) and substitute for
one of the main concepts a new one that in some sense plays the same role
(and role-playing in this sense is just a matter of being able to form
similar relationships to the old).  Thus:  "man" gets ripped out and
replaced by "woman", but with a hint of the idea that the speaker was
trying to imply some other features of bachelorhood that would normally be
characteristic of a man but which in this case are true of the woman.
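As a rough illustration of the mechanism described above, here is a minimal sketch in Python. All the names (Concept, unpack, slip) are purely illustrative, not taken from any actual implementation; the point is just the shape of the operation: a composite concept retrieves its defining relationships and substitutes a new role-filler for an old one.

```python
# Hypothetical sketch of concept atoms with transient relationship links.
# A composite concept can "unpack" into its defining relations, and a
# "slip" builds a stretched variant by substituting a new filler for a
# role the original filler played.

class Concept:
    def __init__(self, name, relations=None):
        self.name = name
        # relations: list of (relation_label, target Concept) pairs
        self.relations = list(relations or [])

    def unpack(self):
        """Retrieve the original set of (relation, concept) pairs
        out of which this concept was formed."""
        return list(self.relations)


def slip(composite, old_filler, new_filler):
    """Form a new concept by substituting new_filler wherever
    old_filler plays a role in the composite's definition."""
    new_rels = [(rel, new_filler if target is old_filler else target)
                for rel, target in composite.unpack()]
    return Concept(composite.name + "*", new_rels)


man = Concept("man")
woman = Concept("woman")
unmarried = Concept("unmarried")
bachelor = Concept("bachelor",
                   [("is-a", man), ("has-property", unmarried)])

# Stretch the concept: "man" gets ripped out and replaced by "woman"
stretched = slip(bachelor, man, woman)
print([(rel, t.name) for rel, t in stretched.relations])
# → [('is-a', 'woman'), ('has-property', 'unmarried')]
```

The substitute "plays the same role" here only in the trivial sense of occupying the same relationship slots; a real system would of course also need the hinted-at transfer of associated features.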

At the level of this description, I can say that all the things you
describe in the above paragraph occur within Novamente.  We have nodes
and relationships and some specific algorithms for locally acting on
the network of nodes and relationships to form new ones, and among
these algorithms are ones that do the things you mention above...

However, I don't know how to make a really useful bridge between this
kind of concept-level formalism and neuron-level structures and
dynamics.  I'm sure it is POSSIBLE but I don't know how to do it.  I
don't need to build such a bridge for my own Novamente work but I am
curious about how it's done....

Although a circuit made of neurons can in principle be used to design any
kind of architecture (including architectures that are completely and
utterly non-neural), my own philosophy is that the likely design of those
circuits will nevertheless retain many characteristics of the behavior of
neurons themselves ... relaxation, for instance.

It is clear that some simple aspects like relaxation and Hebbian
learning transfer easily from the neuron level to the concept level.
I'm with you there!
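For concreteness, the kind of easy transfer meant here can be sketched with a standard Hebbian update applied to a link between two concept-level nodes (the rule and parameter names are generic, not specific to either system): the link weight grows when both nodes are active together.

```python
# Minimal sketch of a Hebbian update at the concept level: the weight
# of the link between two nodes is strengthened in proportion to the
# product of their simultaneous activations (dw = eta * x * y).

def hebbian_update(w, x, y, eta=0.1):
    """Strengthen link weight w when the pre-node activation x and
    post-node activation y are high at the same time."""
    return w + eta * x * y


w = 0.0
# repeated co-activation of two concept nodes builds the link
for _ in range(5):
    w = hebbian_update(w, x=1.0, y=1.0)
print(round(w, 2))
# → 0.5
```

Exactly the same rule makes sense whether the nodes are neurons or concept-level elements, which is why this aspect transfers so easily between levels.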

My question pertained to more complex cognitive aspects related to
quantifiers, relationships-among-relationships, multi-variable
functions, and so forth -- and how they are ultimately grounded in the
neuron level in the brain.

So far nothing you have said has addressed this issue, though
everything you have said seems sensible to me...

So my "neurons" are not neurons, but collections of neurons.  Those
collections are still neuron-like, however, and because of that I am able
to have my connectionist cake and still eat the rich fruit of structured,
abstract representations.

Agree, and this aspect of your architecture is shared by Novamente and
Joshua Blue, among other approaches.

But the question remains (not necessary to answer, but interesting) of
how this more abstract sort of connectionist concept network
**considered in the context of the human brain** might emerge from the
brain's network of actual physical neurons.

-- Ben
