On Mon, Jun 9, 2008 at 11:20 PM, Ricky Loynd <[EMAIL PROTECTED]> wrote:
> Vladimir, that's a nice, tight overview of a design.  What drives the
> creation/deletion of nodes?
>

In the current design, skills are extended through relearning and
fine-tuning of existing circuits. Roughly, new memories are expected
to form at the mutual boundaries of areas of the network where the
usual activation patterns are produced. At these boundaries, unusual
combinations of different usual patterns are brought together; this
is captured in the concepts of boundary nodes and can subsequently be
imitated and generalized by them. This way, new memories can form
anywhere, depending on the typicality of activation in that area.
Of course, new memories overwrite something, but mainly the concepts
that participate in them; inactive concepts are changed very rarely.
The same piece of knowledge forms in many places along the boundary,
so there is redundancy, and since the network mainly imitates itself,
I expect more redundancy at other levels as well. Gradual introduction
of new nodes, either over the whole inference surface or around the
activity areas, may be useful.

Node removal is tricky. Strictly speaking, it is unnecessary and
provides only an optimization. There are two kinds of nodes that are
candidates for removal: nodes that are inactive and will remain so
indefinitely, and nodes that provide unnecessary redundancy. Redundant
nodes can be limited by globally bounding the amount of concurrent
activation. If such a limit is always present and changes only
slightly over time, the knowledge representation will adapt to keep
the necessary information within budget, and so won't produce too much
redundancy. Inactive nodes can be controlled by placing a requirement
on the recall dynamics of newly formed concepts: e.g. recall at least
once in x ticks, then at least once in 4x ticks, then in 16x ticks,
and so on. I plan to apply such a test to protecting nodes from
rewriting rather than from removal, with unprotected concepts having a
higher chance of being adjusted dramatically, capturing episodic
memories. Or maybe experiments will show that this is unnecessary:
for example, recalled concepts may produce enough redundancy through
secondary memories to preserve a skill even in the face of a
constant-rate risk of node reset.
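For illustration, a minimal sketch of that recall test, assuming
discrete time steps ("ticks") and the 4x growth factor from the
example above; RecallGuard and its interface are hypothetical names:

class RecallGuard:
    def __init__(self, created_at, base_interval, growth=4):
        self.growth = growth
        self.interval = base_interval
        self.deadline = created_at + base_interval  # must be recalled by this tick
        self.protected = True

    def on_recall(self, now):
        # Each timely recall extends protection by a longer interval:
        # x ticks, then 4x, then 16x, and so on.
        if self.protected and now <= self.deadline:
            self.interval *= self.growth
            self.deadline = now + self.interval

    def check(self, now):
        # Nodes that miss a deadline lose protection; they then have a
        # higher chance of being rewritten (capturing episodic memories)
        # or, in the removal variant, of being deleted.
        if now > self.deadline:
            self.protected = False
        return self.protected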

One of the reasons I use maximum margin clustering is that inference
needs to be resilient to changes in the network structure: when
something changes, a concept can adapt to that change if it only
pushes the concept's input slightly out of the usual range. This
allows the skillset to be adjusted at any level *locally*, without
losing functionality in other, dependent parts. The idea is to counter
the brittleness of software while preserving some of its expressive
power. (This kind of automatic programming is not at the core of the
design, nor is it an extension of the design; rather, it's another
perspective from which to view it.)
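To show what that resilience could amount to, here is a minimal
sketch assuming each concept recognizes input vectors through a
linear decision function left by the clustering step; the Concept
class, the perceptron-style nudge, and the numeric parameters are my
assumptions for illustration:

import numpy as np

class Concept:
    def __init__(self, w, b, margin=1.0, lr=0.1):
        self.w = np.asarray(w, dtype=float)  # separating hyperplane from clustering
        self.b = float(b)
        self.margin = margin                 # slack that absorbs small changes
        self.lr = lr

    def score(self, x):
        return float(np.dot(self.w, x) + self.b)

    def active(self, x):
        return self.score(x) > 0.0

    def adapt(self, x):
        # If a structural change elsewhere pushes a formerly-recognized
        # input just outside the boundary (but still within the margin),
        # nudge the hyperplane locally so dependent concepts keep seeing
        # the same activation.
        s = self.score(x)
        if -self.margin < s <= 0.0:
            self.w += self.lr * np.asarray(x, dtype=float)
            self.b += self.lr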

-- 
Vladimir Nesov
[EMAIL PROTECTED]

