Mark,

Hmmm....  In this conversation, we seem to be completely talking past
each other and not communicating meaningfully at all...

You say that

    In most "blackboard" systems (i.e. those where all processes share the
same collection of "active knowledge") and, more particularly, in 100% of
those that are generally considered to be well designed, the individual
processes are all forced to follow certain standardized rules (enforced by
the active knowledge collection itself, NOT the processes) so that the
processes not only don't step on each other, but they CAN'T step on each
other.  Due to these rules, the individual processes themselves can't have
ANY parameters that when tweaked can possibly cause them to interfere with
other processes.  If your design has this problem (i.e. that the active
knowledge collection does not adequately protect itself), then you have a
sub-optimal design.  "Blackboard" systems have been around for decades
longer than AGI systems (which are just a very complex sub-class of
"blackboard" systems) and there is a considerable body of work that pretty
definitively shows that any design with the behaviors that you are
describing CAN be optimized so that it doesn't exhibit those negative
behaviors without losing any functionality.

Apparently you are using the word "interfere" in a radically different
sense than I would in this context, because if I interpret the word
"interfere" in the way I naturally would, then the above paragraph
reads like complete insanity!!

The fact that different cognitive processes "interfere" with each
other [i.e., significantly and complexly affect each other's
activities] is NOT a flaw; it is intentional and it is VERY, VERY
NECESSARY.

AI architectures in which cognitive processes do not significantly and
complexly affect each other, via their interactions on a common data
store, are NOT going to be capable of achieving powerful AGI given
limited computational resources...

If classical blackboard systems have so much modularity that each
cognitive process can be tuned independently of the others, then very
likely these blackboard systems are incapable of giving rise to the
complex emergent dynamics that characterize intelligence.  This is
one of the reasons why Novamente is not a classical blackboard
system...

As an example, consider two cognitive processes, acting concurrently
on a common data store:

* concept creation [by blending existing concepts via various heuristics]
* probabilistic inference

The parameters of the concept creation process govern which sorts of
concepts tend to be created -- how different they tend to be from
existing concepts, how general they tend to be, how many created
concepts are related to current goals and how many are just "generally
interesting" etc.

The parameters of the probabilistic inference process govern what
sorts of inferences tend to be drawn -- including such aspects as how
speculative the inferences are, how much effort is spent on a few
highly complex and abstract inferences versus large masses of simple
inferences, etc.

And, it is very obvious that the parameters of these two cognitive
processes DO affect each other.  The kinds of concepts needed to
drive highly abstract inferences differ in various subtle ways
from the kinds of concepts needed to drive simple inferences, to give
just one example....  And if highly speculative inferences are to be
focused on, then wackier and more speculative "conceptual blends" are
going to be more valuable...  Etc., etc., etc.
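
To make this concrete, here is a deliberately cartoonish Python sketch of
two processes acting concurrently on a common data store.  The class names
and parameters are purely illustrative (this is NOT actual Novamente code,
and real cognitive processes are vastly more complicated); the only point
it shows is that the usefulness of one process's output depends on how the
other process is tuned:

import random

class DataStore:
    """Toy shared "blackboard": concepts plus the inferences drawn from them."""
    def __init__(self):
        self.concepts = []    # list of (name, abstractness) pairs, abstractness in [0, 1]
        self.inferences = []  # conclusions drawn so far

class ConceptCreation:
    """Blends existing material into new concepts.  The 'wackiness' parameter
    controls how abstract and far-out the created concepts tend to be."""
    def __init__(self, wackiness):
        self.wackiness = wackiness   # 0 = conservative blends, 1 = wild blends

    def step(self, store):
        abstractness = min(1.0, 0.1 + random.random() * self.wackiness)
        store.concepts.append(("blend-%d" % len(store.concepts), abstractness))

class ProbabilisticInference:
    """Draws conclusions from stored concepts.  The 'speculativeness' parameter
    controls how abstract a concept must be to serve as a useful premise."""
    def __init__(self, speculativeness):
        self.speculativeness = speculativeness

    def step(self, store):
        # A concept only feeds an inference if its abstractness roughly
        # matches the level of speculation this process is tuned for.
        for name, abstractness in store.concepts:
            if abs(abstractness - self.speculativeness) < 0.2:
                store.inferences.append("conclusion drawn from " + name)

store = DataStore()
creator = ConceptCreation(wackiness=0.2)                 # conservative blending ...
reasoner = ProbabilisticInference(speculativeness=0.9)   # ... but wildly speculative inference
for _ in range(100):
    creator.step(store)
    reasoner.step(store)

# No inferences get drawn at all, because the two parameter settings are
# mismatched: retuning EITHER parameter changes how useful the OTHER
# process's output is.  That coupling is the point.
print(len(store.inferences))

Obviously this is a toy, but even here there is no way to choose a "good"
value of wackiness without knowing the current value of speculativeness,
and vice versa.  Tuning the two processes independently only makes sense
if you deliberately design away this kind of coupling, which is exactly
what I am arguing against.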

The interactions between the parameter-tunings of different cognitive
processes are necessary in order to make the artificial mind function
effectively as a coherent whole.

If you don't agree with this, then indeed we have fundamentally
different intuitions about AGI design.  This is fine, but I want to
be clear that the complex interactions between Novamente cognitive
processes are not an accident, nor a compromise made for performance
reasons: they reflect a deliberate design decision, made because I
believe this kind of interaction is a critical aspect of general
intelligence given limited computational resources.

I am still not sure whether we really disagree, or are just using
words differently -- or some combination of the two.  Talking about
this kind of thing can be very, very hard, due in large part to the
lack of a commonly-understood, precisely-defined vocabulary.

-- Ben G
