Hi,

> > Actually, in attractor neural nets it's well-known that using random
> > asynchronous updating instead of deterministic synchronous updating
> > does NOT change the dynamics of a neural network significantly.  The
> > attractors are the same and the path of approach to an attractor is
> > about the same.  The order of updating turns out not to be a big deal
> > in ANN's.  It may be a bigger deal in backprop neural nets and the
> > like, but those sorts of "neural nets" are a lot further from
> > anything I'm interested in...
>
> I'd rather get rid of the notion of "attractor" altogether.  Though it
> may be useful for perception, in high-level cognition I don't see
> anything like it.  Of course, some beliefs are more stable than
> others, but are they states to which all processes converge?

You have a point, which is why I didn't use the term "attractor" in the
Hebbian Logic paper.

The results I cited about attractor neural nets have to do with attractors.

But in the brain, or in a Hebbian Logic network, you don't have
attractors -- what you have are "probabilistically invariant subsets of
state space", i.e. subsets of the system's state space with the property
that, once a system gets in there, it's likely to stay there a while.
Attractors are a limiting case of this kind of state-space-subset, and
they're a limiting case that doesn't occur in the cognitive domain.
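To put a toy example behind that (this is just my own illustration, not
anything from the paper): take a standard Hopfield-style net and add a
little update noise.  The stored pattern stops being a strict attractor,
but the set S of states within a couple of bits of it still satisfies
P(x(t+1) in S | x(t) in S) close to 1 -- which is all I mean by
"probabilistically invariant".  In Python:

# Toy sketch: a Hopfield-style net with noisy asynchronous updates.
# The stored pattern is not a strict attractor anymore, but the set S
# of states within Hamming distance 2 of it is "probabilistically
# invariant": the net tends to stay inside S once it gets there.

import numpy as np

rng = np.random.default_rng(0)
N = 64
pattern = rng.choice([-1, 1], size=N)     # one stored pattern
W = np.outer(pattern, pattern) / N        # Hebbian weight matrix
np.fill_diagonal(W, 0)

def noisy_async_step(x, flip_prob=0.02):
    """Update one randomly chosen neuron; occasionally flip it at random."""
    i = rng.integers(N)
    x = x.copy()
    if rng.random() < flip_prob:
        x[i] = -x[i]                      # noise: random flip
    else:
        x[i] = 1 if W[i] @ x >= 0 else -1 # usual threshold update
    return x

# Start inside S (two bits away from the pattern) and see how often we stay.
x = pattern.copy()
x[:2] = -x[:2]
inside, steps = 0, 5000
for _ in range(steps):
    x = noisy_async_step(x)
    if np.sum(x != pattern) <= 2:
        inside += 1
print("fraction of steps spent inside S:", inside / steps)

With the noise turned off (flip_prob=0), S collapses back down to the
strict-attractor case -- which is the sense in which attractors are a
limiting case of these subsets.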

> > Hmmm....  Pei, I don't see how to get NARS' truth value functions
> > out of an underlying neural network model.  I'd love to see the
> > details....  If truth value is not related to frequency nor to
> > synaptic conductance, then how is it reflected in the NN?
>
> What I mean is not that NARS, as a reasoning system, can be (partially
> or completely) implemented by a network, but that NARS can be seen as
> a network --- though different from conventional NN.
>
> I think NN is much better than traditional AI in its philosophy --- I
> like parallel processing, distributed representation (to a certain
> extent), incremental learning, competing results, and so on.  However,
> ironically, the techniques of NN are less flexible than symbolic AI.
> I don't like NN when it uses fixed network topology, has no semantics
> (and even claims it to be an advantage), takes the goal of learning as
> converging to a function (mapping input to output), does global
> updating, uses "activation" for both logical and control purposes, and
> so on.
>
> My way to combine the two paradigms is not to build a hybrid system
> that is part symbolic and part connectionist, but to build a unified
> system which is similar to symbolic AI in certain aspects, and similar
> to NN in some other aspects.

First, Novamente is not a hybrid system that's part symbolic and part
connectionist, either.

Webmind was, but Novamente isn't anymore.  There's no more association
spreading or activation spreading; these NN-like processes have been
replaced by specialized deployments of PTL (probabilistic reasoning
Novamente-style).

Novamente does hybridize a bunch of things: BOA learning, combinatory logic,
PTL inference, etc. ... but not any NN stuff....

Second, I don't advocate neural nets as an approach to AI, either.  The
approach has its merits, but overall I think NN's are a really
inefficient way to use von Neumann hardware.

If we knew enough to *really* emulate the brain's NN in software, then the
guidance offered by the brain would be so valuable as to offset the
inefficiency of implementing massively-parallel-wetware-oriented structures
and algorithms on von Neumann hardware.  But we don't know nearly enough
about the brain to make brain-emulating NN's; and the currently popular NN
architectures seem to satisfy neither the goal of brain emulation, nor the
goal of efficient/effective AI.

My point in articulating Hebbian Logic is NOT to propose it as an optimally
effective approach to AI, but rather to propose it as a conceptual solution
to the conceptual problem of: **How the hell do logical inference and
related stuff emerge from neural networks and other brainlike stuff?**
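To give the flavor of the kind of explanation I mean, here's a
deliberately cartoonish sketch (my own toy illustration, with made-up
numbers -- not the formalism from the paper): Hebbian co-activation
counting makes a link from assembly A to assembly B carry roughly
P(B|A), and chaining two such links yields a deduction-like estimate of
P(C|A), much as in probabilistic term logic:

# Cartoon of Hebbian Logic: link weights as conditional probabilities,
# learned by counting co-activations, then chained deductively.

import numpy as np

rng = np.random.default_rng(1)

# Simulated "neural assembly" activations over many moments in time.
T = 10000
A = rng.random(T) < 0.3                  # assembly A fires 30% of the time
B = np.where(A, rng.random(T) < 0.8,     # B tends to fire when A does
                rng.random(T) < 0.1)
C = np.where(B, rng.random(T) < 0.7,     # C tends to fire when B does
                rng.random(T) < 0.2)

def hebbian_link(pre, post):
    """Weight ~ fraction of pre's firings on which post also fired: ~P(post|pre)."""
    return np.sum(pre & post) / np.sum(pre)

w_AB = hebbian_link(A, B)                # learned "strength" of A -> B
w_BC = hebbian_link(B, C)                # learned "strength" of B -> C
w_CnotB = np.sum(~B & C) / np.sum(~B)    # ~P(C|not B)

# Deduction-like chaining, assuming C depends on A only via B:
# P(C|A) ~= P(C|B) P(B|A) + P(C|~B) (1 - P(B|A))
predicted = w_BC * w_AB + w_CnotB * (1 - w_AB)
actual = hebbian_link(A, C)
print("chained estimate of P(C|A):", round(predicted, 3))
print("directly measured P(C|A): ", round(actual, 3))

The chained estimate matches the directly measured value here because C
was constructed to depend on A only via B; the interesting
cognitive-science question is how closely real neural assemblies
approximate that independence assumption.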

No one in cognitive science seems to have a good explanation of this
beyond the level of really vague handwaving.  I think the Hebbian Logic
approach provides a significantly better explanation than anyone else
has given so far, even though it too involves a fair amount of
handwaving (I didn't work out all the technical details, and probably
won't do so soon, due to my own time limitations).

Hebbian Logic *might* be a decent approach to practical AI --- I don't think
it would be a terribly stupid approach --- but I like the Novamente approach
better...

-- Ben

