This is exactly backward, which makes using it as an unqualified
presumption a little odd. Fetching an object from true RAM is substantially
more expensive than executing an instruction in the CPU, and the gap has
only gotten worse with time.
That wasn't my point, which you may have
Ben, you haven't given us an update on how things
are going with the Novamente A.I. engine lately. Is this because progress
has been slow and there is nothing much to report, or because you don't want to get
people's hopes up while you are still so far from being done, or because you want to
surprise
Brad,
Hmmm... yeah, the problem you describe is actually an implementation issue,
which is irrelevant to whether one does synchronous or asynchronous
updating.
It's easy to use a software design where, when a neuron sends activation to
another neuron, a check is done as to whether the target
Guess I'm too used to more biophysical models in which that approach won't
work. In the models I've used (which I understand aren't relevant to your
approach) you can't afford to ignore a neuron or its synapses because they
are under threshold. Interesting dynamics are occurring even when the
Yep, you're right of course. The trick I described is workable only for
simplified formal NN models, and for formal-NN-like systems such as Webmind.
It doesn't work for neural nets that more closely simulate physiology, and
it also isn't relevant to systems like Novamente that are less NN-like
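The implementation trick under discussion can be sketched in a few lines. This is my own toy illustration (all names hypothetical, not code from Webmind or Novamente): activation is pushed from firing neurons to their targets, and a target is only enqueued for further processing if it crosses threshold, so sub-threshold neurons and their synapses are simply never visited.

```python
# Toy event-driven activation spreading: sub-threshold targets are
# skipped entirely, which only makes sense for simplified formal NN
# models, as noted above. All names here are invented for illustration.
from collections import deque

def spread_activation(weights, activation, sources, threshold=0.5):
    """weights: dict mapping (src, tgt) -> float weight.
    activation: dict mapping neuron -> current activation level.
    sources: neurons that have already fired."""
    queue = deque(sources)
    fired = set(sources)
    while queue:
        src = queue.popleft()
        for (s, tgt), w in weights.items():
            if s != src:
                continue
            # push activation along the synapse
            activation[tgt] = activation.get(tgt, 0.0) + w * activation.get(src, 0.0)
            # only enqueue the target if it crosses threshold
            if activation[tgt] >= threshold and tgt not in fired:
                fired.add(tgt)
                queue.append(tgt)
    return activation, fired
```

In a more biophysical model this shortcut fails for exactly the reason given above: interesting dynamics occur even in sub-threshold neurons, so nothing can be skipped.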
Actually, in attractor neural nets it's well-known that using random
asynchronous updating instead of deterministic synchronous updating does NOT
change the dynamics of a neural network significantly. The attractors are
the same and the path of approach to an attractor is about the same. The
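The claim is easy to check on a toy Hopfield-style network (my own minimal example, not drawn from the thread): store one pattern with the Hebbian outer-product rule, corrupt one bit, and both synchronous and random asynchronous updating recover the same attractor.

```python
# Toy Hopfield network: synchronous and random-asynchronous updating
# both recover the same stored attractor from a corrupted state.
import random

def train(patterns, n):
    # Hebbian outer-product rule, zero diagonal
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def sign(x):
    return 1 if x >= 0 else -1

def update_sync(W, s):
    # all neurons updated simultaneously from the old state
    return [sign(sum(W[i][j] * s[j] for j in range(len(s)))) for i in range(len(s))]

def update_async(W, s, rng):
    # neurons updated one at a time, in random order, from the current state
    s = list(s)
    order = list(range(len(s)))
    rng.shuffle(order)
    for i in order:
        s[i] = sign(sum(W[i][j] * s[j] for j in range(len(s))))
    return s

pattern = [1, 1, -1, -1, 1, -1]
W = train([pattern], len(pattern))
noisy = list(pattern)
noisy[0] = -noisy[0]  # flip one bit

sync_result = noisy
async_result = noisy
rng = random.Random(0)
for _ in range(5):
    sync_result = update_sync(W, sync_result)
    async_result = update_async(W, async_result, rng)
```

Both update schemes settle into the stored pattern; what differs in general is timing, not which attractors exist.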
Ben,
Some comments to this interesting article:
*. S = space of formal synapses, each one of which is identified with a
pair (x,y), with x ∈ N and y ∈ N∪S.
Why not x ∈ N∪S?
*. outgoing: N → S* and incoming: N → S*
Don't you want them to cover higher-order synapses?
*. standard neural net
Pei,
Thanks for your thoughtful comments! Here are some responses...
-
*. S = space of formal synapses, each one of which is identified with a
pair (x,y), with x ∈ N and y ∈ N∪S.
Why not x ∈ N∪S?
-
No strong reason -- but I couldn't see a need for that degree of generality in
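The definitions being debated can be made concrete in a few lines. This is my reading of them (hypothetical names, not code from the paper): a formal synapse pairs a source node x ∈ N with a target y ∈ N∪S, so a higher-order synapse can take another synapse as its target.

```python
# Sketch of the formal synapse space: a synapse's target may be a node
# (first-order) or another synapse (higher-order). Names are my own.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Node:
    name: str

@dataclass(frozen=True)
class Synapse:
    source: "Node"                    # x ∈ N
    target: Union["Node", "Synapse"]  # y ∈ N ∪ S

def order(s: Synapse) -> int:
    # a synapse targeting a node is first-order;
    # targeting a synapse raises the order by one
    return 1 if isinstance(s.target, Node) else 1 + order(s.target)

a, b = Node("a"), Node("b")
s1 = Synapse(a, b)   # ordinary first-order synapse
s2 = Synapse(a, s1)  # higher-order synapse modulating s1
```

Allowing x ∈ N∪S as well, as the question suggests, would let synapses also originate from synapses; restricting sources to N is the narrower choice defended above.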
Hi,
For those with the combination of technical knowledge and patience required
to sift through some fairly mathematical and moderately speculative cog-sci
arguments... some recent thoughts of mine have been posted at
http://www.goertzel.org/dynapsyc/2003/HebbianLogic03.htm
The topic is:
**How