Amusingly, one of my projects at the moment is to show that
Novamente's "economic attention allocation" module can display
Hopfield-net-style content-addressable-memory behavior on simple
examples, as a preliminary step toward integrating it with other
aspects of Novamente cognition (reasoning, evolutionary learning, etc.).

Those interested in Hopfield nets may want to look up Daniel Amit's
old book "Modelling Brain Function":

http://www.amazon.com/Modelling-Brain-Function-Attractor-Networks/dp/0521421241/sr=1-3/qid=1164641397/ref=sr_1_3/002-6495259-3104828?ie=UTF8&s=books

which goes way beyond the fixed-point attractors John Hopfield focused
on, and discusses at length strange attractors in neural nets with
asymmetric weights.

This work was inspirational for Novamente, which is intended to show
similar attractor-formation effects through the flow of "artificial
currency" (allocated among knowledge items and relationships via
probability theory) rather than the flow of "simulated neural net
activation."

An issue with Hopfield content-addressable memories is that their
storage capacity degrades as the networks get sparser.  I did some
experiments on this in 1997, though I never bothered to publish the
results ... some of them are at:

http://www.goertzel.org/papers/ANNPaper.html
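
For anyone who wants to reproduce the flavor of those experiments,
here is a rough numpy sketch (written from memory, not the original
1997 code) that stores random patterns via the Hebb rule, dilutes the
weight matrix, and watches recall fall apart:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 10
    patterns = rng.choice([-1, 1], size=(p, n))

    W_full = (patterns.T @ patterns).astype(float) / n   # Hebbian weights
    np.fill_diagonal(W_full, 0.0)

    def recall(W, cue, steps=20):
        s = cue.copy()
        for _ in range(steps):                 # synchronous sign updates
            s = np.where(W @ s >= 0, 1, -1)
        return s

    for keep in (1.0, 0.5, 0.2, 0.05):         # fraction of connections retained
        mask = np.triu(rng.random((n, n)) < keep, 1)
        W = W_full * (mask | mask.T)           # dilute, keeping weights symmetric
        cue = patterns[0].copy()
        cue[rng.choice(n, n // 10, replace=False)] *= -1   # corrupt 10% of bits
        print(keep, np.mean(recall(W, cue) == patterns[0]))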

The probability/economics approach used in Novamente enables the same
sort of attractor formation but with better behavior under realistic
network sparsity...

Novamente, however, does not rely on attractors as the sole method of
memory storage.  Rather, it uses logical knowledge representation as
its base, and then uses attractors of logical atoms (under
economic-attention-allocation dynamics) to represent an "upper layer"
of more fluid knowledge.

-- Ben G

On 11/27/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:
On Sunday 26 November 2006 18:02, Mike Dougherty wrote:

> I was thinking about the N-space representation of an idea...  Then I
> thought about the tilting-table analogy Richard posted elsewhere (sorry,
> I'm terrible at citing sources).  Then I started wondering what would
> happen if the N-space geometric object were not an idea but the computing
> machine, responding to the surface upon which it found itself.  So if the
> 'computer' (brain, etc.) were a simple sphere like a marble affected by
> gravity on a wobbly tabletop, the phase space would be straightforward.
> It's difficult to conceive of an N-dimensional object in an (N+m)-dimensional
> tabletop being acted upon by some number of gravity analogues.

This is essentially what a Hopfield net does. The setting of all the weights
produces an "energy surface" in the n-dimensional space generated by the
signal strengths of the n "units." The state of the system follows the
surface, seeking lowest energy; the surface gets "tilted" by virtue of
different inputs on some of the wires, and some of the dimensions get used as
continuously varying outputs on other wires.
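
A minimal numpy sketch of that energy-descent picture (my toy
illustration, assuming symmetric weights and asynchronous updates;
not a description of any particular hardware):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 16
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2                       # symmetric weights => a true energy function
    np.fill_diagonal(W, 0.0)

    def energy(s):                          # E = -1/2 * sum_ij W_ij s_i s_j
        return -0.5 * s @ W @ s

    s = rng.choice([-1, 1], size=n)
    for _ in range(200):                    # asynchronous updates never raise the energy
        i = rng.integers(n)
        s[i] = 1 if W[i] @ s >= 0 else -1   # unit follows its local field "downhill"
    print("settled at a local energy minimum:", energy(s))
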
I saw Hopfield demo a net with just ten units (10 op-amps, 100 potentiometers
for the "synaptic" weights) that was connected to a microphone and could
recognize the ten digits spoken into it. He claimed that it would work at
radio frequencies, if anybody could talk that fast :-)
The only trouble with Hopfield nets is that nobody but Hopfield can program
them. Hugo wants to build special-purpose hardware just to evolve
weight-settings, and I wish him luck.
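
Just to illustrate what "evolving weight-settings" might look like in
software (a hypothetical toy, nothing like Hugo's hardware): hill-climb
over the weights until a chosen target pattern becomes a stable state:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 12
    target = rng.choice([-1, 1], size=n)

    def fitness(W):                          # fraction of target bits that are stable
        return np.mean(np.where(W @ target >= 0, 1, -1) == target)

    W = rng.standard_normal((n, n)) * 0.1
    best = fitness(W)
    for _ in range(2000):                    # (1+1)-style evolutionary search
        cand = W + rng.standard_normal((n, n)) * 0.05
        if fitness(cand) >= best:
            W, best = cand, fitness(cand)
    print("fraction of target bits stable:", best)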

> Is this at least in the right direction of what you are proposing?  Have
> you projected the dimensionality of the human brain?  That would at least
> give a baseline upon which to speculate - especially considering that we
> have enough difficulty understanding the "perspective" dimension in a 2D
> painting, let alone conceiving of (and articulating) dimensions higher than
> our own (assuming the incompleteness theorem isn't expressly prohibiting it).

I'm proposing to use the better-understood (by me) hardware
content-addressable memory (or rather, to simulate it on an ordinary
computer) to do a poor man's version of that, but in a way that I do
know how to program, and most importantly, that mostly programs itself
by watching what's going on. Chances are that someone really smart
could rig a way to do that with a real Hopfield net, since he invented
them as associative memories in the first place...
(J. J. Hopfield, "Neural networks and physical systems with emergent
collective computational abilities", Proceedings of the National Academy of
Sciences of the USA, vol. 79 no. 8 pp. 2554-2558, April 1982.
http://www.pnas.org/cgi/content/abstract/79/8/2554)
... but I'm not that smart :-)
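
In software, the "poor man's" version can be as dumb as
nearest-Hamming-distance lookup; a hypothetical sketch of the general
idea (not my actual design):

    import numpy as np

    rng = np.random.default_rng(3)
    memory = rng.integers(0, 2, size=(50, 64))   # 50 stored 64-bit patterns

    def recall(cue):
        dists = (memory != cue).sum(axis=1)      # Hamming distance to every entry
        return memory[dists.argmin()]            # content-addressable: closest match wins

    probe = memory[7].copy()
    probe[:10] ^= 1                              # corrupt 10 bits of a stored pattern
    print("recovered:", np.array_equal(recall(probe), memory[7]))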

--Josh

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

