Hmmm...
So, I'm thinking: The human brain is wired to do a lot of abstract cognition
in terms of metaphorical maps of the environment, and these are tied in with
macro-world classical physics.
This may be part of the reason we're so bad at thinking about the quantum
microworld.
So: Maybe in
On the face of it, these place maps are very reminiscent of attractors as
found in formal attractor neural networks. When multiple noncorrelated
maps are stored in the same collection of neurons, this sounds like multiple
attractors being stored in the same formal neural net.
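For concreteness, the analogy can be sketched with a textbook Hopfield-style network (my own illustration, not code from anyone in the thread): several uncorrelated random patterns stored in a single weight matrix, each acting as an attractor, with recall from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3                       # neurons, stored "maps" (patterns)

# Three uncorrelated random patterns play the role of place maps.
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product storage: all maps share one weight matrix.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)          # update toward the nearest attractor
        s[s == 0] = 1
    return s

# Corrupt map 1 with ~20% flipped units, then let activity settle.
noisy = patterns[1] * rng.choice([1, -1], size=N, p=[0.8, 0.2])
overlap = recall(noisy) @ patterns[1] / N
print(overlap)                      # near 1.0: the net fell back into map 1
```

With only 3 patterns in 200 units the net is far below the classical Hopfield capacity (~0.14N), so all three maps coexist stably in the same synapses.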
Yeap, there are well-developed theories about how an autoassociative
network like CA3 could support multiple, uncorrelated attractor
maps and sustain activity once one of them was activated. The
big debate is about how they are formed.
The standard way attractors are formed in formal ANN
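The standard formal-ANN recipe is plain Hebbian (outer-product) weight increments. A minimal sketch (my illustration, not the poster's) of why doing this online is problematic: repeatedly presenting a pattern with an unopposed Hebbian rule grows the weights without bound.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta = 100, 0.1
x = rng.choice([-1.0, 1.0], size=N)   # one pattern, repeatedly active
W = np.zeros((N, N))

# Plain Hebbian increments: dW = eta * x x^T on every presentation.
norms = []
for _ in range(50):
    W += eta * np.outer(x, x)
    norms.append(np.linalg.norm(W))

# Weight norm grows linearly without bound: nothing opposes the growth.
print(norms[0], norms[-1])
```

This is the positive-feedback problem the later messages pick up: the pattern that is active gets stronger weights, which makes it more active, and so on.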
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, February 24, 2003 6:08 PM
Subject: RE: [agi] the Place system of the rodent hippocampus
Hi,
Using artificial rules, such as hardball winner-take-all and
synaptic weight normalization, it's doable to get ANNs to do this.
But in an autoassociative network with realistic biophysical
properties, controlling activity to prevent runaway synaptic
modification is a very large
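One concrete normalization scheme of the kind mentioned (my choice of example, not necessarily what the poster had in mind) is Oja's rule, where a subtractive term acts as an implicit weight normalization and keeps the weight vector bounded instead of letting Hebbian growth run away:

```python
import numpy as np

rng = np.random.default_rng(2)
N, eta = 100, 0.001
w = rng.normal(scale=0.01, size=N)   # start from small random weights

# Oja's rule: dw = eta * y * (x - y*w).  The -y^2 w term is an implicit
# normalization, so |w| converges instead of growing without bound.
for _ in range(5000):
    x = rng.normal(size=N)           # random afferent activity
    y = w @ x                        # the unit's output
    w += eta * y * (x - y * w)

print(np.linalg.norm(w))             # settles near 1.0, not unbounded
```

Whether anything with realistic biophysics implements such a normalization is exactly the open question raised above.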
I wrote, pertaining to problems of positive feedback causing erroneous or
uncontrollable dynamics:
The fact that similar problems occur in Novamente inference as well as in
the brain suggests that they're general system-theoretic
problems in some
sense, perhaps occurring in any distributed
Monday, February 24, 2003, 8:24:22 PM, Ben Goertzel wrote:
BG I wrote, pertaining to problems of positive feedback causing erroneous or
BG uncontrollable dynamics:
BG The fact that similar problems occur in Novamente inference as well as in
BG the brain suggests that they're general
Perhaps in Novamente you'll find that a certain scenario lends itself
to various different attractors of sets-of-truth-values, and that
shaking it up and finding new attractors (and comparing them to the
old ones) could be valuable...
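The "shake it up" idea can be illustrated on a toy Hopfield net (a hypothetical sketch, not Novamente code): settle into one attractor, randomly perturb the state, resettle, and collect the distinct attractors reached.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
patterns = rng.choice([-1, 1], size=(3, N))      # three stored attractors
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def settle(s, steps=30):
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def nearest(s):
    # which stored pattern the settled state most resembles (sign-blind)
    return int(np.argmax(np.abs(patterns @ s)))

found = {nearest(settle(patterns[0].copy()))}
for _ in range(20):
    # "shake": flip roughly half the units, then let the net resettle
    shaken = patterns[0] * rng.choice([1, -1], size=N)
    found.add(nearest(settle(shaken)))

print(sorted(found))                             # usually several attractors
```

Comparing the attractors found this way against the one the system started in is the kind of comparison Cliff suggests.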
--
Cliff
yah -- we have conceived a mechanism of