Vlad,

They could be used much like normal nodes, except that a given set of basic
nodes forming a conceptual node would be auto-associative within its own
population, and would have some of the benefits of redundancy, robustness,
resistance to noise, and gradual forgetting that I mentioned earlier.
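To make "auto-associative within its own population" concrete, here is a minimal Hopfield-style sketch (my own toy illustration, not Hecht-Nielsen's construction): a population of basic nodes stores a few patterns via Hebbian weights, and a corrupted pattern is cleaned up by the population's own dynamics, which is where the noise resistance and redundancy come from.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                                        # basic nodes in one population
patterns = rng.choice([-1, 1], size=(5, N))    # 5 stored "concepts"

# Hebbian outer-product weights, zero diagonal
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Synchronously update the population until it settles."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt 15% of one stored pattern, then let the population clean it up
noisy = patterns[0].copy()
flip = rng.choice(N, size=N * 15 // 100, replace=False)
noisy[flip] *= -1
recovered = recall(noisy)
print((recovered == patterns[0]).mean())   # fraction of bits recovered
```

With only 5 patterns in 200 nodes the load is far below the classical ~0.14N Hopfield capacity, so recall is essentially perfect; push the number of stored patterns up and the "gradual forgetting" behavior appears.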

Robert Hecht-Nielsen's "Mechanization of Cognition," particularly Appendix
sections 3.A.3 and 3.A.4, gives a good description of how a particular type
of neural assembly can be used for semantic representation and imagination.
The article text used to be available on the web, but I don't see it
anymore.  It was published as chapter 3 in Bar-Cohen, Y. (Ed.), Biomimetics:
Biologically Inspired Technologies, CRC Press, Boca Raton, FL (2006).

Ed Porter

-----Original Message-----
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 16, 2008 3:40 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

On Thu, Oct 16, 2008 at 7:01 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
>
> The answer to this question would provide a rough indication of the
> representational capacity of using node assemblies to represent concepts
> vs. using separate individual nodes, for a given number of nodes.  Some
> people claim the number of cell assemblies that can be created with, say,
> a billion nodes that can distinctly represent different concepts far
> exceeds the number of nodes.  Clearly the number of possible subsets of,
> say, size 10K out of 1G nodes is a combinatorial number much larger than
> the number of particles in the observable universe, but I have never seen
> any showing of how many such subsets can be created that would have a
> sufficiently low degree of overlap with all other subsets as to support
> reliable representation of separate individual concepts.
>
> If the number of such node subsets that can be created with sufficiently
> low overlap with any other subset to clearly and reliably represent
> individual concepts is much, much larger than the number of nodes, it
> means cell assemblies might be extremely valuable for creating powerful
> AGIs.  If not, not.
>

Ed,

Clearly, the answer is huge (you can just play with a simple special
case to get a feel for the lower bound), and it hardly matters how
huge. It's much more important what you are going to do with it, and
what it means to you. How do these assemblies form, what do they do,
how do they learn, how do they react to input, how do you make them
implement a control algorithm, how do you direct them to do what you
want? And what does it matter how many of them are potentially there,
if you are calculating this estimate based on constraints divorced
from the algorithms, which should be the source of any constraints in
the first place, and which would probably make the whole concept of
separate assemblies meaningless?
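The "simple special case" is easy to play with. A scaled-down sketch (n = 10^6 nodes and assemblies of k = 100 rather than Ed's 10^9 and 10^4; the numbers are mine, chosen so the experiment runs in seconds): draw many random assemblies and check the worst pairwise overlap. The expected overlap of two random k-subsets of n nodes is k²/n, which at these scales is a small fraction of a node, so random assemblies are almost automatically near-disjoint and the count of usable low-overlap assemblies dwarfs the node count.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Scaled-down parameters: n nodes, m assemblies of k nodes each
n, k, m = 1_000_000, 100, 200
assemblies = [frozenset(rng.choice(n, size=k, replace=False))
              for _ in range(m)]

# Worst-case shared nodes between any two assemblies
max_overlap = max(len(a & b) for a, b in combinations(assemblies, 2))
print(max_overlap)   # expected overlap per pair is only k*k/n = 0.01 nodes
```

At Ed's original scale (k = 10^4, n = 10^9) the expected pairwise overlap is k²/n = 0.1 nodes, i.e. overlap of even one node is the exception, which is one way to see why the lower bound on distinguishable assemblies is astronomically larger than the node count.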

I have not found the cell-assembly language very helpful, although the
idea of representing many patterns with few nodes is important. For
example, the state of a set of nodes (cells) can be regarded as a point
in Hamming space (the space of binary vectors), and the dynamics of this
set of nodes as an operation on that space, taking the state to a new
point depending on the previous point. The operation works in such a way
that many points are mapped to one point, so the trajectory of the state
is stable (so much for redundancy). The length of this trajectory before
it loops is the number of different states, which could be on the order
of the size of the power set. Some of the nodes may be controlled by
external input, shifting the state and interfering with the trajectory.
Since points in neighborhoods are attracted to the trajectory, this
reduces the volume that the trajectory can span accordingly. States
along a trajectory enumerate a temporal code that can be used to learn
temporal codes (by changing the direction of the trajectory, or the
attracted neighborhoods), and by extension any other codes. Multiple
separate trajectories are separate states, which can mix with each other
and establish transitions conditional on external input, thus creating
combined trajectories. And so on. I'll work my way up to this in maybe a
couple of months on the blog, after sequences on fluid representation,
information and goals, and inference surface.
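The Hamming-space picture can be sketched in a few lines. This toy (my own, with a random threshold network standing in for whatever learned dynamics Vladimir has in mind) treats the state of N binary nodes as a point in {0,1}^N, iterates a deterministic update, and follows the trajectory until it revisits a point, i.e. until the loop closes. The trajectory length is the number of distinct states traversed; a nearby point (one flipped bit) may or may not fall back onto the same trajectory, depending on the basin structure.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16   # small enough that the 2^N state space is exhaustible

# Random threshold network: each node updates from a random linear
# function of the whole state (a stand-in for learned dynamics).
W = rng.normal(size=(N, N))
b = rng.normal(size=N)

def step(x):
    return (W @ x + b > 0).astype(int)

def trajectory(x0, max_steps=100_000):
    """Follow the state until it revisits a point; return visited states."""
    seen, order = {}, []
    x = x0
    while tuple(x) not in seen and len(order) < max_steps:
        seen[tuple(x)] = len(order)
        order.append(x)
        x = step(x)
    return order, seen

x0 = rng.integers(0, 2, size=N)
order, seen = trajectory(x0)
print(len(order))   # states visited before the loop closes

# Perturb the start by one bit; it may (or may not) rejoin the trajectory,
# depending on which basin of attraction the flipped state lands in.
y = order[0].copy()
y[0] ^= 1
for _ in range(len(order) + 1):
    if tuple(y) in seen:
        break
    y = step(y)
print(tuple(y) in seen)
```

Since the update is deterministic and the state space is finite (2^16 points), the trajectory must close into a previously visited state; how long it runs before doing so, and how much volume its attracted neighborhoods absorb, is exactly the trade-off described above.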

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com


