On Mon, Jun 15, 2015 at 1:00 AM, Matt Mahoney <[email protected]>
wrote:

> On Sat, Jun 13, 2015 at 12:52 AM, YKY (Yan King Yin, 甄景贤)
> <[email protected]> wrote:
> > But here comes a problem:  if we have 3 propositions, say
> >    P1 = it rained yesterday
> >    P2 = Obama is president of the US
> >    P3 = the moon is made of cheese
> > and if there exists a linear dependence among them, say:
> >    a3 P3 = a1 P1 + a2 P2
> > where a1, a2, a3 are scalars, that seems to create a relation between
> > apparently unrelated sentences, and would lead to error.
>
> That's unlikely to happen in normal semantic spaces with tens of
> thousands of dimensions.
>
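A quick numerical check of that claim (a sketch of my own, with random
Gaussian vectors standing in for real semantic vectors):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 10_000                       # dimensionality of the semantic space
    P = rng.standard_normal((3, d))  # stand-ins for P1, P2, P3
    P /= np.linalg.norm(P, axis=1, keepdims=True)

    # off-diagonal cosine similarities concentrate around 1/sqrt(d) ~ 0.01
    print(np.round(P @ P.T, 3))

    # all three singular values are close to 1, so no scalars a1, a2, a3
    # bring a1*P1 + a2*P2 - a3*P3 anywhere near zero
    print(np.round(np.linalg.svd(P, compute_uv=False), 3))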


I found out that a "distributed representation" does not, by itself, come
with superposition (I don't recall where I got that idea from).

For example, 100 neurons that take only binary (0,1) values can represent
at most 2^100 different states.  This is vastly more than the 100 states
of a completely local (one-hot) representation.

But now we have no way to superimpose the states -- every available bit
pattern is already spoken for as the code of some single state.
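To make that concrete (a toy sketch of mine, taking "superimpose" to mean
bitwise OR of the codes, and using 8 neurons instead of 100):

    n = 8

    # dense code: every n-bit pattern already names a distinct state,
    # so the OR of two codes is itself the code of some third state
    s1, s2 = 0b00101101, 0b11010010
    print(f"{s1 | s2:0{n}b}")   # 11111111 -- reads as one state, not two

    # local (one-hot) code: only n states, but superpositions stay readable
    l1, l2 = 1 << 2, 1 << 5
    print(f"{l1 | l2:0{n}b}")   # 00100100 -- both component states recoverable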

I have to research the idea of superposition a bit more...  to see how it
gels with distributed representations.

PS:  if the dimension of the representation is higher than the dimension
of the signal, the representation is called "over-complete", and
mathematically such a spanning set is called a "frame" (the famous example
being the "Mercedes-Benz" frame, three unit vectors serving as a redundant
"basis" for 2-D space).
There are ways to use such representations to increase accuracy or combat
noise.
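For example (a minimal sketch of mine, using the standard construction:
three unit vectors 120 degrees apart form a tight frame for 2-D space with
frame bound 3/2):

    import numpy as np

    angles = np.deg2rad([90, 210, 330])
    F = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # frame vectors as rows

    x = np.array([0.7, -1.2])      # an arbitrary 2-D signal
    c = F @ x                      # 3 redundant coefficients <x, f_k>
    x_rec = (2.0 / 3.0) * F.T @ c  # tight-frame reconstruction: x = (2/3) sum_k c_k f_k
    print(np.allclose(x, x_rec))   # True -- redundancy, yet exact recovery

The reconstruction averages over three redundant coefficients instead of
two, which is one way such frames spread out, and hence attenuate,
additive noise on the coefficients.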


