Thanks, Fergal. I think it's great to have additional writings, blogs,
articles, etc. We'll want to update the NuPIC wiki as well, as a
community-managed resource. It would be nice to beef up the Wikipedia pages
too - there are no separate pages for CLA or NuPIC yet.

--Subutai


On Sun, Nov 24, 2013 at 5:50 AM, Fergal Byrne
<[email protected]>wrote:

> Hi Subutai,
>
> I've started an article on the blog about SDRs:
> http://inbits.com/nupic-machine-intelligence/focus-sparse-distributed-representations/
>
> The idea is to give a from-scratch introduction to the concept of SDRs,
> their roots in neuroscience, and then proceed to explain the properties
> and strengths of SDRs in both the neocortex and NuPIC. I've only done the
> first bit of that; I'll complete it (and incorporate these discussions)
> over the next few days.
>
> Meantime, any feedback or corrections would be very welcome!
>
> Regards,
>
> Fergal Byrne
>
>
>
> On Sun, Nov 24, 2013 at 12:29 PM, Fergal Byrne <
> [email protected]> wrote:
>
>> Hi Subutai,
>>
>> I'm adding a page to my blog (http://inbits.com) about this - it'll be
>> live shortly. Anyone please feel free to use this (or any other material -
>> text and images - on the blog) for the Wiki.
>>
>> I'm using the blog instead of the Wiki, because I feel the latter should
>> be balanced and authoritative, whereas the blog is a good place to be
>> speculative and a little more opinionated about both the theory and NuPIC
>> itself.
>>
>> Cheers,
>>
>> Fergal Byrne
>>
>>
>> On Sun, Nov 24, 2013 at 1:31 AM, Subutai Ahmad <[email protected]> wrote:
>>
>>>
>>> As Doug mentioned in another email, the quality of the discussion on
>>> this list has been very high. We are putting together a really nice
>>> collection of theoretical results on SDRs. Marek, thanks for starting this
>>> thread. I would like to collect these in a more organized fashion. We have
>>> an initial page on the theory below:
>>>
>>> https://github.com/numenta/nupic/wiki/Sparse-Distributed-Representations
>>>
>>> Would someone like to take a crack at including some of the results and
>>> email discussions? The ideal format for me would be a summary list of the
>>> main points, plus hopefully a link to a more detailed page for each result
>>> (maybe this could just link to the email). We could include relevant
>>> results from Kanerva in the same format.
>>>
>>> Thanks,
>>>
>>> —Subutai
>>>
>>>
>>>
>>> On Fri, Nov 22, 2013 at 6:55 AM, Marek Otahal <[email protected]> wrote:
>>>
>>>> Guys,
>>>>
>>>> I want to run some benchmarks on the CLA, one of which measures what I
>>>> call (information) capacity.
>>>>
>>>> This is the number of patterns a spatial pooler (SP) with a fixed number
>>>> of columns (and probably a fixed number of training rounds) can distinguish.
>>>>
>>>> So assuming I have an SP with 1000 columns and 2% sparsity (= 20 cols ON
>>>> at all times) and an encoder big enough to express a large range of patterns
>>>> (say a scalar encoder for 0...1,000,000,000).
>>>>
>>>> The top cap is (1000 choose 20), which is some crazy number around 3*10^41.
>>>> All these SDRs will be sparse, but not distributed (right??) because a
>>>> change in one bit will already be another pattern.
>>>>
>>>> So my question is: what is the "usable" capacity where all outputs are
>>>> still sparse (they all are) and distributed (= robust to noise)? Is there a
>>>> percentage of bits (say, 20% of the bits flipped) at which the pattern is
>>>> still recognized and still considered distributed/robust?
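A quick sanity check on these numbers (a Python sketch; this is standard combinatorics, nothing NuPIC-specific). `math.comb` gives the theoretical ceiling on the number of distinct SDRs, and w*w/n is the expected overlap between two independently chosen random SDRs, which is what makes accidental collisions between unrelated patterns so unlikely:

```python
from math import comb

n, w = 1000, 20               # columns, active columns (2% sparsity)

# Theoretical ceiling: every possible choice of w active columns out of n.
capacity = comb(n, w)         # roughly 3.4e41

# Expected number of shared active bits between two independent random SDRs
# (mean of the hypergeometric distribution). With w = 20 of n = 1000 this
# is only 0.4 bits, so unrelated patterns essentially never look similar.
expected_overlap = w * w / n

print(f"capacity ~ {capacity:.2e}, expected overlap = {expected_overlap}")
```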
>>>>
>>>>
>>>> Or is it the other way around: does the SP try to maximize this
>>>> robustness for the given number of patterns it is presented with? If I
>>>> feed it a huge number of patterns, will I pay the obvious price of
>>>> reducing the border between two patterns?
>>>>
>>>> Either way, is there a reasonable way to measure what I defined as
>>>> capacity?
>>>>
>>>> I was thinking of something like:
>>>>
>>>> for _ in range(10):  # training repetitions
>>>>    for p in patterns_to_present:
>>>>       sp.input(p)
>>>>
>>>> sp.disableLearning()
>>>> for p in patterns_to_present:
>>>>    # what should the percentage be? see above
>>>>    p_mod = randomize_some_percentage_of_pattern(p, percentage)
>>>>    if sp.input(p) == sp.input(p_mod):
>>>>       # ok, it's the same, the pattern was learned
>>>>
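The benchmark above could be sketched end-to-end with a stand-in pooler. To be clear, this is *not* the NuPIC SP API: `sp_output`, `flip_bits`, and the random-projection winner-take-all are hypothetical placeholders that only mimic the SP's fixed-sparsity output shape, so the loop structure of the capacity test can be run as-is:

```python
import numpy as np

rng = np.random.default_rng(42)

N_IN, N_COLS, W = 500, 1000, 20   # input bits, columns, active columns

# Stand-in for a spatial pooler: a fixed random projection followed by
# top-W winner-take-all, giving exactly W active columns per input.
proj = rng.random((N_COLS, N_IN))

def sp_output(pattern):
    scores = proj @ pattern
    winners = np.argsort(scores)[-W:]      # indices of the W best-scoring columns
    sdr = np.zeros(N_COLS, dtype=bool)
    sdr[winners] = True
    return sdr

def flip_bits(pattern, fraction):
    # Flip a given fraction of the input bits at random positions.
    noisy = pattern.copy()
    idx = rng.choice(len(pattern), int(fraction * len(pattern)), replace=False)
    noisy[idx] = ~noisy[idx]
    return noisy

patterns = [rng.random(N_IN) < 0.1 for _ in range(100)]

# "Capacity" check: fraction of patterns whose 20%-noisy version still maps
# to a sufficiently overlapping SDR (here: at least half the active columns
# shared), rather than requiring an exact match.
matched = 0
for p in patterns:
    a, b = sp_output(p), sp_output(flip_bits(p, 0.20))
    if (a & b).sum() >= W // 2:
        matched += 1
print(matched / len(patterns))
```

Comparing overlap against a threshold, instead of the exact equality test in the pseudocode, matches how SDR similarity is normally judged: two SDRs are "the same pattern" if they share well above the ~0.4 bits expected by chance.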
>>>>
>>>> Thanks for your replies,
>>>> Mark
>>>>
>>>>
>>>> --
>>>> Marek Otahal :o)
>>>>
>>>> _______________________________________________
>>>> nupic mailing list
>>>> [email protected]
>>>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>> --
>>
>> Fergal Byrne, Brenter IT
>>
>> http://inbits.com - Better Living through
>> Thoughtful Technology
>>
>> e:[email protected] t:+353 83 4214179
>> Formerly of Adnet [email protected] http://www.adnet.ie
>>
>
>
>
> --
>
> Fergal Byrne, Brenter IT
>
> http://inbits.com - Better Living through
> Thoughtful Technology
>
> e:[email protected] t:+353 83 4214179
> Formerly of Adnet [email protected] http://www.adnet.ie
>
>
>
