Hi Weiru,

Glad to hear that the code works. TemporalMemory() is the current
version. The best way to learn it is to read this recent paper [1]. It
contains both biological insights and evidence, and a detailed description
of the algorithm (Section 5).

TP10X2() is a version of the algorithm used in some of the Numenta
applications. It contains several non-biological optimizations to speed up
learning. I don't know of good documentation for this algorithm. As we move
forward, we will probably replace it with TemporalMemory(). I suggest
sticking with TemporalMemory() for learning and research purposes.


[1] http://arxiv.org/abs/1511.00083
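As a rough analogy (this is a toy sketch, not the NuPIC API or the actual TM algorithm), the difference between one cell per column and multiple cells per column is like the difference between a first-order and a higher-order sequence model. With one cell per column, "A" always looks the same regardless of what preceded it, so the model can only predict the union of everything that ever followed "A":

```python
# Toy illustration: why cellsPerColumn=1 cannot disambiguate "ABCADE".
from collections import defaultdict

def first_order(sequence):
    """Predict the next symbol from the current symbol alone,
    analogous to a TM with one cell per column."""
    nxt = defaultdict(set)
    for a, b in zip(sequence, sequence[1:]):
        nxt[a].add(b)
    return nxt

def second_order(sequence):
    """Predict from the (previous, current) pair, analogous to
    multiple cells per column representing the same input in
    different temporal contexts."""
    nxt = defaultdict(set)
    for a, b, c in zip(sequence, sequence[1:], sequence[2:]):
        nxt[(a, b)].add(c)
    return nxt

seq = "ABCADE"
print(sorted(first_order(seq)["A"]))          # ['B', 'D'] -- ambiguous union
print(sorted(second_order(seq)[("C", "A")]))  # ['D'] -- context resolves it
```

This mirrors the output below: with one cell per column, the columns for both B and D are predicted after the second A, exactly as in Weiru's run. Setting cellsPerColumn to 4 (as the hello_tm.py homework suggests) lets the TM represent "A after C" differently from "A at the start", which resolves the ambiguity.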

--
Yuwei Cui

Research Engineer, Numenta Inc.


On Sun, Nov 8, 2015 at 9:17 PM, Weiru Zeng <[email protected]> wrote:

> Hi Yuwei Cui,
>
> Thanks for your response.
>
> I ran your code hello_tm.py, and just as you said, it got the right
> prediction. Your code creates the Temporal Pooler with the function TM(),
> while in the file hello_tp.py the Temporal Pooler is created with the
> function tp().
>
> So I want to know what the difference between them is, and whether there
> are documents that introduce the functions TemporalMemory() and TP10X2(),
> including what the parameters of the TM mean and how to set them.
>
> Hope for your response.
>
> ------------------ Original Message ------------------
> *From:* "Yuwei Cui";<[email protected]>;
> *Date:* Sunday, November 8, 2015, 1:56 AM
> *To:* "Weiru Zeng"<[email protected]>;
> *Subject:* Re: Question about hello_tp.py
>
> Hi Weiru,
>
> I tried to run the code with your sequence "ABCADE", and I did get correct
> prediction of "D" after the second "A". The output of the code is copied
> below.
>
> I wonder whether you are using the latest version. The code should now be
> named "hello_tm.py" instead of "hello_tp.py". I have also attached my code
> using your sequence ABCADE to this email.
>
> Best,
>
> --
> Yuwei Cui
>
> Research Engineer, Numenta Inc.
>
>
> -------- A -----------
> Raw input vector : 1111111111 0000000000 0000000000 0000000000 0000000000
>
> All the active and predicted cells:
> active cells set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
> 16, 17, 18, 19])
> predictive cells set([33, 34, 67, 36, 69, 38, 71, 31, 73, 75, 77, 78, 61,
> 20, 22, 24, 27, 29, 62, 65])
> active segments set([0, 33, 2, 3, 4, 37, 6, 7, 8, 9, 32, 39, 34, 35, 1,
> 36, 38, 5, 30, 31])
> winner cells set([0, 2, 5, 6, 8, 11, 13, 14, 16, 19])
> Active columns:    1111111111 0000000000 0000000000 0000000000 0000000000
> Predicted columns: 0000000000 1111111111 0000000000 1111111111 0000000000
>
>
> -------- B -----------
> Raw input vector : 0000000000 1111111111 0000000000 0000000000 0000000000
>
> All the active and predicted cells:
> active cells set([33, 34, 36, 38, 20, 22, 24, 27, 29, 31])
> predictive cells set([41, 43, 44, 47, 48, 50, 53, 54, 56, 59])
> active segments set([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
> winner cells set([33, 34, 36, 38, 20, 22, 24, 27, 29, 31])
> Active columns:    0000000000 1111111111 0000000000 0000000000 0000000000
> Predicted columns: 0000000000 0000000000 1111111111 0000000000 0000000000
>
>
> -------- C -----------
> Raw input vector : 0000000000 0000000000 1111111111 0000000000 0000000000
>
> All the active and predicted cells:
> active cells set([41, 43, 44, 47, 48, 50, 53, 54, 56, 59])
> predictive cells set([1, 3, 4, 7, 9, 10, 12, 15, 17, 18])
> active segments set([20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
> winner cells set([41, 43, 44, 47, 48, 50, 53, 54, 56, 59])
> Active columns:    0000000000 0000000000 1111111111 0000000000 0000000000
> Predicted columns: 1111111111 0000000000 0000000000 0000000000 0000000000
>
>
> -------- A -----------
> Raw input vector : 1111111111 0000000000 0000000000 0000000000 0000000000
>
> All the active and predicted cells:
> active cells set([1, 3, 4, 7, 9, 10, 12, 15, 17, 18])
> predictive cells set([65, 67, 69, 71, 73, 75, 77, 78, 61, 62])
> active segments set([32, 33, 34, 35, 36, 37, 38, 39, 30, 31])
> winner cells set([1, 3, 4, 7, 9, 10, 12, 15, 17, 18])
> Active columns:    1111111111 0000000000 0000000000 0000000000 0000000000
> Predicted columns: 0000000000 0000000000 0000000000 1111111111 0000000000
>
>
> -------- D -----------
> Raw input vector : 0000000000 0000000000 0000000000 1111111111 0000000000
>
> All the active and predicted cells:
> active cells set([65, 67, 69, 71, 73, 75, 77, 78, 61, 62])
> predictive cells set([97, 99, 81, 82, 84, 86, 89, 91, 93, 94])
> active segments set([40, 41, 42, 43, 44, 45, 46, 47, 48, 49])
> winner cells set([65, 67, 69, 71, 73, 75, 77, 78, 61, 62])
> Active columns:    0000000000 0000000000 0000000000 1111111111 0000000000
> Predicted columns: 0000000000 0000000000 0000000000 0000000000 1111111111
>
>
> -------- E -----------
> Raw input vector : 0000000000 0000000000 0000000000 0000000000 1111111111
>
> All the active and predicted cells:
> active cells set([97, 99, 81, 82, 84, 86, 89, 91, 93, 94])
> predictive cells set([])
> active segments set([])
> winner cells set([97, 99, 81, 82, 84, 86, 89, 91, 93, 94])
> Active columns:    0000000000 0000000000 0000000000 0000000000 1111111111
> Predicted columns: 0000000000 0000000000 0000000000 0000000000 0000000000
>
> --
> Yuwei Cui
>
> Research Engineer, Numenta Inc.
>
> Homepage: http://terpconnect.umd.edu/~ywcui/
>
> LinkedIn: https://www.linkedin.com/pub/yuwei-cui/1b/400/866
>
> On Sat, Nov 7, 2015 at 2:34 AM, Weiru Zeng <[email protected]> wrote:
>
>> Hello Nupic:
>> Today I ran hello_tp.py and it performed well. To test the TP's
>> capability of relating context, I changed the sequence "ABCDE" to
>> "ABCADE" (I didn't change the parameters of the TP), then ran it. You can
>> see the output of this procedure below. The most important part is the
>> part marked in red: you can easily see that the prediction after A is not
>> D, but: 0000000000 1111111111 0000000000 1111111111 0000000000.
>> So I want to know how I can make the prediction of this procedure more
>> accurate; in other words, how can I make the TP relate the context
>> ("ABCA")? Should I change some parameters of the TP() function, or
>> something else?
>> Thank you in advance!
>>
>>
>> /usr/bin/python2.7 /home/megart/下载/mynupic/test/hello_tp.py
>>
>> This program shows how to access the Temporal Pooler directly by
>> demonstrating
>> how to create a TP instance, train it with vectors, get predictions, and
>> inspect
>> the state.
>>
>> The code here runs a very simple version of sequence learning, with one
>> cell per column. The TP is trained with the simple sequence A->B->C->D->E
>>
>> HOMEWORK: once you have understood exactly what is going on here, try
>> changing
>> cellsPerColumn to 4. What is the difference between one cell per column
>> and 4
>> cells per column?
>>
>> PLEASE READ THROUGH THE CODE COMMENTS - THEY EXPLAIN THE OUTPUT IN DETAIL
>>
>>
>>
>>
>> -------- A -----------
>> Raw input vector
>> 1111111111 0000000000 0000000000 0000000000 0000000000
>>
>> All the active and predicted cells:
>>
>> Inference Active state
>> 1111111111 0000000000 0000000000 0000000000 0000000000
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 0000000000 1111111111 0000000000 0000000000 0000000000
>>
>>
>> The following columns are predicted by the temporal pooler. This
>> should correspond to columns in the *next* item in the sequence.
>> [10 11 12 13 14 15 16 17 18 19]
>>
>>
>> -------- B -----------
>> Raw input vector
>> 0000000000 1111111111 0000000000 0000000000 0000000000
>>
>> All the active and predicted cells:
>>
>> Inference Active state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 0000000000 1111111111 0000000000 0000000000 0000000000
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 0000000000 0000000000 1111111111 0000000000 0000000000
>>
>>
>> The following columns are predicted by the temporal pooler. This
>> should correspond to columns in the *next* item in the sequence.
>> [20 21 22 23 24 25 26 27 28 29]
>>
>>
>> -------- C -----------
>> Raw input vector
>> 0000000000 0000000000 1111111111 0000000000 0000000000
>>
>> All the active and predicted cells:
>>
>> Inference Active state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 0000000000 0000000000 1111111111 0000000000 0000000000
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 1111111111 0000000000 0000000000 0000000000 0000000000
>>
>>
>> The following columns are predicted by the temporal pooler. This
>> should correspond to columns in the *next* item in the sequence.
>> [0 1 2 3 4 5 6 7 8 9]
>>
>>
>> -------- A -----------
>> Raw input vector
>> 1111111111 0000000000 0000000000 0000000000 0000000000
>>
>> All the active and predicted cells:
>>
>> Inference Active state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 1111111111 0000000000 0000000000 0000000000 0000000000
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 0000000000 1111111111 0000000000 1111111111 0000000000
>>
>>
>> The following columns are predicted by the temporal pooler. This
>> should correspond to columns in the *next* item in the sequence.
>> [10 11 12 13 14 15 16 17 18 19 30 31 32 33 34 35 36 37 38 39]
>>
>>
>> -------- D -----------
>> Raw input vector
>> 0000000000 0000000000 0000000000 1111111111 0000000000
>>
>> All the active and predicted cells:
>>
>> Inference Active state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 0000000000 0000000000 0000000000 1111111111 0000000000
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 0000000000 0000000000 0000000000 0000000000 1111111111
>>
>>
>> The following columns are predicted by the temporal pooler. This
>> should correspond to columns in the *next* item in the sequence.
>> [40 41 42 43 44 45 46 47 48 49]
>>
>>
>> -------- E -----------
>> Raw input vector
>> 0000000000 0000000000 0000000000 0000000000 1111111111
>>
>> All the active and predicted cells:
>>
>> Inference Active state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 0000000000 0000000000 0000000000 0000000000 1111111111
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>> 0000000000 0000000000 0000000000 0000000000 0000000000
>>
>>
>> The following columns are predicted by the temporal pooler. This
>> should correspond to columns in the *next* item in the sequence.
>> []
>>
>> Process finished with exit code 0
>>
>> Weiru Zeng
>>
>
>