Thanks, but so much praise is a bit premature, as I am currently struggling with the network demo.

RS


On 14.07.2015 at 18:38, cogmission (David Ray) wrote:
Hey Ralf!

CONGRATULATIONS!!! I didn't even realize you had finished htm.JavaScript!

Nice going buddy!



On Tue, Jul 14, 2015 at 11:23 AM, John Blackburn <[email protected] <mailto:[email protected]>> wrote:

    Looks like it might be time to run doxygen again! The last run was on May 19.

    John.

    On Tue, Jul 14, 2015 at 5:14 PM, cogmission (David Ray)
    <[email protected] <mailto:[email protected]>>
    wrote:
    > Hey John,
    >
    > Nice self-sufficient researching! I like that in ya' !!!
    >
    > Anyway, yes, that last (stripNeverLearned) parameter was removed
    > last month. The file I gave you is older than that...
    >
    > Remember, NuPIC is ever evolving, and it is still technically
    "pre-release"!
    >
    > ;-)
    >
    >
    >
    > On Tue, Jul 14, 2015 at 11:09 AM, John Blackburn
    > <[email protected] <mailto:[email protected]>>
    wrote:
    >>
    >> Thanks, Ralf,
    >>
    >> Actually that reminds me, David Ray kindly sent me the QuickTest.py
    >> example in Python so I just tried that. However, I ran into another
    >> problem: there seems to be some confusion about how many parameters
    >> sp.compute() takes (Spatial Pooler). In QuickTest.py the code reads
    >>
    >> sp.compute(encoding, True, output, False)
    >>
    >> However, on Github sp.compute takes only 3 parameters (apart
    from self):
    >>
    >>
    >>
    
https://github.com/numenta/nupic/blob/master/nupic/research/spatial_pooler.py#L658
    >>
    >> So this causes a crash. I notice on the API docs the 4th
    parameter is
    >> indeed mentioned:
    >>
    >>
    >>
    
http://numenta.org/docs/nupic/classnupic_1_1research_1_1spatial__pooler_1_1_spatial_pooler.html#aaa2084b96999fb1734fd2f330bfa01a6
    >>
    >> So I guess the 4th arg was recently removed. Pretty confusing!
    >>
    >> Can anyone shed light on this mystery?
    >>
    >> John.
    >>
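    The signature change John ran into can be handled defensively. The sketch below tries the old four-argument call first and falls back to the current three-argument one; `call_compute` and `_FakeSP` are made-up names for illustration, not part of the NuPIC API (the fake stands in for a real SpatialPooler so the sketch runs without NuPIC installed):

```python
class _FakeSP(object):
    """Stand-in with the current 3-argument compute(), used here only so
    the sketch is runnable without NuPIC installed."""
    def compute(self, inputVector, learn, activeArray):
        activeArray[0] = 1  # pretend one column became active

def call_compute(sp, inputVector, learn, activeArray):
    """Try the old 4-argument call (with stripNeverLearned) first, then
    fall back to the 3-argument signature on current master.
    A defensive sketch only; not part of the NuPIC API."""
    try:
        sp.compute(inputVector, learn, activeArray, False)  # old signature
    except TypeError:
        sp.compute(inputVector, learn, activeArray)         # current master

output = [0, 0, 0]
call_compute(_FakeSP(), [1, 0, 1], True, output)
```

    Pinning your checkout to the NuPIC commit your example code was written against avoids the mismatch entirely.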
    >> On Tue, Jul 14, 2015 at 12:25 PM, Ralf Seliger <[email protected]
    <mailto:[email protected]>> wrote:
    >> > Hey John,
    >> >
    >> > why don't you try the QuickTest example in htm.java
    >> > (https://github.com/numenta/htm.java) or htm.JavaScript
    >> > (https://github.com/nupic-community/htm.JavaScript)? It involves
    >> > the new temporal memory, and by stepping through the code with a
    >> > debugger you can easily study the inner workings of the algorithm.
    >> >
    >> > Regards, RS
    >> >
    >> >
    >> > On 14.07.2015 at 11:39, John Blackburn wrote:
    >> >>
    >> >> Thanks, Chetan,
    >> >>
    >> >> Are there any tutorials or examples of how to use temporal_memory.py?
    >> >> The nice thing about the old TP is that it has an example: hello_tp.py.
    >> >>
    >> >> John.
    >> >>
    >> >> On Mon, Jul 13, 2015 at 7:55 PM, Chetan Surpur
    <[email protected] <mailto:[email protected]>>
    >> >> wrote:
    >> >>>
    >> >>> Hi John,
    >> >>>
    >> >>> The TP is now called "Temporal Memory", and there's a new
    >> >>> implementation
    >> >>> of
    >> >>> it in NuPIC [1]. Please use this latest version instead,
    and let us
    >> >>> know
    >> >>> if
    >> >>> you still find issues with the results.
    >> >>>
    >> >>> [1]
    >> >>>
    >> >>>
    >> >>>
    
https://github.com/numenta/nupic/blob/master/nupic/research/temporal_memory.py
    >> >>>
    >> >>> Thanks,
    >> >>> Chetan
    >> >>>
    >> >>> On Jul 13, 2015, at 4:44 AM, John Blackburn
    >> >>> <[email protected]
    <mailto:[email protected]>>
    >> >>> wrote:
    >> >>>
    >> >>> Dear All
    >> >>>
    >> >>> I'm trying to use the temporal pooler (TP) directly as I
    want to get
    >> >>> into the details of how Nupic works (rather than high level
    OPF etc)
    >> >>>
    >> >>> Having trained the TP I used this code to get some predictions:
    >> >>>
    >> >>> for j in range(10):
    >> >>>     x=2*math.pi/100*j
    >> >>>     y=math.sin(x)
    >> >>>
    >> >>>     print "Time step:",j
    >> >>>
    >> >>>     for k in range(nIntervals):
    >> >>>         if y>=ybot[k] and y<ytop[k]:
    >> >>>             print "input=",x,y,k,rep[k,:]
    >> >>>             tp.compute(rep[k,:],enableLearn=False,computeInfOutput=True)
    >> >>>             tp.printStates(printPrevious=False, printLearnState=False)
    >> >>>             break
    >> >>>
    >> >>>
    >> >>> Here is the result I got:
    >> >>>
    >> >>> Time step: 0
    >> >>> input= 0.0 0.0 9 [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
    >> >>>
    >> >>> Inference Active state
    >> >>> 0000000001 0000000000
    >> >>> 0000000000 0000000000
    >> >>> Inference Predicted state
    >> >>> 0000000000 0000000000
    >> >>> 0000000001 0000000000
    >> >>> Time step: 1
    >> >>> input= 0.0628318530718 0.0627905195293 10 [0 0 0 0 0 0 0 0
    0 0 1 0 0 0
    >> >>> 0 0 0 0 0 0]
    >> >>>
    >> >>> Inference Active state
    >> >>> 0000000000 1000000000
    >> >>> 0000000000 0000000000
    >> >>> Inference Predicted state
    >> >>> 0000000000 0000000000
    >> >>> 0000000001 0000000000
    >> >>> Time step: 2
    >> >>> input= 0.125663706144 0.125333233564 11 [0 0 0 0 0 0 0 0 0
    0 0 1 0 0 0
    >> >>> 0 0 0 0 0]
    >> >>>
    >> >>> Inference Active state
    >> >>> 0000000000 0100000000
    >> >>> 0000000000 0100000000
    >> >>> Inference Predicted state
    >> >>> 0000000000 0000000000
    >> >>> 0000000000 1110000000
    >> >>> Time step: 3
    >> >>> input= 0.188495559215 0.187381314586 11 [0 0 0 0 0 0 0 0 0
    0 0 1 0 0 0
    >> >>> 0 0 0 0 0]
    >> >>>
    >> >>> Inference Active state
    >> >>> 0000000000 0000000000
    >> >>> 0000000000 0100000000
    >> >>> Inference Predicted state
    >> >>> 0000000000 0000000000
    >> >>> 0000000000 1110000000
    >> >>>
    >> >>> You can see that in time step 3, one cell (12th column) is shown
    >> >>> as being both in the active and the predictive state, which I
    >> >>> thought was impossible. (Its inference active state is 1 and its
    >> >>> inference predicted state is 1.)
    >> >>>
    >> >>> Also, if you look at time step 0, only one cell is in the
    >> >>> predictive state. However, the input that comes in at time step 1
    >> >>> activates the column to the right of this cell (the 11th slot is
    >> >>> "1"), so I would expect the 11th column to have both of its cells
    >> >>> active (the "unexpected input" state), but this does not happen.
    >> >>>
    >> >>> Can anyone explain this?
    >> >>>
    >> >>> John.
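    For anyone trying to reproduce the loop above without John's full script, the bucket encoding it relies on can be sketched self-containedly as below. The boundary arrays here are assumptions (John's actual ybot/ytop are not shown in the thread, and they evidently differ, since his output puts y = 0 in bucket 9 rather than 10):

```python
import math

nIntervals = 20
# Assumed: evenly spaced buckets over [-1, 1). John's real ybot/ytop
# arrays are not shown in the thread.
ybot = [-1.0 + 2.0 * k / nIntervals for k in range(nIntervals)]
ytop = [-1.0 + 2.0 * (k + 1) / nIntervals for k in range(nIntervals)]

def bucket_of(y):
    # Same membership test John uses: y >= ybot[k] and y < ytop[k]
    for k in range(nIntervals):
        if y >= ybot[k] and y < ytop[k]:
            return k
    return nIntervals - 1  # y == 1.0 lands in the top bucket

# rep[k, :] in John's code is the one-hot row for bucket k:
rep_row = [0] * nIntervals
rep_row[bucket_of(math.sin(0.0))] = 1
```

    Each time step thus feeds the TP a one-hot vector with a single active slot, which is why John's printed `rep[k,:]` rows contain exactly one 1.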
    >> >>>
    >> >>>
    >> >
    >> >
    >>
    >
    >
    >
    > --
    > With kind regards,
    >
    > David Ray
    > Java Solutions Architect
    >
    > Cortical.io
    > Sponsor of:  HTM.java
    >
    > [email protected] <mailto:[email protected]>
    > http://cortical.io




--
With kind regards,
David Ray
Java Solutions Architect
Cortical.io <http://cortical.io/>
Sponsor of: HTM.java <https://github.com/numenta/htm.java>
[email protected] <mailto:[email protected]>
http://cortical.io
