Hi Weiru,

I tried running the code with your sequence "ABCADE", and I did get the
correct prediction of "D" after the second "A". The output of the run is
copied below.

I wonder whether you are using the latest version: the code is now named
"hello_tm.py" instead of "hello_tp.py". I have also attached my copy of the
code, modified to use your sequence "ABCADE", to this email.
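To see why the one-cell-per-column hello_tp.py predicted both "B" and "D" after the second "A", here is a toy illustration in plain Python. This is not NuPIC code, and the temporal memory is not literally a lookup table, but the extra cells per column play much the same role as the extra symbol of context below:

```python
seq = "ABCADE"

# First-order model: predict from the current symbol alone.
firstOrder = {}
for prev, nxt in zip(seq, seq[1:]):
    firstOrder.setdefault(prev, set()).add(nxt)
print(sorted(firstOrder["A"]))          # ['B', 'D'] -- ambiguous, like hello_tp.py

# With one symbol of context -- roughly what multiple cells per column buy you.
secondOrder = {}
for i in range(2, len(seq)):
    secondOrder.setdefault((seq[i - 2], seq[i - 1]), set()).add(seq[i])
print(sorted(secondOrder[("C", "A")]))  # ['D'] -- unambiguous
```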

Best,

--
Yuwei Cui

Research Engineer, Numenta Inc.


-------- A -----------
Raw input vector : 1111111111 0000000000 0000000000 0000000000 0000000000

All the active and predicted cells:
active cells set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19])
predictive cells set([33, 34, 67, 36, 69, 38, 71, 31, 73, 75, 77, 78, 61,
20, 22, 24, 27, 29, 62, 65])
active segments set([0, 33, 2, 3, 4, 37, 6, 7, 8, 9, 32, 39, 34, 35, 1, 36,
38, 5, 30, 31])
winner cells set([0, 2, 5, 6, 8, 11, 13, 14, 16, 19])
Active columns:    1111111111 0000000000 0000000000 0000000000 0000000000
Predicted columns: 0000000000 1111111111 0000000000 1111111111 0000000000


-------- B -----------
Raw input vector : 0000000000 1111111111 0000000000 0000000000 0000000000

All the active and predicted cells:
active cells set([33, 34, 36, 38, 20, 22, 24, 27, 29, 31])
predictive cells set([41, 43, 44, 47, 48, 50, 53, 54, 56, 59])
active segments set([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
winner cells set([33, 34, 36, 38, 20, 22, 24, 27, 29, 31])
Active columns:    0000000000 1111111111 0000000000 0000000000 0000000000
Predicted columns: 0000000000 0000000000 1111111111 0000000000 0000000000


-------- C -----------
Raw input vector : 0000000000 0000000000 1111111111 0000000000 0000000000

All the active and predicted cells:
active cells set([41, 43, 44, 47, 48, 50, 53, 54, 56, 59])
predictive cells set([1, 3, 4, 7, 9, 10, 12, 15, 17, 18])
active segments set([20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
winner cells set([41, 43, 44, 47, 48, 50, 53, 54, 56, 59])
Active columns:    0000000000 0000000000 1111111111 0000000000 0000000000
Predicted columns: 1111111111 0000000000 0000000000 0000000000 0000000000


-------- A -----------
Raw input vector : 1111111111 0000000000 0000000000 0000000000 0000000000

All the active and predicted cells:
active cells set([1, 3, 4, 7, 9, 10, 12, 15, 17, 18])
predictive cells set([65, 67, 69, 71, 73, 75, 77, 78, 61, 62])
active segments set([32, 33, 34, 35, 36, 37, 38, 39, 30, 31])
winner cells set([1, 3, 4, 7, 9, 10, 12, 15, 17, 18])
Active columns:    1111111111 0000000000 0000000000 0000000000 0000000000
Predicted columns: 0000000000 0000000000 0000000000 1111111111 0000000000


-------- D -----------
Raw input vector : 0000000000 0000000000 0000000000 1111111111 0000000000

All the active and predicted cells:
active cells set([65, 67, 69, 71, 73, 75, 77, 78, 61, 62])
predictive cells set([97, 99, 81, 82, 84, 86, 89, 91, 93, 94])
active segments set([40, 41, 42, 43, 44, 45, 46, 47, 48, 49])
winner cells set([65, 67, 69, 71, 73, 75, 77, 78, 61, 62])
Active columns:    0000000000 0000000000 0000000000 1111111111 0000000000
Predicted columns: 0000000000 0000000000 0000000000 0000000000 1111111111


-------- E -----------
Raw input vector : 0000000000 0000000000 0000000000 0000000000 1111111111

All the active and predicted cells:
active cells set([97, 99, 81, 82, 84, 86, 89, 91, 93, 94])
predictive cells set([])
active segments set([])
winner cells set([97, 99, 81, 82, 84, 86, 89, 91, 93, 94])
Active columns:    0000000000 0000000000 0000000000 0000000000 1111111111
Predicted columns: 0000000000 0000000000 0000000000 0000000000 0000000000

--
Yuwei Cui

Research Engineer, Numenta Inc.

Homepage: http://terpconnect.umd.edu/~ywcui/

LinkedIn: https://www.linkedin.com/pub/yuwei-cui/1b/400/866

On Sat, Nov 7, 2015 at 2:34 AM, Weiru Zeng <[email protected]> wrote:

> Hello NuPIC:
> Today I ran hello_tp.py and it performed well. To test the TP's
> capability of relating context, I changed the sequence "ABCDE" to
> "ABCADE" (without changing the TP's parameters) and ran it again. You can
> see the output of this run below. The most important part is the one I
> marked in red: you can easily see that the prediction after "A" is not "D",
> but rather: 0000000000 1111111111 0000000000 1111111111 0000000000.
> So I want to know how I can make the prediction more accurate, in other
> words, how to make the TP relate the context ("ABCA"). Should I change
> some parameters of the TP() function, or something else?
> Thank you in advance!
>
>
> /usr/bin/python2.7 /home/megart/下载/mynupic/test/hello_tp.py
>
> This program shows how to access the Temporal Pooler directly by
> demonstrating
> how to create a TP instance, train it with vectors, get predictions, and
> inspect
> the state.
>
> The code here runs a very simple version of sequence learning, with one
> cell per column. The TP is trained with the simple sequence A->B->C->D->E
>
> HOMEWORK: once you have understood exactly what is going on here, try
> changing
> cellsPerColumn to 4. What is the difference between one cell per column
> and 4
> cells per column?
>
> PLEASE READ THROUGH THE CODE COMMENTS - THEY EXPLAIN THE OUTPUT IN DETAIL
>
>
>
>
> -------- A -----------
> Raw input vector
> 1111111111 0000000000 0000000000 0000000000 0000000000
>
> All the active and predicted cells:
>
> Inference Active state
> 1111111111 0000000000 0000000000 0000000000 0000000000
> 0000000000 0000000000 0000000000 0000000000 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 0000000000 1111111111 0000000000 0000000000 0000000000
>
>
> The following columns are predicted by the temporal pooler. This
> should correspond to columns in the *next* item in the sequence.
> [10 11 12 13 14 15 16 17 18 19]
>
>
> -------- B -----------
> Raw input vector
> 0000000000 1111111111 0000000000 0000000000 0000000000
>
> All the active and predicted cells:
>
> Inference Active state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 0000000000 1111111111 0000000000 0000000000 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 0000000000 0000000000 1111111111 0000000000 0000000000
>
>
> The following columns are predicted by the temporal pooler. This
> should correspond to columns in the *next* item in the sequence.
> [20 21 22 23 24 25 26 27 28 29]
>
>
> -------- C -----------
> Raw input vector
> 0000000000 0000000000 1111111111 0000000000 0000000000
>
> All the active and predicted cells:
>
> Inference Active state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 0000000000 0000000000 1111111111 0000000000 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 1111111111 0000000000 0000000000 0000000000 0000000000
>
>
> The following columns are predicted by the temporal pooler. This
> should correspond to columns in the *next* item in the sequence.
> [0 1 2 3 4 5 6 7 8 9]
>
>
> -------- A -----------
> Raw input vector
> 1111111111 0000000000 0000000000 0000000000 0000000000
>
> All the active and predicted cells:
>
> Inference Active state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 1111111111 0000000000 0000000000 0000000000 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 0000000000 1111111111 0000000000 1111111111 0000000000
>
>
> The following columns are predicted by the temporal pooler. This
> should correspond to columns in the *next* item in the sequence.
> [10 11 12 13 14 15 16 17 18 19 30 31 32 33 34 35 36 37 38 39]
>
>
> -------- D -----------
> Raw input vector
> 0000000000 0000000000 0000000000 1111111111 0000000000
>
> All the active and predicted cells:
>
> Inference Active state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 0000000000 0000000000 0000000000 1111111111 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 0000000000 0000000000 0000000000 0000000000 1111111111
>
>
> The following columns are predicted by the temporal pooler. This
> should correspond to columns in the *next* item in the sequence.
> [40 41 42 43 44 45 46 47 48 49]
>
>
> -------- E -----------
> Raw input vector
> 0000000000 0000000000 0000000000 0000000000 1111111111
>
> All the active and predicted cells:
>
> Inference Active state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 0000000000 0000000000 0000000000 0000000000 1111111111
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> 0000000000 0000000000 0000000000 0000000000 0000000000
>
>
> The following columns are predicted by the temporal pooler. This
> should correspond to columns in the *next* item in the sequence.
> []
>
> Process finished with exit code 0
>
> Weiru Zeng
>
#!/usr/bin/env python
# ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc.  Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
#
# You should have received a copy of the GNU Affero Public License
# along with this program.  If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------


print """
This program shows how to access the Temporal Memory directly by demonstrating
how to create a TM instance, train it with vectors, get predictions, and
inspect the state.

The code here runs a very simple version of sequence learning, with two
cells per column. The TM is trained with the simple sequence A->B->C->A->D->E

HOMEWORK: once you have understood exactly what is going on here, try changing
cellsPerColumn to 4. What is the difference between one cell per column and 4
cells per column?

PLEASE READ THROUGH THE CODE COMMENTS - THEY EXPLAIN THE OUTPUT IN DETAIL

"""

# Can't live without numpy
import numpy

# izip for maximum efficiency
from itertools import izip as zip, count

# Python implementation of Temporal Memory

from nupic.research.temporal_memory import TemporalMemory as TM

# FastTemporalMemory Uses C++ Connections data structure for optimization.
# Uncomment the below line to use FTM and create FTM instance instead of TM.

# from nupic.research.fast_temporal_memory import FastTemporalMemory as FTM


# Utility routine for printing the input vector
def formatRow(x):
  s = ''
  for c in range(len(x)):
    if c > 0 and c % 10 == 0:
      s += ' '
    s += str(x[c])
  s += ' '
  return s


# Step 1: create Temporal Memory instance with appropriate parameters

tm = TM(columnDimensions = (50,),
        cellsPerColumn=2,
        initialPermanence=0.5,
        connectedPermanence=0.5,
        minThreshold=10,
        maxNewSynapseCount=20,
        permanenceIncrement=0.1,
        permanenceDecrement=0.0,
        activationThreshold=8,
        )


# Step 2: create input vectors to feed to the temporal memory. Each input vector
# must be numberOfColumns wide. Here we create a simple sequence of 6 vectors
# representing the sequence A -> B -> C -> A -> D -> E
x = numpy.zeros((6, tm.numberOfColumns()), dtype="uint32")
x[0, 0:10] = 1    # Input SDR representing "A", corresponding to columns 0-9
x[1, 10:20] = 1   # Input SDR representing "B", corresponding to columns 10-19
x[2, 20:30] = 1   # Input SDR representing "C", corresponding to columns 20-29
x[3, 0:10] = 1    # Input SDR representing "A", corresponding to columns 0-9
x[4, 30:40] = 1   # Input SDR representing "D", corresponding to columns 30-39
x[5, 40:50] = 1   # Input SDR representing "E", corresponding to columns 40-49
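A quick sanity check, worth noting for this thread: the two "A" rows are bit-for-bit identical SDRs, so any disambiguation of what follows them has to come from temporal context, not from the input itself. A minimal standalone check (plain numpy, rebuilding the same x as above):

```python
import numpy

x = numpy.zeros((6, 50), dtype="uint32")
x[0, 0:10] = 1   # "A"
x[1, 10:20] = 1  # "B"
x[2, 20:30] = 1  # "C"
x[3, 0:10] = 1   # "A" again -- the very same SDR as x[0]
x[4, 30:40] = 1  # "D"
x[5, 40:50] = 1  # "E"

print(numpy.array_equal(x[0], x[3]))  # True: the TM sees identical inputs both times
```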


# Step 3: send this simple sequence to the temporal memory for learning
# We repeat the sequence 10 times
for i in range(10):

  # Send each letter in the sequence in order
  for j in range(6):
    activeColumns = set([k for k, v in zip(count(), x[j]) if v == 1])

    # The compute method performs one step of learning and/or inference. Note:
    # here we just perform learning but you can perform prediction/inference and
    # learning in the same step if you want (online learning).
    tm.compute(activeColumns, learn = True)

    # The following print statements can be ignored.
    # Useful for tracing internal states
    print("active cells " + str(tm.activeCells))
    print("predictive cells " + str(tm.predictiveCells))
    print("winner cells " + str(tm.winnerCells))
    print("active segments " + str(tm.activeSegments))

  # The reset command tells the TM that a sequence just ended and essentially
  # zeros out all the states. It is not strictly necessary but it's a bit
  # messier without resets, and the TM learns quicker with resets.
  tm.reset()


#######################################################################
#
# Step 4: send the same sequence of vectors and look at predictions made by
# temporal memory
for j in range(6):
  print "\n\n--------","ABCADE"[j],"-----------"
  print "Raw input vector : " + formatRow(x[j])
  activeColumns = set([k for k, v in zip(count(), x[j]) if v == 1])
  # Send each vector to the TM, with learning turned off
  tm.compute(activeColumns, learn = False)
  
  # The following print statements print out the active cells, predictive
  # cells, active segments and winner cells.
  #
  # What you should notice is that the columns where active state is 1
  # represent the SDR for the current input pattern and the columns where
  # predicted state is 1 represent the SDR for the next expected pattern
  print "\nAll the active and predicted cells:"
  
  print("active cells " + str(tm.activeCells))
  print("predictive cells "+ str(tm.predictiveCells))
  print("active segments "+ str(tm.activeSegments))

  print("winner cells " + str(tm.winnerCells))

  activeColumnIndices = [tm.columnForCell(i) for i in tm.activeCells]
  predictedColumnIndices = [tm.columnForCell(i) for i in tm.predictiveCells]


  # Reconstruct the active and predicted columns, with '1' marking an active
  # (or predicted) column and '0' an inactive one.

  actColState = ['1' if i in activeColumnIndices else '0' for i in range(tm.numberOfColumns())]
  actColStr = "".join(actColState)
  predColState = ['1' if i in predictedColumnIndices else '0' for i in range(tm.numberOfColumns())]
  predColStr = "".join(predColState)

  # For convenience the cells are grouped
  # 10 at a time. When there are multiple cells per column the printout
  # is arranged so the cells in a column are stacked together
  print "Active columns:    " + formatRow(actColStr)
  print "Predicted columns: " + formatRow(predColStr)

  # tm.predictiveCells is a flat set of cell indices. A column counts as
  # predicted if ANY of its cells is predictive, i.e. an OR across the cells
  # in each column; the columnForCell conversion above computes exactly that.
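As a sketch of the column-wise OR described in the comment above, the same result can be computed as a matrix operation in plain numpy (no NuPIC needed). It reuses the predictive-cell set printed after the second "A" in the run above, and assumes the TemporalMemory layout where cell i belongs to column i // cellsPerColumn:

```python
import numpy

numColumns = 50
cellsPerColumn = 2
# Predictive cells printed after the second "A" in the run above
predictiveCells = set([61, 62, 65, 67, 69, 71, 73, 75, 77, 78])

# cellState[c][i] is the state of the i'th cell in the c'th column
cellState = numpy.zeros((numColumns, cellsPerColumn), dtype="uint32")
for cell in predictiveCells:
    cellState[cell // cellsPerColumn, cell % cellsPerColumn] = 1

# A column is predicted if ANY of its cells is predictive: OR == max along axis 1
predictedColumns = cellState.max(axis=1)
print(numpy.nonzero(predictedColumns)[0])  # columns 30-39, i.e. the SDR for "D"
```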
