Hi Arthur, I’m happy to help.
What is the granularity of the raw data, and how do you combine it over each
15-second interval into one value? If you're taking the mean over many events,
it's possible you're washing out spikes in the data. Taking the max over each
15-second period may give your model better information. We (specifically
Yuwei) have been working on an algorithm to automatically find the best data
aggregation. It's still experimental code in our research repo [1], but it
should be straightforward for you to try out.
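To make the mean-vs-max point concrete, here's a minimal sketch (my own toy code, not from your scripts or our repo) assuming your raw data is a list of (timestamp-in-seconds, value) pairs:

```python
from collections import defaultdict

def aggregate(records, window=15, how=max):
    """Bucket (timestamp_seconds, value) records into fixed-size windows
    and reduce each bucket with `how`. Using max preserves spikes that a
    mean over many events would wash out."""
    buckets = defaultdict(list)
    for ts, value in records:
        buckets[int(ts) // window * window].append(value)
    return sorted((start, how(vals)) for start, vals in buckets.items())

# Toy data: a spike at t=7 inside the first 15-second window.
records = [(0, 1.0), (7, 9.0), (14, 1.0), (16, 1.2), (29, 1.4)]
print(aggregate(records, how=max))
# -> [(0, 9.0), (15, 1.4)]  -- the spike survives
print(aggregate(records, how=lambda v: sum(v) / len(v)))
# -> the mean dilutes the spike to ~3.67
```

With the mean, the spike's effect shrinks as the number of events per window grows; with the max it reaches the model at full strength.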

It looks like you're using anomaly likelihood correctly in NuPICModel.py, but 
the learning_amount part is odd: it appears to be unused, and the if-statements 
around it are redundant. In AnomalyTry.py I see you're using the log likelihood 
(good). Have you tried values other than 0.5 for the likelihood_cutoff? Also, 
you can explicitly set the learning period by initializing the 
AnomalyLikelihood instance with claLearningPeriod [2].
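On the cutoff: the log transform compresses the likelihood scale a lot, so 0.5 is a very strict threshold. Here's a standalone sketch of the transform (reproducing, from memory, the formula in nupic's computeLogLikelihood at [2]; double-check against the source) so you can see what different cutoffs correspond to:

```python
import math

def log_likelihood(likelihood):
    """Map an anomaly likelihood in [0, 1] to a log scale, approximately
    as nupic's AnomalyLikelihood.computeLogLikelihood does, so values
    very close to 1.0 spread out instead of saturating."""
    return math.log(1.0000000001 - likelihood) / -23.02585084720009

# Sweep some raw likelihoods against a log-likelihood cutoff of 0.5.
for raw in (0.5, 0.99, 0.9999, 0.999999):
    ll = log_likelihood(raw)
    print(raw, round(ll, 3), ll > 0.5)
```

A log-likelihood of 0.5 corresponds to a raw likelihood of roughly 0.99999, so even a modest reduction in the cutoff (say to 0.4, i.e. raw ~0.9999) will flag noticeably more points. Sweeping the cutoff against your labeled anomalies is a quick way to pick one.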

[1] https://github.com/numenta/nupic.research/blob/master/projects/wavelet_dataAggregation/param_finder_runner.py
[2] https://github.com/numenta/nupic/blob/master/src/nupic/algorithms/anomaly_likelihood.py#L84

Best,
Alex

--
Alexander Lavin
Software Engineer
Numenta
