Hi guys,

Thanks for the input. I was starting to worry that nobody would try to answer my 
question. Here are my thoughts on your responses:

Julie says:
[..], I suspect that to account for noise in the duration of a particular 
pattern, you would want to have a coarser grained aggregation of your input 
values when swarming. There are aggregation params in the model (although I 
can't remember ATM precisely what they're called). As long as the aggregation 
is mostly covering the noise, I think this would work.

This is an interesting solution. I will think about it, but at the moment I see 
the problem of finding the right aggregation value. The 'noise' may stall the 
sequence for 10 steps (in which case aggregating over 10 steps would remove all 
of the noise) or for just 1 step (in which case aggregating over 10 steps would 
lose information). I am still thinking of the input as a speech signal with 
faster and slower speakers.
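To make that trade-off concrete, here is a rough sketch of where such aggregation settings would go. The field names follow NuPIC's OPF model-params convention ("aggregationInfo"); the concrete values are placeholders, not a recommendation:

```python
# Hedged sketch: in the OPF, aggregation is configured in the model-params
# dict under "aggregationInfo". Records are bucketed into windows (here 10
# seconds) and combined per field before encoding. A wide bucket absorbs a
# 10-step stall but blurs away 1-step detail -- exactly the dilemma above.
MODEL_PARAMS = {
    "aggregationInfo": {
        "fields": [("value", "mean")],  # how to combine records in a bucket
        "years": 0, "months": 0, "weeks": 0, "days": 0,
        "hours": 0, "minutes": 0,
        "seconds": 10,                  # bucket width; tune to the noise scale
        "milliseconds": 0, "microseconds": 0,
    },
}
```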

Sergey says:
My understanding is that variability in what you refer to as "noise in the time 
domain" is handled not by a region, but by the hierarchy.

This was my thought as well. The problem is that the current NuPIC models don't 
feature hierarchy; CLAModel, for example, only contains an SP, a TP and a 
classifier. I have been meaning to find out how to build a proper network with 
many layers of CLAModel instances, but haven't had the time. I am wondering how 
the current NuPIC project would deal with this, or whether it can do it at all.

BTW, if someone knows how to build larger networks or is interested in building 
one with me, let me know ;)
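For reference, here is my current (untested) understanding of how NuPIC's Network API would wire up such a hierarchy. The region type names are the ones I believe ship with NuPIC; the parameter dicts are placeholders, so treat this as a sketch rather than working code:

```python
# Untested sketch: chaining two SP+TP levels with the Network API, going
# beyond the single SP/TP pipeline that CLAModel provides. Parameter dicts
# (sensor_params, sp_params, tp_params) are placeholders.
import json

def build_two_level_network(sensor_params, sp_params, tp_params):
    """Wire sensor -> SP -> TP -> SP -> TP into a two-level hierarchy."""
    from nupic.engine import Network  # deferred so the sketch imports cleanly

    net = Network()
    net.addRegion("sensor", "py.RecordSensor", json.dumps(sensor_params))
    # Level 1
    net.addRegion("sp1", "py.SPRegion", json.dumps(sp_params))
    net.addRegion("tp1", "py.TPRegion", json.dumps(tp_params))
    # Level 2 takes level 1's output as its input
    net.addRegion("sp2", "py.SPRegion", json.dumps(sp_params))
    net.addRegion("tp2", "py.TPRegion", json.dumps(tp_params))

    net.link("sensor", "sp1", "UniformLink", "")
    net.link("sp1", "tp1", "UniformLink", "")
    net.link("tp1", "sp2", "UniformLink", "")
    net.link("sp2", "tp2", "UniformLink", "")
    net.initialize()
    return net
```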

Regards,
Manuel



From: nupic [mailto:[email protected]] On Behalf Of Sergey 
Alexashenko
Sent: 16 April 2014 22:08
To: NuPIC general mailing list.
Subject: Re: [nupic-discuss] 'Noise' in time

Hello all,

My understanding is that variability in what you refer to as "noise in the time 
domain" is handled not by a region, but by the hierarchy. Consider the 
following example:

You are at a Gnarls Barkley concert. He is performing "Crazy". You know the 
song very well and are familiar both with the lyrics and the pace of it. 
However, you have learned the song by listening to the same recording over and 
over again.

Gnarls Barkley sings "I remember when, I remember, I remember when" and you 
know that the sound to come is "I lost my mind". You also know when that sound 
is supposed to enter your hearing. However, this being a live show, Gnarls 
Barkley makes a pause slightly longer than the one you remember from the 
recording. Here's what happens in your brain:
On some lower auditory level (A1?), the prediction of hearing the exact sounds 
you were expecting fails. But a layer slightly higher in the hierarchy is aware 
that you are at a Gnarls Barkley concert, that he is singing "Crazy" and that 
the next words are "I lost my mind". The higher region, to which the exception 
is escalated, therefore sends down the same sensory prediction it was expecting 
a moment ago. It will keep doing so until the anomaly grows so large that you 
suspect Gnarls Barkley has stopped singing for some reason. So when Gnarls 
Barkley ends his pause, the lower region receives sensory input and a top-down 
prediction that match.
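In toy-code form (plain Python, nothing NuPIC-specific), the mechanism might look like this. The lower level predicts one step ahead from learned transitions; during an unexpected pause its prediction fails, so the higher level, which knows the song context, keeps re-issuing the same expectation until input resumes:

```python
# Toy sketch of the two-level idea described above. All names are made up
# for illustration; this is not how NuPIC represents predictions.
learned_next = {"I remember when": "I lost my mind"}  # lower-level transitions
context_prediction = "I lost my mind"                 # higher-level context

stream = ["I remember when", "<pause>", "<pause>", "I lost my mind"]
predictions = []
expected = None
for heard in stream:
    if heard in learned_next:          # lower level recognises the sound
        expected = learned_next[heard]
    # During the pause the lower level has no match, so the higher level
    # holds the prediction steady instead of treating it as a new sequence.
    predictions.append(expected or context_prediction)

# When the pause ends, the held prediction and the input agree.
assert predictions[-1] == stream[-1] == "I lost my mind"
```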

Cheers,

Sergey

P.S. Sorry if this response gets sent twice; my first response was sent from an 
email address that is not subscribed to the list.

On Wed, Apr 16, 2014 at 12:40 PM, Julie Pitt 
<[email protected]<mailto:[email protected]>> wrote:
A NuPIC novice myself, I suspect that to account for noise in the duration of a 
particular pattern, you would want to have a coarser grained aggregation of 
your input values when swarming. There are aggregation params in the model 
(although I can't remember ATM precisely what they're called). As long as the 
aggregation is mostly covering the noise, I think this would work.

Interestingly, in How to Create a Mind, Ray Kurzweil discusses the problem of 
variation in duration of an input pattern, using speech sounds as an example. 
His model is a bit different from CLA/HTM, and he models inputs with an 
expected range for the duration.

On Tue, Apr 8, 2014 at 1:28 AM, Scheele, Manuel 
<[email protected]<mailto:[email protected]>> wrote:
Hi all,

I have been semi-successful with learning sequences and recognising noisy 
versions of them using NuPIC.

But now I am curious about something else. What if the noise is not in the 
sequence values (i.e. [1, 3, 4] plus noise becomes [1.1, 2.9, 4.05]) but in the 
'time domain' (i.e. [1, 1, 2, 2, 5, 5, 5] plus noise becomes [1, 1, 1, 2, 5, 
5]), meaning that the transitions between sequence values stay the same but the 
'time' spent in each sequence state varies?
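A small sketch of what I mean: collapsing runs of repeated values shows that the clean and the time-warped sequence share exactly the same transition structure (plain Python, nothing NuPIC-specific):

```python
def transitions(seq):
    """Collapse runs of repeated values, keeping only the state changes."""
    out = []
    for v in seq:
        if not out or out[-1] != v:
            out.append(v)
    return out

clean = [1, 1, 2, 2, 5, 5, 5]
warped = [1, 1, 1, 2, 5, 5]   # same states, different dwell times
# Both reduce to the same transition sequence:
assert transitions(clean) == transitions(warped) == [1, 2, 5]
```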

This happens, for example, when we speak: we spend different amounts of time on 
different parts of a word depending on the speed of our speech. Does the TP 
deal with this? How can I see whether NuPIC can handle this sort of problem 
and, if so, how well?

Regards,
Manuel
_______________________________________________
nupic mailing list
[email protected]<mailto:[email protected]>
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org

