Hello all,

My understanding is that variability in what you refer to as "noise in the
time domain" is handled not by a region, but by the hierarchy. Consider the
following example:

You are at a Gnarls Barkley concert. He is performing "Crazy". You know the
song very well and are familiar both with the lyrics and the pace of it.
However, you have learned the song by listening to the same recording over
and over again.

Gnarls Barkley sings "I remember when, I remember, I remember when" and you
know that the sound to come is "I lost my mind". You also know when that
sound is supposed to enter your hearing. However, this being a live show,
Gnarls Barkley makes a pause slightly longer than the one you remember from
the recording. Here's what happens in your brain:

On some lower auditory level (A1?), the prediction of hearing the exact
sounds you were expecting is not correct. But on a layer slightly higher in
the hierarchy, there is awareness that you are at a Gnarls Barkley concert,
that he is singing Crazy and that the next words are "I lost my mind". As
such, the higher region, to which the exception is escalated, sends the same
sensory prediction back down. It will keep doing so until the anomaly has
persisted for so long that you start to suspect Gnarls Barkley has stopped
singing for some reason. So when Gnarls Barkley ends his pause, the sensory
input arriving at the lower region once again matches the prediction coming
down from the higher region.
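To make the idea concrete, here is a toy sketch of that escalation loop. This is purely illustrative, not how NuPIC/CLA actually implements temporal pooling; every name in it (higher_region, max_patience, etc.) is invented for the example. The higher region keeps re-issuing the same prediction while the mismatch persists, and only abandons it once the anomaly has lasted too long:

```python
# Toy model of a higher region re-sending its prediction during a pause.
# Illustrative only -- this is NOT NuPIC's mechanism; all names are made up.

def lower_region_matches(sensory_input, prediction):
    """Lower level: does the arriving sound match the prediction from above?"""
    return sensory_input == prediction

def higher_region(context, anomaly_count, max_patience=3):
    """Higher level: keep sending the same prediction while the anomaly
    persists; once it lasts too long, suspect the song has stopped."""
    if anomaly_count <= max_patience:
        return context["next_words"]
    return None

def listen(inputs, context):
    """Run the sensory stream through the two-level loop, timestep by timestep."""
    anomaly_count = 0
    results = []
    for sensory in inputs:
        prediction = higher_region(context, anomaly_count)
        if prediction is not None and lower_region_matches(sensory, prediction):
            results.append("match")
            anomaly_count = 0       # prediction confirmed, reset the anomaly
        else:
            results.append("anomaly")
            anomaly_count += 1      # escalate: mismatch persists another step
    return results

context = {"next_words": "I lost my mind"}
# Three timesteps of the unexpectedly long pause, then the expected words:
inputs = ["silence", "silence", "silence", "I lost my mind"]
print(listen(inputs, context))
# -> ['anomaly', 'anomaly', 'anomaly', 'match']
```

Because the higher region's prediction is held steady through the pause, the lower region snaps back to "match" the moment the expected input finally arrives, which is the behaviour described above.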

Cheers,

Sergey

P.S. Sorry if this response gets sent twice; my first attempt was sent from
an email address that is not subscribed to the list.


On Wed, Apr 16, 2014 at 12:40 PM, Julie Pitt <[email protected]> wrote:

> A NuPIC novice myself, I suspect that to account for noise in the duration
> of a particular pattern, you would want a coarser-grained aggregation of
> your input values when swarming. There are aggregation params in the model
> (although I can't remember ATM precisely what they're called). As long as
> the aggregation mostly covers the noise, I think this would work.
>
> Interestingly, in *How to Create a Mind*, Ray Kurzweil discusses the
> problem of variation in duration of an input pattern, using speech sounds
> as an example. His model is a bit different from CLA/HTM, and he models
> inputs with an expected range for the duration.
>
>
> On Tue, Apr 8, 2014 at 1:28 AM, Scheele, Manuel <
> [email protected]> wrote:
>
>> Hi all,
>>
>> I have been semi-successful with learning sequences and recognising noisy
>> versions of them using NuPIC.
>>
>> But now I am curious about something else. What if the noise is not in
>> the sequence values (i.e. [1, 3, 4] plus noise becomes [1.1, 2.9, 4.05])
>> but in the 'time domain' (i.e. [1, 1, 2, 2, 5, 5, 5] plus noise becomes
>> [1, 1, 1, 2, 5, 5]), meaning the transitions between sequence values stay
>> the same but the 'time' spent in each sequence state varies?
>>
>> This happens, for example, when we speak: we spend different amounts of
>> time in different parts of a word depending on the speed of our speech.
>> Does the TP deal with this? How can I see if NuPIC can deal with this sort
>> of problem and if so, how well?
>>
>> Regards,
>> Manuel
>> _______________________________________________
>> nupic mailing list
>> [email protected]
>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>>
>
>
