I believe the classifier calculates a moving average for each "bucket". If the SDRs coming out of the HTM are constant, the bucket will be constant, so it might just predict a moving average of the input. Could this be what's happening here? If so, it suggests the HTM parameters are off somewhere. The best way to check is to directly examine the SDRs coming out of the SP and TP and make sure you are getting "good" SDRs.
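A minimal sketch of the kind of check described above, assuming you have captured the SP/TP output as arrays of active-column indices (the names `sdrs` and `num_columns` here are illustrative, not NuPIC API):

```python
import numpy as np

def sdr_stats(sdrs, num_columns):
    """Report per-SDR sparsity and overlap between consecutive SDRs."""
    sparsities = [len(s) / float(num_columns) for s in sdrs]
    overlaps = [
        len(np.intersect1d(a, b)) / float(max(len(a), 1))
        for a, b in zip(sdrs, sdrs[1:])
    ]
    return sparsities, overlaps

# SP output is typically ~2% sparse; if consecutive SDRs are nearly
# identical (overlap ~1.0) while the input is varying, the SDRs are
# effectively constant and the classifier will just track an average.
sdrs = [np.array([3, 17, 42]), np.array([3, 17, 42]), np.array([5, 17, 99])]
sparsities, overlaps = sdr_stats(sdrs, num_columns=2048)
```

If the overlaps stay near 1.0 across a changing input stream, that would point to the constant-bucket situation described above.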
--Subutai

On Tue, Feb 3, 2015 at 7:29 AM, Nicholas Mitri <[email protected]> wrote:

> Thanks David,
>
> I’ve been going through the code of the low-level components and I can’t
> find any piece of code that explains that behavior. I’d agree that it must
> be in the OPF, but I’m seeing evidence of it even in my non-OPF
> implementation.
>
> At this point, I’m really wanting the feedthrough to be the issue because
> it’s the one thing I haven’t attempted to tweak to get better results.
> Thanks for your input!
>
> Best,
> Nick
>
>
> On Feb 3, 2015, at 5:20 PM, cogmission1 . <[email protected]> wrote:
>
> Nicholas,
>
> I can’t really give a definitive answer to your question of whether "...
> [HTM] simply passes through the last input it’s seen and uses that as a
> prediction..." But I *can* say definitely that this isn’t a property of
> any of the lower-level components or algorithms (i.e. encoder, spatial
> pooler, temporal memory, CLA classifier).
>
> This means that if this exists, it must be a property of the "containing"
> infrastructure such as a Region/RegionImpl or some property of an OPF
> container (if you’re using that).
>
> Just thought that clarification might be helpful if you attempt to track
> this down yourself...
>
> David
>
> On Tue, Feb 3, 2015 at 8:54 AM, Nicholas Mitri <[email protected]> wrote:
>
>> Hey all,
>>
>> As far as I’ve gathered, when HTM can’t make a proper prediction, it
>> simply passes through the last input it’s seen and uses that as a
>> prediction (which is why plots comparing predictions with actual values
>> usually start out with a lag).
>>
>> The problem with this approach is that when using multiple HTM regions,
>> each trained on its own data in a classification setup, a region that is
>> totally confused by the sequence it’s seeing (since it never learned it)
>> will end up outputting predictions that are just delayed inputs, and the
>> final prediction sequence will have a similarity to the original sequence
>> that you wouldn’t expect from an untrained region.
>>
>> More concretely, in my application, I’m passing sequences of 2D
>> coordinates that trace a number. Even though I only train a single region
>> to produce low anomaly for that number, ALL other regions output a
>> predicted sequence that looks similar, i.e. the region assigned to
>> recognize ‘2’ outputs a trace that looks like a ‘1’ when a ‘1’ sequence
>> is shown to it, even though it’s never seen a ‘1’!! So do all other
>> regions.
>>
>> I think this is what’s causing my classification results (based on
>> anomaly) to be so subpar. Is that a correct assessment of the
>> consequences of using this feed-through approach? If so, where exactly in
>> the code can I make a change to prevent HTM from doing it?
>>
>> Thank you,
>> Nicholas

--
*We find it hard to hear what another is saying because of how loudly "who
one is", speaks...*
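For reference, the anomaly-based classification Nicholas describes rests on the raw anomaly score: the fraction of currently active columns that the region did not predict on the previous step. A minimal sketch (the function name and set-based representation are illustrative, not the exact NuPIC internals):

```python
def anomaly_score(active_columns, predicted_columns):
    """Fraction of active columns that were not predicted.

    0.0 means every active column was predicted (familiar sequence);
    1.0 means none were (completely novel sequence).
    """
    active = set(active_columns)
    predicted = set(predicted_columns)
    if not active:
        return 0.0
    return len(active - predicted) / float(len(active))
```

Note that even a region scoring high anomaly can still emit a prediction each step, which is consistent with the feed-through behavior discussed in this thread: the anomaly score and the predicted output are separate signals.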
