I am not using KNN; I am routing a standard neural network through the
HTM's active cells (so only small subnetworks are active at a time). That is
probably why it is working better for me.
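A minimal sketch of that routing idea, assuming a plain NumPy setup (the names, layer sizes, and the 16-cell active mask are all illustrative, not taken from any HTM library):

```python
import numpy as np

# Sketch: gate a dense hidden layer with a binary "active cells" mask so
# that only a small subnetwork is active for any given input.

rng = np.random.default_rng(0)

n_in, n_hidden = 784, 256
W = rng.standard_normal((n_hidden, n_in)) * 0.01
b = np.zeros(n_hidden)

def forward(x, active_mask):
    """Hidden activations, with units outside the active set zeroed out."""
    h = np.maximum(W @ x + b, 0.0)   # ReLU
    return h * active_mask           # gate: inactive cells contribute nothing

x = rng.random(n_in)
active_mask = np.zeros(n_hidden)
# Suppose the HTM marks 16 of 256 cells active (~6% sparsity).
active_mask[rng.choice(n_hidden, size=16, replace=False)] = 1.0

h = forward(x, active_mask)
```

Only the gated units can be nonzero, so both the forward and backward passes effectively touch a small subnetwork per input.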


On Sun, Jan 18, 2015 at 6:20 PM, <[email protected]> wrote:

> Hi, Eric.
>
> 95% in 5 minutes is pretty good for just an SP + KNN. Did you keep SP
> learning on, or turn it off and only use the SP to get the SDRs?
> Recently I changed the training order, i.e. the order of the training
> sequences. Before, I just followed the order of the MNIST dataset; now
> the training sequences go from 0 to 9. I got 97% on a small dataset
> (6000/1000). I am still wondering whether there are errors in the code I
> just added, because all of the errors happen on inputs 0 and 1. So now I
> am training on the full-size dataset with both SP learning and
> classifier learning on, to see what happens. With the current
> parameters, training is really slow, and I am still waiting for the
> result.
>
> Actually, I'm thinking about switching to a non-Python engine; I am
> currently using the Python NuPIC.
>
> The world record of 0.23% error uses deep CNNs as the basic unit, then
> arranges them in column structures... Anyway, the result is impressive.
>
>
> Which parameters did you use for the SP?
>
>
>
> Thank you.
> An Qi
> Tokyo University of Agriculture and Technology - Nakagawa Laboratory
> 2-24-16 Naka-cho, Koganei-shi, Tokyo 184-8588
> [email protected]
>
>
> On Sun, 18 Jan 2015 12:05:42 -0500
>  Eric Laukien <[email protected]> wrote:
>
>> Hmm, I got 95% after five minutes. Perhaps you can try CHTM on this task.
>> Especially with the latest spatial pooler, which is far more efficient
>> than
>> what I had before, it should work very well.
>> Also make sure to use multiple layers, and make the NN pass through all of
>> them.
>> The world record is held by Yann LeCun using "sparse codes", which are
>> SDRs.
>> So the approach definitely works.
>>
>> Regards,
>> Eric Laukien
>>
>>
>> On Sun, Jan 18, 2015 at 5:06 AM, Fergal Byrne <
>> [email protected]>
>> wrote:
>>
>>> Hi An Qi,
>>>
>>> That's very interesting, thanks for sharing. Could you place your code
>>> and
>>> setup on Github so we can take a look at exactly what you did? It seems
>>> at
>>> first glance that you're just using SP, which is perhaps the least
>>> powerful
>>> part of HTM. I think a saccading system which also does Temporal Pooling
>>> (which we haven't quite got yet) would be able to do a much better job on
>>> this kind of task, but 89.6% is still a very good start for a plain
>>> SP-based approach.
>>>
>>> Very well done! Hopefully we can help you do even better.
>>>
>>> Regards,
>>>
>>> Fergal Byrne
>>>
>>> On Sun, Jan 18, 2015 at 7:00 AM, <[email protected]> wrote:
>>>
>>>> Hello.
>>>>
>>>> Sorry about the last email. Thanks to the rich formatting :( ... I have
>>>> to type it again.
>>>>
>>>> Recently I got the result of the test. I followed the source code and
>>>> built a Spatial Pooler + KNN classifier, then extracted images from the
>>>> MNIST dataset (train/test: 60000/10000) and passed them to the model. I
>>>> tried different parameters (using a small dataset, train/test
>>>> 6000/1000); the best recognition result was about 87.6%. After that I
>>>> tried the full-size MNIST dataset, and the result was 89.6%. Currently
>>>> this is the best result I have gotten.
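For reference, the KNN step on binary SDRs can be sketched in plain NumPy. This is an overlap-based nearest neighbor, not NuPIC's actual `KNNClassifier`, and the toy SDRs below are made up:

```python
import numpy as np

# Classify a query SDR by the largest bit overlap with stored training SDRs.

def knn_predict(train_sdrs, train_labels, query_sdr, k=1):
    """Majority label among the k training SDRs sharing the most
    active bits with the query."""
    overlaps = train_sdrs @ query_sdr            # dot product = bit overlap
    top_k = np.argsort(overlaps)[-k:]
    labels, counts = np.unique(train_labels[top_k], return_counts=True)
    return labels[np.argmax(counts)]

# Toy data: three stored SDRs over 16 bits, two classes.
train_sdrs = np.array([
    [1,1,1,0,0,0,0,0, 0,0,0,0,0,0,0,0],
    [1,1,0,1,0,0,0,0, 0,0,0,0,0,0,0,0],
    [0,0,0,0,0,0,0,0, 1,1,1,0,0,0,0,0],
])
train_labels = np.array([0, 0, 1])

query = np.array([1,1,1,1,0,0,0,0, 0,0,0,0,0,0,0,0])
print(knn_predict(train_sdrs, train_labels, query))  # overlaps 3,3,0 -> class 0
```

With highly sparse SDRs, raw overlap works as a similarity measure because two unrelated SDRs share almost no bits.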
>>>>
>>>> Here are the statistics. They show the error counts for each digit:
>>>> the rows are the input digits and the columns the recognition results.
>>>> Most of the "7"s are recognized as "9"s. It seems the SDR from the SP
>>>> is still not good enough for the classifier.
>>>>
>>>> I found something interesting. When I set "inputDimensions" and
>>>> "columnDimensions" to "784" and "1024", the result is around 68%. If I
>>>> use "(28,28)" and "(32,32)" and keep everything else the same, the
>>>> result is around 82%. That is a big difference; it seems the array
>>>> shape affects the SP a lot.
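One plausible explanation (my assumption, sketched outside NuPIC): with 2D dimensions, a column's potential pool for a given radius is a square image patch, while with a flat 784-vector the same radius is a 1D window that ignores the image's row structure:

```python
import numpy as np

# Compare the inputs reachable by one column under 2D vs. flat topology.

def potential_pool_2d(col_rc, input_shape, radius):
    """Boolean mask of inputs within `radius` of (row, col) in 2D."""
    rows, cols = np.indices(input_shape)
    r, c = col_rc
    return (np.abs(rows - r) <= radius) & (np.abs(cols - c) <= radius)

def potential_pool_1d(center, n_inputs, radius):
    """Same radius over a flat vector: a 1D window of indices."""
    idx = np.arange(n_inputs)
    return np.abs(idx - center) <= radius

# Center column mapped to pixel (14, 14) of a 28x28 image, radius 2.
mask2d = potential_pool_2d((14, 14), (28, 28), radius=2)  # 5x5 patch
mask1d = potential_pool_1d(14 * 28 + 14, 784, radius=2)   # 5 pixels in one row
print(int(mask2d.sum()), int(mask1d.sum()))
```

So under 2D topology the column sees a coherent 25-pixel patch, while the flat layout gives it only a 5-pixel strip of a single row, which would plausibly explain the large accuracy gap.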
>>>>
>>>> Did anyone get a better result? Does anyone have suggestions about
>>>> the parameters or anything else?
>>>>
>>>> Thank you.
>>>> An Qi
>>>> Tokyo University of Agriculture and Technology - Nakagawa Laboratory
>>>> 2-24-16 Naka-cho, Koganei-shi, Tokyo 184-8588
>>>> [email protected]
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> Fergal Byrne, Brenter IT
>>>
>>> http://inbits.com - Better Living through Thoughtful Technology
>>> http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne
>>>
>>> Founder of Clortex: HTM in Clojure -
>>> https://github.com/nupic-community/clortex
>>>
>>> Author, Real Machine Intelligence with Clortex and NuPIC
>>> Read for free or buy the book at https://leanpub.com/realsmartmachines
>>>
>>> Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014:
>>> http://euroclojure.com/2014/
>>> and at LambdaJam Chicago, July 2014: http://www.lambdajam.com
>>>
>>> e:[email protected] t:+353 83 4214179
>>> Join the quest for Machine Intelligence at http://numenta.org
>>> Formerly of Adnet [email protected] http://www.adnet.ie
>>>
>>>
>
>
