On Apr 6, 2012, at 02:56 , Andreas Mueller wrote:

> On 04/05/2012 11:17 PM, Vlad Niculae wrote:
>> I would like to see a reproduction of the standard neural net digits example:
>> 
>> http://ufldl.stanford.edu/wiki/images/8/84/SelfTaughtFeatures.png
>> 
> That looks like the weights of an autoencoder, right?
> Autoencoders are not part of the plan as far as I was concerned.
> I don't think filters in an MLP will look like this "magically" unless you
> tune your regularization quite carefully.

I remember that in last year's online machine learning class, simply training a
neural net with one hidden layer for digit classification and visualizing the
hidden-layer weights produced something very similar to that; I was quite
surprised. I'm curious now, so I'll look for that code.
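
For reference, a minimal sketch of that kind of visualization (an assumption on
my part: it uses a recent scikit-learn with MLPClassifier, which didn't exist at
the time of this thread, and the layer size / alpha values are just illustrative):
train a one-hidden-layer net on the digits dataset and plot each hidden unit's
incoming weights as an 8x8 image.

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier

    # 8x8 grayscale digits, 64 input features
    X, y = load_digits(return_X_y=True)
    X = X / 16.0  # scale pixel values to [0, 1]

    # One hidden layer; the L2 penalty (alpha) strongly affects how
    # "stroke-like" the learned filters end up looking.
    clf = MLPClassifier(hidden_layer_sizes=(25,), alpha=1e-4,
                        max_iter=400, random_state=0)
    clf.fit(X, y)

    # clf.coefs_[0] has shape (64, 25): one column of input weights
    # per hidden unit; reshape each column back to an 8x8 image.
    fig, axes = plt.subplots(5, 5, figsize=(6, 6))
    for weights, ax in zip(clf.coefs_[0].T, axes.ravel()):
        ax.imshow(weights.reshape(8, 8), cmap="gray")
        ax.set_xticks(())
        ax.set_yticks(())
    plt.show()

Whether the resulting filters look like the ones in the linked image will depend
a lot on the regularization, as Andreas points out.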


