2011/11/4 Andreas Müller <[email protected]>:
> Are you using pure Python at the moment?
> Where can I find your code? And is the goal of your code to
> be included in the scikits?

My goal is to improve on somebody else's result and get a paper
published ;), but if the sklearn community can peer review and adopt
the code I use to obtain that result, I'd be more than happy.

This is more or less what I used:

https://github.com/larsmans/scikit-learn/tree/mlperceptron
Again, with weight vectors loaded by hand from a Matlab file, so there's no fit yet.

> I think it is necessary to have minibatch learning and so I think
> building that into the code from the beginning is good.

Alright.
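For concreteness, the minibatch iteration could look something like the sketch below. This is not sklearn API, just an illustrative shuffled-minibatch generator; the function name and signature are made up for this example.

```python
import numpy as np

def minibatches(X, y, batch_size=32, rng=None):
    """Yield shuffled (X_batch, y_batch) pairs covering the data once.

    Illustrative only: a fit loop would call this once per epoch and
    do one gradient step per batch.
    """
    rng = np.random.RandomState(0) if rng is None else rng
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        sl = idx[start:start + batch_size]
        yield X[sl], y[sl]
```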

>> Logistic activation functions seem fashionable; that's what Bishop and
>> other textbooks use. I'm not sure if there's a big difference, but it
>> seems to me that gradient computations might be slightly more
>> efficient (guesswork, I admit). We can always add a steepness
>> parameter later.
> In my personal experience, tanh works better. LeCun uses tanh ;)

That's always a good argument ;)
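On the gradient-efficiency point: both activations let you compute the derivative from the already-computed activation, so the cost difference is tiny. A quick sketch (function names are just for this example):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_deriv(a):
    # derivative in terms of the activation a = logistic(x)
    return a * (1.0 - a)

def tanh_deriv(a):
    # derivative in terms of the activation a = tanh(x)
    return 1.0 - a ** 2
```

Either way, backprop never needs to re-evaluate the activation function to get its derivative.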

> RPROP is very easy to implement. I use it in my lab all the time.
> I have no personal experience with IRPROP-. How is that different
> from IRPROP? What is RPROP+? Can you give me references?

http://sci2s.ugr.es/keel/pdf/algorithm/articulo/2003-Neuro-Igel-IRprop+.pdf

The difference between RPROP+ and RPROP- is that + does weight
backtracking, so it has to remember the previous update for each
weight and needs more memory. In the improved (iRPROP) variants,
+ and - hardly make any difference.
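To make the minus variant concrete, here's a sketch of one iRPROP- update, following the pseudocode in the Igel & Huesken paper linked above (the function name and default hyperparameters here are illustrative, though eta+/eta- = 1.2/0.5 are the values the paper recommends):

```python
import numpy as np

def irprop_minus_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                      step_min=1e-6, step_max=50.0):
    """One iRPROP- update; returns (weight_delta, new_prev_grad, new_step).

    Grow the per-weight step where the gradient sign is stable, shrink
    it where the sign flipped.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max),
                    np.where(sign_change < 0,
                             np.maximum(step * eta_minus, step_min),
                             step))
    # iRPROP-: on a sign change, zero the gradient, so this weight is
    # not moved this iteration -- no backtracking state needed
    grad = np.where(sign_change < 0, 0.0, grad)
    delta = -np.sign(grad) * step
    return delta, grad, step
```

RPROP+ would additionally store the previous delta per weight and undo it on a sign change, which is exactly the extra memory mentioned above.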

-- 
Lars Buitinck
Scientific programmer, ILPS
University of Amsterdam

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general