Hi Alex,

When I mentioned that to James, he seemed to imply that this approach
was useful only when optimizing many parameters, around 8 or more. You
would have to confirm this. I believe that he'll be around at the
sprints. As far as I am concerned, I don't optimize that many
parameters in the scikit.
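
That said, the basic loop is easy to sketch. Something like the
following (a rough sketch, not James's actual code: the parameter
ranges, the budget of 20 draws, and the single train/validation split
are all made up for illustration):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

rng = np.random.RandomState(0)
iris = load_iris()
X, y = iris.data, iris.target

# One shuffled 2/3 train, 1/3 validation split, to keep the
# sketch short; a real version would cross-validate.
idx = rng.permutation(len(y))
n_train = 2 * len(y) // 3
train, valid = idx[:n_train], idx[n_train:]

best_score, best_params = -np.inf, None
for _ in range(20):  # fixed budget of 20 random draws
    # Log-uniform draws: each draw tries a new value of every
    # parameter, whereas a grid of the same size revisits the
    # same few values per axis.
    params = dict(C=10.0 ** rng.uniform(-2, 3),
                  gamma=10.0 ** rng.uniform(-4, 1))
    clf = SVC(**params).fit(X[train], y[train])
    score = clf.score(X[valid], y[valid])
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)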

Gaël

On Mon, Nov 14, 2011 at 10:06:36PM -0500, Alexandre Passos wrote:
> Recent work by James Bergstra demonstrated that careful hyperparameter
> optimization, as well as careless random sampling, is often better
> than manual searching for many problems. You can see results in the
> following NIPS paper:
> http://people.fas.harvard.edu/~bergstra/files/pub/11_nips_hyperopt.pdf

> I wonder if there's interest in adding some simple versions of these
> techniques to the scikit's very useful GridSearchCV? There is code
> available at https://github.com/jaberg/hyperopt, but it seems to be
> research code and it uses Theano, so it's not directly applicable to
> the scikit.

