Thanks for all the feedback.
I pushed an update this morning which addresses some of the easy fixes
that were brought up, as well as adding the final two exercises. Thanks!
http://jakevdp.github.com/tutorial/astronomy/exercises.html
Jake
2012/2/27 Jacob VanderPlas :
> If you have a few minutes to look it over, I'd really appreciate some
> feedback as I add the finishing touches this week. Also, as there's no
> way I'll get through all of it in a two hour tutorial, I'd like feedback
> on which parts you think I should focus on!
I am just repeating meta-optimize, and this is a bad, bad thing, but I
found that the following paper was a good read:
http://www.jmlr.org/papers/volume11/yuan10c/yuan10c.pdf
In particular, it points out what might be interesting to implement in
the scikit if we want to implement our own l1-penalized solvers.
thanks Olivier, it fails on both Linux and Mac OS X
here the output of sudo gdb python from a Linux machine
http://pastebin.com/4sU5RCtg
and here of (gdb) bt
http://pastebin.com/bt8ULLwn
I am sorry, I have no clue on how to debug this properly...
Best,
Matthias
thanks guys. That makes sense!
Best,
Matthias
On 2/27/12 10:52 AM, Gael Varoquaux wrote:
> On Mon, Feb 27, 2012 at 10:49:36AM +0100, Olivier Grisel wrote:
>> Why alpha and rho to 0? Usual rho is good around 0.8 and alpha should
>> be adjusted by grid search.
> We are both bots tuned to respond the same way, to the same situations.
On Mon, Feb 27, 2012 at 6:15 PM, Olivier Grisel wrote:
> Cool, I did not know that the binary case was handled as well.
Actually most of the logic is in LabelBinarizer.
>>> from sklearn.preprocessing import LabelBinarizer
>>> lb = LabelBinarizer()
>>> lb.fit_transform([1, 2, 2, 2])
array([[ 0.],
       [ 1.],
       [ 1.],
       [ 1.]])
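For contrast, with three or more classes LabelBinarizer switches to a one-column-per-class encoding. A quick sketch (note that current scikit-learn releases return an integer array rather than the floats shown above):

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
# With 3+ distinct labels, each label becomes a one-hot row,
# one column per class in lb.classes_.
Y = lb.fit_transform([1, 2, 3, 2])
print(Y)
# [[1 0 0]
#  [0 1 0]
#  [0 0 1]
#  [0 1 0]]
```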
On Mon, Feb 27, 2012 at 10:49:36AM +0100, Olivier Grisel wrote:
> Why alpha and rho to 0? Usual rho is good around 0.8 and alpha should
> be adjusted by grid search.
We are both bots tuned to respond the same way, to the same situations,
as proven also on
http://metaoptimize.com/qa/questions/933
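Olivier's advice above (keep rho around 0.8, adjust alpha by grid search) can be sketched with ElasticNetCV, which cross-validates over a path of alpha values; the synthetic data here is a placeholder, and in later scikit-learn releases the rho parameter is called l1_ratio:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.RandomState(0)
X = rng.randn(80, 10)
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.randn(80)

# l1_ratio is the parameter formerly called rho; alpha is chosen by
# cross-validation over an automatically generated path of candidates.
model = ElasticNetCV(l1_ratio=0.8, n_alphas=50, cv=5)
model.fit(X, y)
print(model.alpha_)  # the cross-validated choice of alpha
```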
On Mon, Feb 27, 2012 at 10:46:40AM +0100, Matthias Ekman wrote:
> clf = OneVsRestClassifier(ElasticNet(alpha=0., rho=0.))
> y_pred = clf.fit(X,y).predict(X)
> print 'acc enet:',zero_one_score(y,y_pred)*100
alpha=0: you are not regularizing at all!
In general, it doesn't make much sense to use a least-squares model for a classification problem.
thanks for all the helpful remarks! That's exactly what I wanted to
know. However I am a bit surprised by the low performance of Elastic Net
in comparison to logit (both using L1 regularization and test/training
on the full dataset). Am I overseeing something obvious here?
acc enet: 69.0
acc lo
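The comparison above scores on the training data; a hedged sketch of a cross-validated version, using current scikit-learn names (zero_one_score from that era is now accuracy_score, and cross_val_score reports held-out accuracy directly; the dataset here is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# L1-penalized logistic regression, scored on held-out folds
# instead of on the training set.
logit = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
scores = cross_val_score(logit, X, y, cv=5)
print(scores.mean())
```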
On Mon, Feb 27, 2012 at 10:06:39AM +0100, Matthias Ekman wrote:
> I guess my question was more on how to force the fit method to learn a
> binary output. Using my code below, it assumes a regression problem. How
> do I use Elastic Net for classification in practice?
Subclass the ElasticNet class
On Mon, Feb 27, 2012 at 6:06 PM, Matthias Ekman wrote:
> do I use Elastic Net for classification in practice?
from sklearn.multiclass import OneVsRestClassifier
clf = OneVsRestClassifier(ElasticNet(alpha=0.1, rho=0.7))
will work even for binary classification.
Mathieu
---
You can derive the class and override the predict method:
import numpy as np
from sklearn.linear_model import ElasticNet

class ElasticNetClassifier(ElasticNet):
    def predict(self, X):
        # Threshold the continuous regression output at zero.
        return (super(ElasticNetClassifier, self).predict(X) > 0).astype(np.int)
Disclaimer: untested code.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogri
I would use elastic-net with y = -1 or 1 such that np.mean(y) == 0 and then when
you predict threshold the predictions at 0.
Alex
On Mon, Feb 27, 2012 at 10:06 AM, Matthias Ekman wrote:
> thanks Alexandre and Olivier. Indeed I don't expect to get better
> performance in comparison to logistic regression with L1 regularization.
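Alex's recipe (labels in {-1, +1} so the mean is near zero, then threshold the predictions at 0) can be sketched end to end; the synthetic data and the alpha/rho values are placeholders, and rho is called l1_ratio in later scikit-learn releases:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
# Labels in {-1, +1} so that np.mean(y) is roughly 0.
y = np.where(X[:, 0] + 0.1 * rng.randn(100) > 0, 1, -1)

enet = ElasticNet(alpha=0.1, l1_ratio=0.7)
enet.fit(X, y)
# Threshold the continuous predictions at 0 to recover class labels.
y_pred = np.where(enet.predict(X) > 0, 1, -1)
print((y_pred == y).mean())
```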
thanks Alexandre and Olivier. Indeed I don't expect to get better
performance in comparison to logistic regression with L1 regularization.
I guess my question was more on how to force the fit method to learn a
binary output. Using my code below, it assumes a regression problem. How
do I use Elastic Net for classification in practice?