On 07/09/2013 12:44 AM, Josh Wasserstein wrote:
> Peter - Yes. That also puzzles me. So odd.
>
> Thanks Olivier - I am using auc_score, not roc_curve. My scikit-learn
> installation does not complain about it. I will try to get the master
> git installed.
>
Well, it doesn't complain, but it doesn't do what you expect either.
Peter - Yes. That also puzzles me. So odd.
Thanks Olivier - I am using auc_score, not roc_curve. My scikit-learn
installation does not complain about it. I will try to get the master git
installed.
Josh
On Mon, Jul 8, 2013 at 4:48 PM, Peter Prettenhofer <
peter.prettenho...@gmail.com> wrote:
>
> What is actually quite interesting is that the "worst" model has an AUC of
> 0.29, which is actually an AUC of 0.71 if you invert the predictions.
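In other words, negating the decision scores turns an AUC of a into 1 - a. A minimal sketch of that symmetry, using random placeholder labels and scores (the metric is called roc_auc_score in recent releases and auc_score in 0.13):

import numpy as np
from sklearn.metrics import roc_auc_score  # named auc_score in sklearn 0.13

rng = np.random.RandomState(0)
y = rng.randint(0, 2, size=200)       # placeholder binary labels
scores = rng.rand(200)                # placeholder decision scores from a model

auc = roc_auc_score(y, scores)
flipped = roc_auc_score(y, -scores)   # "invert the predictions"
print(auc, flipped)                   # the two values sum to 1.0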
2013/7/8 Olivier Grisel
> Alternatively you can use `score_func=f1_score` in 0.13 to look for
> models that trade off precision and recall on unbalanced datasets.
Alternatively you can use `score_func=f1_score` in 0.13 to look for
models that trade off precision and recall on unbalanced datasets.
--
Olivier
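A minimal sketch of that suggestion with the 0.13-era API; the classifier, grid values, and data below are placeholders, and score_func was later replaced by the scoring parameter (GridSearchCV moved to sklearn.model_selection in 0.18):

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in 0.18+

rng = np.random.RandomState(0)
X = rng.randn(200, 10)                 # placeholder feature matrix
y = rng.randint(0, 2, size=200)        # placeholder binary labels

param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [0.001, 0.01, 0.1]}
grid = GridSearchCV(SVC(), param_grid, score_func=f1_score, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)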
You are using sklearn 0.13, right? I am pretty sure that it was not
possible to grid search against ROC AUC back then. In master it is possible
to grid search with ROC AUC using:
GridSearchCV(clf, params_grid, scoring='roc_auc').fit(X, y)
--
Olivier
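Spelled out a little more, the master-branch version Olivier refers to, with scoring='roc_auc'; the SVC, grid values, and data are placeholders:

import numpy as np
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in 0.18+

rng = np.random.RandomState(0)
X = rng.randn(200, 10)                 # placeholder feature matrix
y = rng.randint(0, 2, size=200)        # placeholder binary labels

params_grid = {'C': [0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1]}
clf = SVC()                            # the AUC scorer uses decision_function
grid = GridSearchCV(clf, params_grid, scoring='roc_auc', cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)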
I am getting extremely poor SVM performance on a simple binary learning
problem. I am doing an exhaustive grid search, but most of the AUC scores I
obtain are below 0.5 (basically the performance of a random classifier).
Here is my feature matrix X:
https://gist.github.com/ribonoous/5952080
and he