2013/7/25 Josh Wasserstein <[email protected]>:
> Thank you Olivier. I went through that paper and I agree, it looks like
> implementing micro-AUC or macro-AUC should not be that hard.  I will try to
> implement it within the next week. I have never contributed to a project
> on GitHub, so I am not sure to what extent my code would meet the standards,
> but I am happy to try.
>
> In the meantime, is there anything similar to an AUC metric that scikit-learn
> supports when working with GridSearchCV in a multi-label setting? I am
> looking for some compromise between precision and recall that indirectly
> optimizes the AUC score of each label.

You can try the F1 score: it balances precision and recall and is a
reasonable metric for imbalanced multiclass and multi-label datasets.

It supports both micro and macro averaging.
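A minimal sketch of what this could look like (the dataset, estimator, and
parameter grid below are purely illustrative, and the import paths assume a
recent scikit-learn): a micro-averaged F1 scorer built with make_scorer and
plugged into GridSearchCV for a multi-label problem.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsRestClassifier

# Illustrative multi-label dataset: Y is a binary indicator matrix.
X, Y = make_multilabel_classification(n_samples=200, n_classes=5,
                                      random_state=0)

# Micro averaging pools all per-label decisions before computing F1;
# use average="macro" to average the per-label F1 scores instead.
micro_f1 = make_scorer(f1_score, average="micro")

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
grid = GridSearchCV(clf,
                    param_grid={"estimator__C": [0.1, 1.0, 10.0]},
                    scoring=micro_f1, cv=3)
grid.fit(X, Y)
print(grid.best_params_, grid.best_score_)
```

The model is selected on the averaged F1 across all labels, which serves as
the precision/recall compromise asked about above.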


--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
