Hi,

I understand that ROC-AUC is defined for binary classification problems.
But is there an *equivalent metric* for multi-label problems? i.e.
something that does not pick a specific threshold (precision/recall regime)
for each label, but instead returns a summarizing measure of the
overall performance (i.e. some statistic that summarizes how the
distribution of scores across all labels correlates with the ground truth).
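For reference, my current understanding is that scikit-learn's roc_auc_score
accepts a multi-label indicator matrix and can average the per-label AUCs,
which is threshold-free. A small sketch with made-up toy data (the arrays
below are just for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy multi-label problem: 3 samples, 3 labels (made-up data).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
# Continuous scores (e.g. from predict_proba), no threshold applied.
y_score = np.array([[0.9, 0.2, 0.8],
                    [0.1, 0.7, 0.3],
                    [0.8, 0.6, 0.4]])

# "macro": compute AUC per label, then average across labels.
macro_auc = roc_auc_score(y_true, y_score, average="macro")
# "micro": pool all (sample, label) pairs into a single ROC curve.
micro_auc = roc_auc_score(y_true, y_score, average="micro")
```

Is a macro/micro average like this considered a reasonable "summary"
statistic for the multi-label case, or is there something better?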

Also, say that I want to run GridSearchCV to optimize my own metric
in a multi-label problem (e.g. the average AUC score across all labels).
Does anyone have any pointers to specific steps on how to do this?
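For context, here is roughly what I have been trying. My assumption is that
GridSearchCV accepts any callable with signature (estimator, X, y) as its
scoring argument; the dataset, model, and parameter grid below are all
made up for illustration:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsRestClassifier

# Synthetic multi-label data (4 labels), purely for illustration.
X, Y = make_multilabel_classification(n_samples=200, n_classes=4,
                                      random_state=0)

def mean_label_auc(estimator, X, y):
    """Custom scorer: macro average of the per-label AUCs."""
    scores = estimator.predict_proba(X)
    return roc_auc_score(y, scores, average="macro")

# OneVsRestClassifier fits one binary model per label; its sub-estimator
# parameters are reached via the "estimator__" prefix.
grid = GridSearchCV(OneVsRestClassifier(LogisticRegression(max_iter=1000)),
                    param_grid={"estimator__C": [0.1, 1.0, 10.0]},
                    scoring=mean_label_auc, cv=3)
grid.fit(X, Y)
```

Is this the intended way to do it, or should I be wrapping the metric
with make_scorer instead?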

Thanks,

Josh
_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general