The paper definitely looks interesting, and the authors are certainly
giants in the field.
But it is actually not widely cited (139 citations since 2005), and I've
never seen it used.
I don't know why that is, and looking at the citations there doesn't
seem to be a lot of follow-up work.
I think this would need more validation before getting into sklearn.
Sebastian: This paper is distribution-independent and doesn't need
bootstrapping, so it does indeed look quite nice.
On 2/6/19 1:19 PM, Sebastian Raschka wrote:
Hi Stuart,
I don't think so, because there is no standard way to compute CIs. That goes
for all performance measures (accuracy, precision, recall, etc.). Some people
use simple binomial approximation intervals, others prefer bootstrapping, and
so on. It also depends on the data you have: on large datasets, binomial
approximation intervals may be sufficient and bootstrapping too expensive.
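For what it's worth, the bootstrap approach mentioned above can be sketched in a few lines on top of scikit-learn's `roc_auc_score`. This is just an illustrative sketch, not a scikit-learn API; the function name `bootstrap_auc_ci` and its parameters are made up for the example:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUROC.

    Resamples (label, score) pairs with replacement, collects the AUC
    of each resample, and returns the (alpha/2, 1 - alpha/2) percentile
    range of those bootstrap AUCs.
    """
    rng = np.random.RandomState(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    n = len(y_true)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.randint(0, n, n)
        # AUC is undefined if a resample contains only one class
        if len(np.unique(y_true[idx])) < 2:
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

The same resampling loop works for average precision by swapping in `average_precision_score`, which is part of why a single built-in CI function is awkward to standardize.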
Thanks for sharing that paper btw, will have a look.
Best,
Sebastian
On Feb 6, 2019, at 11:28 AM, Stuart Reynolds <stu...@stuartreynolds.net> wrote:
https://papers.nips.cc/paper/2645-confidence-intervals-for-the-area-under-the-roc-curve.pdf
Does scikit (or other Python libraries) provide functions to measure the
confidence interval of AUROC scores? Same question also for mean average
precision.
It seems like this should be a standard results reporting practice if a method
is available.
- Stuart
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn