Aman Rawat created SPARK-21178:
----------------------------------

             Summary: Add support for label specific metrics in MulticlassClassificationEvaluator
                 Key: SPARK-21178
                 URL: https://issues.apache.org/jira/browse/SPARK-21178
             Project: Spark
          Issue Type: Improvement
          Components: ML
    Affects Versions: 2.1.1
            Reporter: Aman Rawat
MulticlassClassificationEvaluator is restricted to global metrics: f1, weightedPrecision, weightedRecall, and accuracy. However, we have a requirement to optimize learning on a metric for a specific label, for instance the true-positive rate of label 'B'.

For example, take a fraud-detection use case with labels 'good' and 'fraud', where predictions are passed to a manual verification team. We want to maximize the true-positive rate of the 'fraud' label, so that whenever the model predicts a data point as 'good', it is very likely to actually be 'good' and the manual team can ignore it. It is acceptable for the model to predict some 'good' data points as 'fraud', since those will be caught by the manual verification team.
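For clarity, here is a minimal pure-Python sketch of the metric being requested: the per-label true-positive rate (recall for one label), computed as TP / (TP + FN) over (prediction, actual) pairs. The function name and data layout are illustrative, not part of any Spark API.

```python
def true_positive_rate(pairs, label):
    """Per-label true-positive rate (recall): TP / (TP + FN) for `label`.

    `pairs` is an iterable of (prediction, actual) tuples.
    Returns 0.0 when the label never occurs as an actual value.
    """
    tp = fn = 0
    for pred, actual in pairs:
        if actual == label:
            if pred == label:
                tp += 1  # correctly flagged
            else:
                fn += 1  # missed instance of `label`
    return tp / (tp + fn) if (tp + fn) else 0.0

# Fraud example from the description: 2 of the 3 actual frauds are caught.
pairs = [
    ("fraud", "fraud"), ("fraud", "fraud"), ("good", "fraud"),
    ("good", "good"), ("fraud", "good"),
]
print(true_positive_rate(pairs, "fraud"))  # → 0.666...
```

An evaluator exposing this metric (parameterized by the target label) would let a cross-validation pipeline select the model that maximizes TPR('fraud') rather than a globally weighted average.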