[ https://issues.apache.org/jira/browse/SPARK-15617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311218#comment-15311218 ]
Sean Owen commented on SPARK-15617:
-----------------------------------

Personally I support that. I think that the only metric that makes sense to expose in the multiclass case, which isn't predicated on one class, is accuracy. WDYT [~podongfeng]?

> Clarify that fMeasure in MulticlassMetrics and MulticlassClassificationEvaluator is "micro" f1_score
> -----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-15617
>                 URL: https://issues.apache.org/jira/browse/SPARK-15617
>             Project: Spark
>          Issue Type: Documentation
>          Components: Documentation, ML, MLlib
>            Reporter: Joseph K. Bradley
>            Priority: Minor
>
> See description in sklearn docs:
> [http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html]
> I believe we are calculating the "micro" average for {{val fMeasure: Double}}. We should clarify this in the docs.
> I'm not sure if "micro" is a common term, so we should check other libraries too.
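
For anyone reading the thread later, here is a minimal, self-contained sketch of why the micro-averaged F1 collapses to plain accuracy when each instance gets exactly one predicted label, which is the point of the comment above. It is plain Scala with no Spark dependency; the object name and the (prediction, label) data are made up purely for illustration and are not Spark APIs.

{code}
object MicroF1Sketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical (prediction, label) pairs for a 3-class problem.
    val predictionAndLabels: Seq[(Double, Double)] = Seq(
      (0.0, 0.0), (1.0, 1.0), (2.0, 2.0),
      (1.0, 0.0), (2.0, 1.0), (0.0, 0.0), (2.0, 2.0)
    )

    val classes = predictionAndLabels.flatMap { case (p, l) => Seq(p, l) }.distinct

    // Pool true positives, false positives, and false negatives across all classes.
    val tp = classes.map(c => predictionAndLabels.count { case (p, l) => p == c && l == c }).sum
    val fp = classes.map(c => predictionAndLabels.count { case (p, l) => p == c && l != c }).sum
    val fn = classes.map(c => predictionAndLabels.count { case (p, l) => p != c && l == c }).sum

    // Micro-averaging computes the ratios from the pooled counts.
    val microPrecision = tp.toDouble / (tp + fp)
    val microRecall    = tp.toDouble / (tp + fn)
    val microF1        = 2 * microPrecision * microRecall / (microPrecision + microRecall)

    // Plain accuracy: fraction of predictions that match the label.
    val accuracy =
      predictionAndLabels.count { case (p, l) => p == l }.toDouble / predictionAndLabels.size

    // With exactly one predicted label per instance, every misclassification counts
    // as one FP (for the predicted class) and one FN (for the true class), so
    // micro precision == micro recall == micro F1 == accuracy.
    println(s"micro precision = $microPrecision")
    println(s"micro recall    = $microRecall")
    println(s"micro F1        = $microF1")
    println(s"accuracy        = $accuracy")
  }
}
{code}

So exposing a single parameterless "micro" fMeasure alongside accuracy reports the same number twice, which is why documenting it clearly (or just pointing users at accuracy) seems like the right call.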