[ https://issues.apache.org/jira/browse/SPARK-15617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311752#comment-15311752 ]

zhengruifeng commented on SPARK-15617:
--------------------------------------

Agreed.
In {{MulticlassClassificationEvaluator}}, I will remove precision/recall but keep f1 (the weighted-averaged F1 measure, which is not equal to accuracy).
For {{MulticlassMetrics}}, I will just update the user guide.
Is this OK?
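
For concreteness, a minimal sketch (toy data; assumes Spark 2.0+, where {{accuracy}} is available, and a live {{SparkContext}} {{sc}}) using the RDD-based {{MulticlassMetrics}} to show that the pooled "micro" score coincides with accuracy while the weighted F1 generally does not:

{code:scala}
import org.apache.spark.mllib.evaluation.MulticlassMetrics

// Toy (prediction, label) pairs with a skewed class distribution,
// chosen so the micro and weighted averages visibly differ.
val predictionAndLabels = sc.parallelize(Seq(
  (0.0, 0.0), (0.0, 0.0), (0.0, 1.0),   // class 0 is frequent
  (1.0, 1.0), (2.0, 1.0), (2.0, 2.0)    // classes 1 and 2 are rare
))

val metrics = new MulticlassMetrics(predictionAndLabels)

// Pooling TP/FP/FN over all classes ("micro" averaging) collapses to accuracy;
// this is the same value the parameterless fMeasure discussed in this ticket reports.
println(metrics.accuracy)          // 4 correct out of 6 ≈ 0.667

// The weighted F1 averages the per-class F1 scores by true-class frequency,
// so it is generally NOT equal to accuracy.
println(metrics.weightedFMeasure)  // ≈ 0.628 on this toy data
println(metrics.fMeasure(0.0))     // per-class F1 for label 0.0 = 0.8
{code}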

> Clarify that fMeasure in MulticlassMetrics and 
> MulticlassClassificationEvaluator is "micro" f1_score
> ----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-15617
>                 URL: https://issues.apache.org/jira/browse/SPARK-15617
>             Project: Spark
>          Issue Type: Documentation
>          Components: Documentation, ML, MLlib
>            Reporter: Joseph K. Bradley
>            Priority: Minor
>
> See description in sklearn docs: 
> [http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html]
> I believe we are calculating the "micro" average for {{val fMeasure: Double}}.
> We should clarify this in the docs.
> I'm not sure if "micro" is a common term, so we should check other libraries too.
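
For reference, the reason the "micro" average deserves the callout: in single-label multiclass classification each misclassified example contributes exactly one false positive (to the predicted class) and one false negative (to the true class), so the pooled counts over all classes give precision and recall the same denominator, and the micro-averaged precision, recall, and F1 all collapse to plain accuracy. A sketch of the algebra (notation introduced here for illustration):

\[
P_{micro} = \frac{\sum_c TP_c}{\sum_c (TP_c + FP_c)}
          = \frac{\#\text{correct}}{N}
          = \frac{\sum_c TP_c}{\sum_c (TP_c + FN_c)}
          = R_{micro} = \text{accuracy},
\qquad
F1_{micro} = \frac{2\,P_{micro}\,R_{micro}}{P_{micro} + R_{micro}} = \text{accuracy}.
\]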


