[ https://issues.apache.org/jira/browse/FLINK-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15545063#comment-15545063 ]
ASF GitHub Bot commented on FLINK-2157:
---------------------------------------

Github user gaborhermann commented on the issue:
https://github.com/apache/flink/pull/1849

Hi all,

What is the status of this PR? It would be relevant for us, because we would like to use the evaluation framework proposed here. See [FLINK-4713](https://issues.apache.org/jira/browse/FLINK-4713) for details. Can I do anything to help resolve the issues you've been discussing here?

> Create evaluation framework for ML library
> ------------------------------------------
>
>                 Key: FLINK-2157
>                 URL: https://issues.apache.org/jira/browse/FLINK-2157
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Theodore Vasiloudis
>              Labels: ML
>             Fix For: 1.0.0
>
> Currently, FlinkML lacks means to evaluate the performance of trained models.
> It would be great to add some {{Evaluators}} which can calculate a score
> based on the information about true and predicted labels. This could also be
> used for cross validation to choose the right hyperparameters.
> Possible scores could be the F score [1], the zero-one loss, etc.
>
> Resources
> [1] http://en.wikipedia.org/wiki/F1_score

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
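For context, the scores mentioned in the description can be computed directly from pairs of true and predicted labels. The sketch below is illustrative only and is not the FlinkML {{Evaluator}} API; the class and method names are assumptions:

```java
// Illustrative sketch (NOT the FlinkML API): computing the zero-one loss
// and the binary F1 score from true and predicted label arrays.
public class EvaluationSketch {

    // Zero-one loss: the fraction of labels that were predicted incorrectly.
    public static double zeroOneLoss(int[] truth, int[] pred) {
        int wrong = 0;
        for (int i = 0; i < truth.length; i++) {
            if (truth[i] != pred[i]) wrong++;
        }
        return (double) wrong / truth.length;
    }

    // F1 score for a binary problem, treating label 1 as the positive class:
    // the harmonic mean of precision and recall.
    public static double f1Score(int[] truth, int[] pred) {
        int tp = 0, fp = 0, fn = 0;
        for (int i = 0; i < truth.length; i++) {
            if (truth[i] == 1 && pred[i] == 1) tp++;
            else if (truth[i] == 0 && pred[i] == 1) fp++;
            else if (truth[i] == 1 && pred[i] == 0) fn++;
        }
        double precision = (double) tp / (tp + fp);
        double recall = (double) tp / (tp + fn);
        return 2 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        int[] truth = {1, 0, 1, 0, 1};
        int[] pred  = {1, 1, 0, 0, 1};
        // 2 of 5 predictions are wrong; tp=2, fp=1, fn=1.
        System.out.printf("zero-one loss: %.2f%n", zeroOneLoss(truth, pred)); // 0.40
        System.out.printf("F1 score: %.2f%n", f1Score(truth, pred));          // 0.67
    }
}
```

Such scores would also plug naturally into cross validation, as the description suggests: each fold yields (truth, prediction) pairs, and the hyperparameters with the best average score win.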