[ https://issues.apache.org/jira/browse/SPARK-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323323#comment-14323323 ]
Joseph K. Bradley commented on SPARK-5436:
------------------------------------------

Yep, that sounds like what I had in mind:

{code}
def evaluateEachIteration(
    data: RDD[LabeledPoint]
    // plus an evaluator parameter, or perhaps reuse the training metric
): Array[Double]
{code}

where it essentially calls predict() once but keeps the intermediate results after each boosting stage, so that it runs in the same big-O time as predict().

> Validate GradientBoostedTrees during training
> ---------------------------------------------
>
>                 Key: SPARK-5436
>                 URL: https://issues.apache.org/jira/browse/SPARK-5436
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 1.3.0
>            Reporter: Joseph K. Bradley
>
> For Gradient Boosting, it would be valuable to compute test error on a
> separate validation set during training. That way, training could stop early
> based on the test error (or some other metric specified by the user).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
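To make the big-O argument concrete, here is a minimal sketch of the idea, in plain Python rather than Spark Scala, and with hypothetical names ({{evaluate_each_iteration}}, the toy "trees", {{loss}}) that are not part of any Spark API: keep a running weighted sum of tree outputs per example, and compute the metric after each boosting stage, so the whole thing costs one pass over the trees, the same as a single predict().

```python
def evaluate_each_iteration(trees, weights, data, loss):
    """Return the mean loss after each boosting stage.

    trees:   list of callables, one per boosting stage
    weights: per-stage weights
    data:    list of (features, label) pairs
    loss:    loss(prediction, label) -> float

    Cost is O(len(trees) * len(data)), the same big-O as a full
    predict(), because the partial ensemble prediction is kept and
    updated rather than recomputed from scratch at every stage.
    """
    partial = [0.0] * len(data)  # running prediction of the first k trees
    errors = []
    for tree, w in zip(trees, weights):
        for i, (x, _) in enumerate(data):
            partial[i] += w * tree(x)  # add stage k's contribution only
        errors.append(
            sum(loss(p, y) for p, (_, y) in zip(partial, data)) / len(data)
        )
    return errors

# Toy example: two constant "stumps" regressing toward the label 1.5.
trees = [lambda x: 1.0, lambda x: 0.5]
weights = [1.0, 1.0]
data = [(0.0, 1.5), (0.0, 1.5)]
squared_error = lambda pred, label: (pred - label) ** 2
print(evaluate_each_iteration(trees, weights, data, squared_error))
# -> [0.25, 0.0]: the error after each stage, from a single pass
```

The same per-iteration error curve is what early stopping on a validation set would watch: stop once the curve stops improving.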