[ https://issues.apache.org/jira/browse/SPARK-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294403#comment-14294403 ]
Chris T commented on SPARK-5436:
--------------------------------

I think, then, the only addition needed is to retain the mean loss on every iteration. It is already computed and emitted to the log on each boosting iteration: https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/tree/GradientBoostedTrees.scala#L179

The question then becomes where to store this error value: is it a property of the tree or of the model? For a DecisionTree, I can see how the concept of a per-tree error applies. For a random forest, where each tree is independent of the others, it also makes sense. But for a GBT model, the ensemble of N trees depends on the ensemble of N-1 trees, so if I extract the Nth tree and request its error value, I have to be aware that this is not the error of that tree alone. I suspect this is fine...anyone building a GBT model would likely understand this. It's just a little weird to store on one object a property that depends on the other objects in the ensemble.

> Validate GradientBoostedTrees during training
> ---------------------------------------------
>
>                 Key: SPARK-5436
>                 URL: https://issues.apache.org/jira/browse/SPARK-5436
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 1.3.0
>            Reporter: Joseph K. Bradley
>
> For Gradient Boosting, it would be valuable to compute test error on a
> separate validation set during training. That way, training could stop early
> based on the test error (or some other metric specified by the user).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
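The two ideas discussed above - keeping the per-iteration mean loss, and stopping early once a validation metric stops improving - can be sketched in plain Scala. This is only an illustration, not MLlib code: `trainOneMoreTree` is a hypothetical stand-in for growing the next tree, and the returned `errors(i)` is deliberately the error of the ensemble with i+1 trees, not of tree i alone, which is exactly the GBT subtlety raised in the comment.

```scala
// Hypothetical sketch of validation-based early stopping for boosting.
// `trainOneMoreTree` stands in for fitting the next tree and re-scoring
// the validation set; here it just shrinks the error geometrically.
object GbtValidationSketch {
  def trainOneMoreTree(currentError: Double): Double = currentError * 0.5

  /** Boost for up to `maxIterations`, stopping once the improvement in
    * validation error falls below `tolerance`.
    *
    * Returns the error history: errors(i) is the validation error of the
    * ensemble containing trees 0..i, NOT of tree i in isolation, because
    * each boosting stage depends on all previous stages.
    */
  def trainWithValidation(initialError: Double,
                          maxIterations: Int,
                          tolerance: Double): Vector[Double] = {
    val errors = scala.collection.mutable.ArrayBuffer.empty[Double]
    var current = initialError
    var improving = true
    var i = 0
    while (i < maxIterations && improving) {
      val next = trainOneMoreTree(current)
      errors += next                       // retain the loss for this iteration
      if (current - next < tolerance)      // improvement too small: stop early
        improving = false
      current = next
      i += 1
    }
    errors.toVector
  }
}
```

With `initialError = 1.0` and `tolerance = 0.1`, the toy loop stops after four trees (improvements 0.5, 0.25, 0.125, then 0.0625 < 0.1), and the stored history is monotonically decreasing - the shape of API the issue is asking for, with the history attached to the model rather than to any single tree.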