[ https://issues.apache.org/jira/browse/SPARK-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14514905#comment-14514905 ]
Rakesh Chalasani commented on SPARK-5256:
-----------------------------------------

Just a thought, and please correct me if I am wrong: as far as I know, the optimizer usually takes the number of iterations as the stopping criterion. For example, in classification or regression tasks there is no way to do early stopping using a validation set, or to detect that the rate of change in the loss function has flattened. To facilitate this, how about having a set of functions that compute a "tolerance" at the end of each iteration, or after a set number of iterations, to enable early stopping? Another alternative is to not have this at all and leave it to the ml-pipeline CrossValidator.

> Improving MLlib optimization APIs
> ---------------------------------
>
>                 Key: SPARK-5256
>                 URL: https://issues.apache.org/jira/browse/SPARK-5256
>             Project: Spark
>          Issue Type: Umbrella
>          Components: MLlib
>    Affects Versions: 1.2.0
>            Reporter: Joseph K. Bradley
>
> *Goal*: Improve APIs for optimization
>
> *Motivation*: There have been several disjoint mentions of improving the optimization APIs to make them more pluggable, extensible, etc. This JIRA is a place to discuss what API changes are necessary for the long term, and to provide links to other relevant JIRAs.
>
> Eventually, I hope this leads to a design doc outlining:
> * current issues
> * requirements such as supporting many types of objective functions, optimization algorithms, and parameters to those algorithms
> * ideal API
> * breakdown of smaller JIRAs needed to achieve that API
>
> I will soon create an initial design doc, and I will try to watch this JIRA and include ideas from JIRA comments.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
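To make the tolerance suggestion in the comment concrete, here is a minimal sketch of a tolerance-based stopping rule layered on a fixed-iteration optimizer. It is written in plain Python rather than against Spark's API, and the function name and signature are hypothetical, chosen only for illustration:

```python
# Hypothetical sketch (not Spark's actual API): a gradient-descent loop
# that can stop early once the change in the loss between iterations
# falls below a tolerance, instead of always running max_iter iterations.
def gradient_descent(loss, grad, x0, lr=0.1, max_iter=1000, tol=1e-6):
    x = x0
    prev_loss = loss(x)
    for i in range(1, max_iter + 1):
        x = x - lr * grad(x)
        cur_loss = loss(x)
        if abs(prev_loss - cur_loss) < tol:
            return x, i  # converged early: the loss has flattened
        prev_loss = cur_loss
    return x, max_iter  # hit the iteration cap without converging

# Minimizing f(x) = (x - 3)^2 stops long before the 1000-iteration cap:
x_min, iters = gradient_descent(lambda x: (x - 3.0) ** 2,
                                lambda x: 2.0 * (x - 3.0),
                                x0=0.0)
```

The same check could equally be driven by a loss measured on a held-out validation set, which is the early-stopping variant the comment mentions; the alternative discussed is to skip this inside the optimizer and rely on the ml-pipeline CrossValidator instead.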