[ 
https://issues.apache.org/jira/browse/SPARK-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanbo Liang updated SPARK-13448:
--------------------------------
    Description: 
This JIRA keeps a list of MLlib behavior changes in Spark 2.0 so that we 
remember to add them to the migration guide.

* SPARK-13429: change convergenceTol in LogisticRegressionWithLBFGS from 1e-4 
to 1e-6.
* SPARK-7780: the LogisticRegressionWithLBFGS intercept will not be regularized. 
Meanwhile, if users train a binary classification model with an L1/L2 Updater, 
it calls the ML LogisticRegression implementation. Without regularization, 
training with or without feature scaling will return the same solution at the 
same convergence rate (because they run the same code path).
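The convergenceTol change matters because a tighter tolerance makes the optimizer iterate longer before stopping, which can change the returned coefficients. A minimal sketch of the idea in plain Python (this is NOT Spark's L-BFGS; the quadratic objective, step size, and stopping rule below are illustrative assumptions):

```python
# Illustrative sketch: effect of tightening a convergence tolerance on an
# iterative optimizer. Plain gradient descent on f(w) = (w - 3)^2, chosen
# only to show that tol=1e-6 runs more iterations than tol=1e-4.

def minimize(tol, w0=0.0, lr=0.1, max_iter=10_000):
    """Gradient descent that stops when the change in w falls below
    tol * max(1, |w|), mimicking a convergenceTol-style criterion."""
    w = w0
    for i in range(1, max_iter + 1):
        grad = 2.0 * (w - 3.0)          # d/dw of (w - 3)^2
        w_new = w - lr * grad
        if abs(w_new - w) < tol * max(1.0, abs(w)):
            return w_new, i
        w = w_new
    return w, max_iter

w_loose, it_loose = minimize(tol=1e-4)  # old default in the issue above
w_tight, it_tight = minimize(tol=1e-6)  # new Spark 2.0 default

print(f"tol=1e-4: w={w_loose:.8f} after {it_loose} iterations")
print(f"tol=1e-6: w={w_tight:.8f} after {it_tight} iterations")
```

Users who depended on the looser behavior can restore it by setting the tolerance back to 1e-4 explicitly on the optimizer.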

  was:
This JIRA keeps a list of MLlib behavior changes in Spark 2.0 so that we 
remember to add them to the migration guide.

* SPARK-13429: change convergenceTol in LogisticRegressionWithLBFGS from 1e-4 
to 1e-6.
* SPARK-7780: the LogisticRegressionWithLBFGS intercept will not be regularized. 
Meanwhile, without regularization, training with or without feature scaling 
will return the same solution at the same convergence rate (because they run 
the same code path).


> Document MLlib behavior changes in Spark 2.0
> --------------------------------------------
>
>                 Key: SPARK-13448
>                 URL: https://issues.apache.org/jira/browse/SPARK-13448
>             Project: Spark
>          Issue Type: Documentation
>          Components: ML, MLlib
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>
> This JIRA keeps a list of MLlib behavior changes in Spark 2.0 so that we 
> remember to add them to the migration guide.
> * SPARK-13429: change convergenceTol in LogisticRegressionWithLBFGS from 1e-4 
> to 1e-6.
> * SPARK-7780: the LogisticRegressionWithLBFGS intercept will not be 
> regularized. Meanwhile, if users train a binary classification model with an 
> L1/L2 Updater, it calls the ML LogisticRegression implementation. Without 
> regularization, training with or without feature scaling will return the 
> same solution at the same convergence rate (because they run the same code 
> path).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
