[ https://issues.apache.org/jira/browse/SPARK-2505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiangrui Meng updated SPARK-2505:
---------------------------------
    Fix Version/s:     (was: 1.2.0)

> Weighted Regularizer
> --------------------
>
>                 Key: SPARK-2505
>                 URL: https://issues.apache.org/jira/browse/SPARK-2505
>             Project: Spark
>          Issue Type: New Feature
>          Components: MLlib
>            Reporter: DB Tsai
>
> The current implementation of regularization in the linear models uses 
> `Updater`, and this design has a couple of issues, described below.
> 1) It penalizes all the weights, including the intercept. In a typical 
> machine learning training process, the intercept is not penalized.
> 2) The `Updater` also contains the adaptive step size logic for gradient 
> descent. We would like to clean this up by moving the regularization logic 
> out of the updater into a regularizer, so that the LBFGS optimizer no longer 
> needs the trick for obtaining the loss and gradient of the objective function.
> In this work, a weighted regularizer will be implemented, and users will be 
> able to exclude the intercept or any other weight from regularization by 
> setting that term's penalty weight to zero. Since the regularizer will return 
> a tuple of loss and gradient, the adaptive step size logic and the L1 
> soft-thresholding currently in `Updater` will be moved into the SGD optimizer.
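
The description above proposes that the regularizer return both the loss and the 
gradient, and that a zero penalty weight exclude a term (such as the intercept) 
from regularization. The following is a minimal, self-contained sketch of that 
idea, illustrative only and not Spark's actual API; the object and method names 
are hypothetical. The second method sketches the L1 soft-thresholding step that, 
per the description, would move into the SGD optimizer.

    object WeightedRegularizerSketch {

      /** Weighted L2 regularizer: returns (loss, gradient) in one pass, where
       *  loss = 0.5 * sum_i penalty(i) * w(i)^2 and gradient(i) = penalty(i) * w(i). */
      def l2LossAndGradient(weights: Array[Double],
                            penalty: Array[Double]): (Double, Array[Double]) = {
        require(weights.length == penalty.length,
          "weights and penalty must have the same dimension")
        val gradient = new Array[Double](weights.length)
        var loss = 0.0
        var i = 0
        while (i < weights.length) {
          // A zero penalty weight (e.g. for the intercept) excludes that term.
          gradient(i) = penalty(i) * weights(i)
          loss += 0.5 * penalty(i) * weights(i) * weights(i)
          i += 1
        }
        (loss, gradient)
      }

      /** L1 proximal (soft-thresholding) step that would live in the SGD optimizer:
       *  w(i) <- sign(w(i)) * max(|w(i)| - penalty(i) * stepSize, 0). */
      def softThreshold(weights: Array[Double],
                        penalty: Array[Double],
                        stepSize: Double): Array[Double] =
        weights.zip(penalty).map { case (w, p) =>
          math.signum(w) * math.max(math.abs(w) - p * stepSize, 0.0)
        }
    }

With this split, the optimizer only combines the data loss and gradient with the 
regularizer's (loss, gradient) tuple, and any step-size or thresholding logic 
stays in the optimizer rather than in `Updater`.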



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
