[ https://issues.apache.org/jira/browse/SPARK-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15370871#comment-15370871 ]

Yanbo Liang edited comment on SPARK-3181 at 7/11/16 2:35 PM:
-------------------------------------------------------------

[~dbtsai] [~MechCoder] There is one problem that needs to be discussed: the 
scaling factor {{\sigma}} has to be estimated as well, and in Eq.(6) {{\sigma}} 
must be >= 0. So we have two alternatives:
* #1, Use {{\sigma}} directly and L-BFGS-B as the solver. The breeze library 
recently added L-BFGS-B support, so we would first need to bump the breeze 
dependency to 0.12. This approach can only support L2 regularization, as 
scikit-learn does. We cannot support L1 or elastic-net regularization now or in 
the near future, since there is no plan to add OWLQN-B to breeze, as 
https://github.com/scalanlp/breeze/issues/455 notes.
* #2, Replace {{\sigma}} with {{\exp(\alpha)}}; then we can support elastic-net 
regularization as we do in {{LinearRegression}}. However, there is no proof 
that the huber loss function remains jointly convex in {{\alpha}}, so we may 
fall into a local optimum.
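To make the trade-off concrete, here is a minimal sketch of alternative #1: jointly minimizing a Huber objective over {{(\beta, \sigma)}} with the {{\sigma >= 0}} constraint expressed as a box bound. It uses scipy's L-BFGS-B purely as a stand-in for breeze's, and the concomitant-scale objective, data, and names below are illustrative assumptions rather than the exact Eq.(6):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data with one gross outlier (all names/values here are illustrative).
rng = np.random.RandomState(0)
X = rng.randn(50, 2)
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.randn(50)
y[0] += 20.0  # outlier in the response

M = 1.35  # Huber threshold (assumed value)

def objective(params):
    # params = [beta_0, beta_1, sigma]; sigma stays positive via the box bound.
    beta, sigma = params[:-1], params[-1]
    r = (y - X @ beta) / sigma
    rho = np.where(np.abs(r) <= M, 0.5 * r ** 2, M * np.abs(r) - 0.5 * M ** 2)
    # Concomitant-scale form (an assumption here): sum_i sigma + rho(r_i) * sigma.
    return np.sum(sigma + rho * sigma)

x0 = np.r_[np.zeros(2), 1.0]
bounds = [(None, None)] * 2 + [(1e-6, None)]  # sigma >= 0 enforced as a box bound
res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
beta_hat, sigma_hat = res.x[:-1], res.x[-1]
```

With {{\sigma = \exp(\alpha)}} instead (alternative #2), the bound disappears and an OWLQN-style solver could handle L1, at the cost of the convexity guarantee discussed above.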

I prefer option #1, which gives the exact solution, and I look forward to 
hearing your thoughts. If option #1 is OK with you, I will first open a ticket 
to test and bump the breeze version to 0.12.



> Add Robust Regression Algorithm with Huber Estimator
> ----------------------------------------------------
>
>                 Key: SPARK-3181
>                 URL: https://issues.apache.org/jira/browse/SPARK-3181
>             Project: Spark
>          Issue Type: New Feature
>          Components: ML, MLlib
>            Reporter: Fan Jiang
>            Assignee: Yanbo Liang
>              Labels: features
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> Linear least squares estimation assumes the errors are normally distributed 
> and can behave badly when the errors are heavy-tailed. In practice we 
> encounter various types of data, so we need to include robust regression, 
> which employs a fitting criterion that is not as vulnerable to outliers as 
> least squares.
> In 1973, Huber introduced M-estimation ("maximum likelihood type" estimation) 
> for regression. The method is resistant to outliers in the response variable 
> and has been widely used.
> The new feature for MLlib will contain 3 new files
> /main/scala/org/apache/spark/mllib/regression/RobustRegression.scala
> /test/scala/org/apache/spark/mllib/regression/RobustRegressionSuite.scala
> /main/scala/org/apache/spark/examples/mllib/HuberRobustRegression.scala
> and one new class HuberRobustGradient in 
> /main/scala/org/apache/spark/mllib/optimization/Gradient.scala
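As a concrete illustration of why the Huber criterion described above is less vulnerable than least squares (a sketch only; the function name and the 1.35 threshold are illustrative assumptions):

```python
import numpy as np

def huber_rho(r, m=1.35):
    """Huber loss: quadratic for small residuals, linear for large ones."""
    a = np.abs(r)
    return np.where(a <= m, 0.5 * a ** 2, m * a - 0.5 * m ** 2)

# A residual of 10 costs 50.0 under squared error, but only
# m*10 - 0.5*m^2 = 12.58875 under the Huber criterion, so a
# single outlier pulls the fitted line far less.
print(0.5 * 10.0 ** 2, float(huber_rho(10.0)))
```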



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
