[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2017-08-22 Thread yanboliang
Github user yanboliang closed the pull request at:

https://github.com/apache/spark/pull/14326


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2017-08-07 Thread WeichenXu123
Github user WeichenXu123 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r131763824
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,497 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.regression
+
+import scala.collection.mutable
+
+import breeze.linalg.{norm => brznorm, DenseVector => BDV}
+import breeze.optimize.{LBFGS => BreezeLBFGS, _}
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.Since
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.internal.Logging
+import org.apache.spark.ml.PredictorParams
+import org.apache.spark.ml.feature.Instance
+import org.apache.spark.ml.linalg.{Vector, Vectors}
+import org.apache.spark.ml.linalg.BLAS._
+import org.apache.spark.ml.param.{DoubleParam, ParamMap, ParamValidators}
+import org.apache.spark.ml.param.shared._
+import org.apache.spark.ml.util._
+import org.apache.spark.mllib.linalg.VectorImplicits._
+import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
+import org.apache.spark.rdd.RDD
+import org.apache.spark.sql.Dataset
+import org.apache.spark.sql.functions._
+import org.apache.spark.storage.StorageLevel
+
+/**
+ * Params for robust regression.
+ */
+private[regression] trait RobustRegressionParams extends PredictorParams with HasRegParam
+  with HasMaxIter with HasTol with HasFitIntercept with HasStandardization with HasWeightCol {
+
+  /**
+   * The shape parameter to control the amount of robustness. Must be > 1.0.
+   * At larger values of M, the huber criterion becomes more similar to least squares regression;
+   * for small values of M, the criterion is more similar to L1 regression.
+   * Default is 1.35 to get as much robustness as possible while retaining
+   * 95% statistical efficiency for normally distributed data.
+   */
+  @Since("2.1.0")
+  final val m = new DoubleParam(this, "m", "The shape parameter to control the amount of " +
+    "robustness. Must be > 1.0.", ParamValidators.gt(1.0))
+
+  /** @group getParam */
+  @Since("2.1.0")
+  def getM: Double = $(m)
+}
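As the parameter doc above says, M sets where the huber criterion switches from squared loss to absolute loss. A minimal Python sketch of the pointwise criterion, for illustration only (the function name and exact constants are ours, not taken from this PR):

```python
def huber(r, m=1.35):
    # Quadratic for |r| <= m (like least squares), linear beyond (like L1).
    a = abs(r)
    if a <= m:
        return 0.5 * r * r
    return m * (a - 0.5 * m)
```

At the default m = 1.35 a unit residual is penalized exactly as in least squares (0.5), while a residual of 10 contributes about 12.6 rather than 50, which is why large outliers pull the fit far less.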
+
+/**
+ * Robust regression.
+ *
+ * The learning objective is to minimize the huber loss, with regularization.
+ *
+ * The robust regression optimizes the squared loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\leq M }}}
+ * and the absolute loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\geq M }}},
+ * where \beta and \sigma are parameters to be optimized.
+ *
+ * This supports two types of regularization: None and L2.
+ *
+ * This estimator is different from the R implementation of Robust Regression
+ * ([[http://www.ats.ucla.edu/stat/r/dae/rreg.htm]]) because the R implementation does a
+ * weighted least squares implementation with weights given to each sample on the basis
+ * of how much the residual is greater than a certain threshold.
+ */
+@Since("2.1.0")
+class RobustRegression @Since("2.1.0") (@Since("2.1.0") override val uid: String)
+  extends Regressor[Vector, RobustRegression, RobustRegressionModel]
+  with RobustRegressionParams with Logging {
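The doc comment above describes a joint minimization over \beta and \sigma. A hedged NumPy sketch of such a joint objective, assuming a concomitant-scale formulation of the huber loss (comparable to scikit-learn's HuberRegressor); the PR's actual loss aggregator may differ in detail:

```python
import numpy as np

def huber_objective(beta, sigma, X, y, m=1.35, reg_param=0.0):
    # Scaled residuals r_i = (y_i - x_i . beta) / sigma.
    r = (y - X @ beta) / sigma
    # Squared loss where |r| <= m, absolute (linear) loss where |r| > m.
    loss = np.where(np.abs(r) <= m, 0.5 * r * r, m * (np.abs(r) - 0.5 * m))
    # Adding sigma per sample keeps the problem jointly convex in (beta, sigma)
    # and prevents sigma from collapsing to 0; plus L2 regularization on beta.
    return float(np.sum(sigma + sigma * loss) + reg_param * np.dot(beta, beta))
```

Minimizing over sigma together with beta (with sigma kept positive, presumably via the L-BFGS-B bounds the quoted imports suggest) makes the coefficient estimates independent of an externally fixed residual scale.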
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("robReg"))
+
+  /**
+   * Sets the value of param [[m]].
+   * Default is 1.35.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setM(value: Double): this.type = set(m, value)
+  setDefault(m -> 1.35)
+
+  /**
+   * Sets the regularization parameter.
+   * Default is 0.0.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setRegParam(value: Double): this.type = set(regParam, value)
+  setDefault(regParam -> 0.0)
+
+  /**
+   * Sets if we should fit the intercept.
+   * Default is true.
+   * @group setParam
  

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2017-08-07 Thread WeichenXu123
Github user WeichenXu123 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r131762320
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,497 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.regression
+
+import scala.collection.mutable
+
+import breeze.linalg.{norm => brznorm, DenseVector => BDV}
+import breeze.optimize.{LBFGS => BreezeLBFGS, _}
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.Since
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.internal.Logging
+import org.apache.spark.ml.PredictorParams
+import org.apache.spark.ml.feature.Instance
+import org.apache.spark.ml.linalg.{Vector, Vectors}
+import org.apache.spark.ml.linalg.BLAS._
+import org.apache.spark.ml.param.{DoubleParam, ParamMap, ParamValidators}
+import org.apache.spark.ml.param.shared._
+import org.apache.spark.ml.util._
+import org.apache.spark.mllib.linalg.VectorImplicits._
+import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
+import org.apache.spark.rdd.RDD
+import org.apache.spark.sql.Dataset
+import org.apache.spark.sql.functions._
+import org.apache.spark.storage.StorageLevel
+
+/**
+ * Params for robust regression.
+ */
+private[regression] trait RobustRegressionParams extends PredictorParams with HasRegParam
+  with HasMaxIter with HasTol with HasFitIntercept with HasStandardization with HasWeightCol {
+
+  /**
+   * The shape parameter to control the amount of robustness. Must be > 1.0.
+   * At larger values of M, the huber criterion becomes more similar to least squares regression;
+   * for small values of M, the criterion is more similar to L1 regression.
+   * Default is 1.35 to get as much robustness as possible while retaining
+   * 95% statistical efficiency for normally distributed data.
+   */
+  @Since("2.1.0")
+  final val m = new DoubleParam(this, "m", "The shape parameter to control the amount of " +
--- End diff --

Change @Since





[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2017-08-07 Thread WeichenXu123
Github user WeichenXu123 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r131764683
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,466 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.regression
+
+import scala.collection.mutable
+
+import breeze.linalg.{DenseVector => BDV}
+import breeze.optimize.{CachedDiffFunction, DiffFunction, LBFGS => BreezeLBFGS, LBFGSB => BreezeLBFGSB}
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.Since
+import org.apache.spark.internal.Logging
+import org.apache.spark.ml.PredictorParams
+import org.apache.spark.ml.feature.Instance
+import org.apache.spark.ml.linalg.{Vector, Vectors}
+import org.apache.spark.ml.linalg.BLAS._
+import org.apache.spark.ml.param.{DoubleParam, ParamMap, ParamValidators}
+import org.apache.spark.ml.param.shared._
+import org.apache.spark.ml.util._
+import org.apache.spark.mllib.linalg.VectorImplicits._
+import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
+import org.apache.spark.rdd.RDD
+import org.apache.spark.sql.{Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.storage.StorageLevel
+
+/**
+ * Params for robust regression.
+ */
+private[regression] trait RobustRegressionParams extends PredictorParams with HasRegParam
+  with HasMaxIter with HasTol with HasFitIntercept with HasStandardization with HasWeightCol {
+
+  /**
+   * The shape parameter to control the amount of robustness. Must be > 1.0.
+   * At larger values of M, the huber criterion becomes more similar to least squares regression;
+   * for small values of M, the criterion is more similar to L1 regression.
+   * Default is 1.35 to get as much robustness as possible while retaining
+   * 95% statistical efficiency for normally distributed data.
+   */
+  @Since("2.1.0")
+  final val m = new DoubleParam(this, "m", "The shape parameter to control the amount of " +
+    "robustness. Must be > 1.0.", ParamValidators.gt(1.0))
+
+  /** @group getParam */
+  @Since("2.1.0")
+  def getM: Double = $(m)
+}
+
+/**
+ * Robust regression.
+ *
+ * The learning objective is to minimize the huber loss, with regularization.
+ *
+ * The robust regression optimizes the squared loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\leq M }}}
+ * and the absolute loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\geq M }}},
+ * where \beta and \sigma are parameters to be optimized.
+ *
+ * This supports two types of regularization: None and L2.
+ *
+ * This estimator is different from the R implementation of Robust Regression
+ * ([[http://www.ats.ucla.edu/stat/r/dae/rreg.htm]]) because the R implementation does a
+ * weighted least squares implementation with weights given to each sample on the basis
+ * of how much the residual is greater than a certain threshold.
+ */
+@Since("2.1.0")
+class RobustRegression @Since("2.1.0") (@Since("2.1.0") override val uid: String)
+  extends Regressor[Vector, RobustRegression, RobustRegressionModel]
+  with RobustRegressionParams with Logging {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("robReg"))
+
+  /**
+   * Sets the value of param [[m]].
+   * Default is 1.35.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setM(value: Double): this.type = set(m, value)
+  setDefault(m -> 1.35)
+
+  /**
+   * Sets the regularization parameter.
+   * Default is 0.0.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setRegParam(value: Double): this.type = set(regParam, value)
+  setDefault(regParam -> 0.0)
+
+  /**
+   * Sets if we should fit the intercept.
+   * Default is true.
+   * @group setParam
+

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-25 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r72033498
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,466 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.regression
+
+import scala.collection.mutable
+
+import breeze.linalg.{DenseVector => BDV}
+import breeze.optimize.{CachedDiffFunction, DiffFunction, LBFGS => BreezeLBFGS, LBFGSB => BreezeLBFGSB}
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.Since
+import org.apache.spark.internal.Logging
+import org.apache.spark.ml.PredictorParams
+import org.apache.spark.ml.feature.Instance
+import org.apache.spark.ml.linalg.{Vector, Vectors}
+import org.apache.spark.ml.linalg.BLAS._
+import org.apache.spark.ml.param.{DoubleParam, ParamMap, ParamValidators}
+import org.apache.spark.ml.param.shared._
+import org.apache.spark.ml.util._
+import org.apache.spark.mllib.linalg.VectorImplicits._
+import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
+import org.apache.spark.rdd.RDD
+import org.apache.spark.sql.{Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.storage.StorageLevel
+
+/**
+ * Params for robust regression.
+ */
+private[regression] trait RobustRegressionParams extends PredictorParams with HasRegParam
+  with HasMaxIter with HasTol with HasFitIntercept with HasStandardization with HasWeightCol {
+
+  /**
+   * The shape parameter to control the amount of robustness. Must be > 1.0.
+   * At larger values of M, the huber criterion becomes more similar to least squares regression;
+   * for small values of M, the criterion is more similar to L1 regression.
+   * Default is 1.35 to get as much robustness as possible while retaining
+   * 95% statistical efficiency for normally distributed data.
+   */
+  @Since("2.1.0")
+  final val m = new DoubleParam(this, "m", "The shape parameter to control the amount of " +
+    "robustness. Must be > 1.0.", ParamValidators.gt(1.0))
+
+  /** @group getParam */
+  @Since("2.1.0")
+  def getM: Double = $(m)
+}
+
+/**
+ * Robust regression.
+ *
+ * The learning objective is to minimize the huber loss, with regularization.
+ *
+ * The robust regression optimizes the squared loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\leq M }}}
+ * and the absolute loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\geq M }}},
+ * where \beta and \sigma are parameters to be optimized.
+ *
+ * This supports two types of regularization: None and L2.
+ *
+ * This estimator is different from the R implementation of Robust Regression
+ * ([[http://www.ats.ucla.edu/stat/r/dae/rreg.htm]]) because the R implementation does a
+ * weighted least squares implementation with weights given to each sample on the basis
+ * of how much the residual is greater than a certain threshold.
+ */
+@Since("2.1.0")
+class RobustRegression @Since("2.1.0") (@Since("2.1.0") override val uid: String)
+  extends Regressor[Vector, RobustRegression, RobustRegressionModel]
+  with RobustRegressionParams with Logging {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("robReg"))
+
+  /**
+   * Sets the value of param [[m]].
+   * Default is 1.35.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setM(value: Double): this.type = set(m, value)
+  setDefault(m -> 1.35)
+
+  /**
+   * Sets the regularization parameter.
+   * Default is 0.0.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setRegParam(value: Double): this.type = set(regParam, value)
+  setDefault(regParam -> 0.0)
+
+  /**
+   * Sets if we should fit the intercept.
+   * Default is true.
+   * @group setParam
+   

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-25 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r72031141
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,466 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.regression
+
+import scala.collection.mutable
+
+import breeze.linalg.{DenseVector => BDV}
+import breeze.optimize.{CachedDiffFunction, DiffFunction, LBFGS => BreezeLBFGS, LBFGSB => BreezeLBFGSB}
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.Since
+import org.apache.spark.internal.Logging
+import org.apache.spark.ml.PredictorParams
+import org.apache.spark.ml.feature.Instance
+import org.apache.spark.ml.linalg.{Vector, Vectors}
+import org.apache.spark.ml.linalg.BLAS._
+import org.apache.spark.ml.param.{DoubleParam, ParamMap, ParamValidators}
+import org.apache.spark.ml.param.shared._
+import org.apache.spark.ml.util._
+import org.apache.spark.mllib.linalg.VectorImplicits._
+import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
+import org.apache.spark.rdd.RDD
+import org.apache.spark.sql.{Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.storage.StorageLevel
+
+/**
+ * Params for robust regression.
+ */
+private[regression] trait RobustRegressionParams extends PredictorParams with HasRegParam
+  with HasMaxIter with HasTol with HasFitIntercept with HasStandardization with HasWeightCol {
+
+  /**
+   * The shape parameter to control the amount of robustness. Must be > 1.0.
+   * At larger values of M, the huber criterion becomes more similar to least squares regression;
+   * for small values of M, the criterion is more similar to L1 regression.
+   * Default is 1.35 to get as much robustness as possible while retaining
+   * 95% statistical efficiency for normally distributed data.
+   */
+  @Since("2.1.0")
+  final val m = new DoubleParam(this, "m", "The shape parameter to control the amount of " +
+    "robustness. Must be > 1.0.", ParamValidators.gt(1.0))
+
+  /** @group getParam */
+  @Since("2.1.0")
+  def getM: Double = $(m)
+}
+
+/**
+ * Robust regression.
+ *
+ * The learning objective is to minimize the huber loss, with regularization.
+ *
+ * The robust regression optimizes the squared loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\leq M }}}
+ * and the absolute loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\geq M }}},
+ * where \beta and \sigma are parameters to be optimized.
+ *
+ * This supports two types of regularization: None and L2.
+ *
+ * This estimator is different from the R implementation of Robust Regression
+ * ([[http://www.ats.ucla.edu/stat/r/dae/rreg.htm]]) because the R implementation does a
+ * weighted least squares implementation with weights given to each sample on the basis
+ * of how much the residual is greater than a certain threshold.
+ */
+@Since("2.1.0")
+class RobustRegression @Since("2.1.0") (@Since("2.1.0") override val uid: String)
+  extends Regressor[Vector, RobustRegression, RobustRegressionModel]
+  with RobustRegressionParams with Logging {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("robReg"))
+
+  /**
+   * Sets the value of param [[m]].
+   * Default is 1.35.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setM(value: Double): this.type = set(m, value)
+  setDefault(m -> 1.35)
+
+  /**
+   * Sets the regularization parameter.
+   * Default is 0.0.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setRegParam(value: Double): this.type = set(regParam, value)
+  setDefault(regParam -> 0.0)
+
+  /**
+   * Sets if we should fit the intercept.
+   * Default is true.
+   * @group setParam
+   

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-25 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r72031054
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,473 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.regression
+
+import scala.collection.mutable
+
+import breeze.linalg.{DenseVector => BDV}
+import breeze.optimize.{CachedDiffFunction, DiffFunction, LBFGS => BreezeLBFGS, LBFGSB => BreezeLBFGSB}
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.Since
+import org.apache.spark.broadcast.Broadcast
+import org.apache.spark.internal.Logging
+import org.apache.spark.ml.PredictorParams
+import org.apache.spark.ml.feature.Instance
+import org.apache.spark.ml.linalg.{Vector, Vectors}
+import org.apache.spark.ml.linalg.BLAS._
+import org.apache.spark.ml.param.{DoubleParam, ParamMap, ParamValidators}
+import org.apache.spark.ml.param.shared._
+import org.apache.spark.ml.util._
+import org.apache.spark.mllib.linalg.VectorImplicits._
+import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
+import org.apache.spark.rdd.RDD
+import org.apache.spark.sql.{Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.storage.StorageLevel
+
+/**
+ * Params for robust regression.
+ */
+private[regression] trait RobustRegressionParams extends PredictorParams with HasRegParam
+  with HasMaxIter with HasTol with HasFitIntercept with HasStandardization with HasWeightCol {
+
+  /**
+   * The shape parameter to control the amount of robustness. Must be > 1.0.
+   * At larger values of M, the huber criterion becomes more similar to least squares regression;
+   * for small values of M, the criterion is more similar to L1 regression.
+   * Default is 1.35 to get as much robustness as possible while retaining
+   * 95% statistical efficiency for normally distributed data.
+   */
+  @Since("2.1.0")
+  final val m = new DoubleParam(this, "m", "The shape parameter to control the amount of " +
+    "robustness. Must be > 1.0.", ParamValidators.gt(1.0))
+
+  /** @group getParam */
+  @Since("2.1.0")
+  def getM: Double = $(m)
+}
+
+/**
+ * Robust regression.
+ *
+ * The learning objective is to minimize the huber loss, with regularization.
+ *
+ * The robust regression optimizes the squared loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\leq M }}}
+ * and the absolute loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\geq M }}},
+ * where \beta and \sigma are parameters to be optimized.
+ *
+ * This supports two types of regularization: None and L2.
+ *
+ * This estimator is different from the R implementation of Robust Regression
+ * ([[http://www.ats.ucla.edu/stat/r/dae/rreg.htm]]) because the R implementation does a
+ * weighted least squares implementation with weights given to each sample on the basis
+ * of how much the residual is greater than a certain threshold.
+ */
+@Since("2.1.0")
+class RobustRegression @Since("2.1.0") (@Since("2.1.0") override val uid: String)
+  extends Regressor[Vector, RobustRegression, RobustRegressionModel]
+  with RobustRegressionParams with Logging {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("robReg"))
+
+  /**
+   * Sets the value of param [[m]].
+   * Default is 1.35.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setM(value: Double): this.type = set(m, value)
+  setDefault(m -> 1.35)
+
+  /**
+   * Sets the regularization parameter.
+   * Default is 0.0.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setRegParam(value: Double): this.type = set(regParam, value)
+  setDefault(regParam -> 0.0)
+
+  /**
+   * Sets if we should fit the intercept.
+   * De

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992880
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,466 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.regression
+
+import scala.collection.mutable
+
+import breeze.linalg.{DenseVector => BDV}
+import breeze.optimize.{CachedDiffFunction, DiffFunction, LBFGS => BreezeLBFGS, LBFGSB => BreezeLBFGSB}
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.Since
+import org.apache.spark.internal.Logging
+import org.apache.spark.ml.PredictorParams
+import org.apache.spark.ml.feature.Instance
+import org.apache.spark.ml.linalg.{Vector, Vectors}
+import org.apache.spark.ml.linalg.BLAS._
+import org.apache.spark.ml.param.{DoubleParam, ParamMap, ParamValidators}
+import org.apache.spark.ml.param.shared._
+import org.apache.spark.ml.util._
+import org.apache.spark.mllib.linalg.VectorImplicits._
+import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
+import org.apache.spark.rdd.RDD
+import org.apache.spark.sql.{Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.storage.StorageLevel
+
+/**
+ * Params for robust regression.
+ */
+private[regression] trait RobustRegressionParams extends PredictorParams with HasRegParam
+  with HasMaxIter with HasTol with HasFitIntercept with HasStandardization with HasWeightCol {
+
+  /**
+   * The shape parameter to control the amount of robustness. Must be > 1.0.
+   * At larger values of M, the huber criterion becomes more similar to least squares regression;
+   * for small values of M, the criterion is more similar to L1 regression.
+   * Default is 1.35 to get as much robustness as possible while retaining
+   * 95% statistical efficiency for normally distributed data.
+   */
+  @Since("2.1.0")
+  final val m = new DoubleParam(this, "m", "The shape parameter to control the amount of " +
+    "robustness. Must be > 1.0.", ParamValidators.gt(1.0))
+
+  /** @group getParam */
+  @Since("2.1.0")
+  def getM: Double = $(m)
+}
+
+/**
+ * Robust regression.
+ *
+ * The learning objective is to minimize the huber loss, with regularization.
+ *
+ * The robust regression optimizes the squared loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\leq M }}}
+ * and the absolute loss for the samples where
+ * {{{ |\frac{(y - X \beta)}{\sigma}|\geq M }}},
+ * where \beta and \sigma are parameters to be optimized.
+ *
+ * This supports two types of regularization: None and L2.
+ *
+ * This estimator is different from the R implementation of Robust Regression
+ * ([[http://www.ats.ucla.edu/stat/r/dae/rreg.htm]]) because the R implementation does a
+ * weighted least squares implementation with weights given to each sample on the basis
+ * of how much the residual is greater than a certain threshold.
+ */
+@Since("2.1.0")
+class RobustRegression @Since("2.1.0") (@Since("2.1.0") override val uid: String)
+  extends Regressor[Vector, RobustRegression, RobustRegressionModel]
+  with RobustRegressionParams with Logging {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("robReg"))
+
+  /**
+   * Sets the value of param [[m]].
+   * Default is 1.35.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setM(value: Double): this.type = set(m, value)
+  setDefault(m -> 1.35)
+
+  /**
+   * Sets the regularization parameter.
+   * Default is 0.0.
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setRegParam(value: Double): this.type = set(regParam, value)
+  setDefault(regParam -> 0.0)
+
+  /**
+   * Sets if we should fit the intercept.
+   * Default is true.
+   * @group setParam
+   */
 

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992598
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992548
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992524
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992526
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992494
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992474
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992412
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992373
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,466 @@
+@Since("2.1.0")
+class RobustRegression @Since("2.1.0") (@Since("2.1.0") override val uid: 
String)
+  extends Regressor[Vector, RobustRegression, RobustRegressionModel]
+  with RobustRegressionParams with Logging {
+
+  @Since("2.1.0")
--- End diff --

I don't think you need `@Since` at every symbol in a class that was itself 
annotated with the same `@Since`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-24 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71992352
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,466 @@
+@Since("2.1.0")
+class RobustRegression @Since("2.1.0") (@Since("2.1.0") override val uid: 
String)
--- End diff --

Are all `@Since` required? I'd think the one on line 82 would be enough.





[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-23 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/14326#discussion_r71975650
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/RobustRegression.scala ---
@@ -0,0 +1,466 @@

[GitHub] spark pull request #14326: [SPARK-3181] [ML] Implement RobustRegression with...

2016-07-23 Thread yanboliang
GitHub user yanboliang opened a pull request:

https://github.com/apache/spark/pull/14326

[SPARK-3181] [ML] Implement RobustRegression with huber loss.

## What changes were proposed in this pull request?
The current implementation is a straightforward port of the Python 
scikit-learn ```HuberRegressor```, so it produces the same results.
The code is posted for discussion, so please overlook trivial issues for now; 
we may want a slightly different design for the Spark implementation.

Here are some major issues that should be discussed:
* Objective function.

We use Eq.(6) in [A robust hybrid of lasso and ridge 
regression](http://statweb.stanford.edu/~owen/reports/hhu.pdf) as the objective 
function.

![image](https://cloud.githubusercontent.com/assets/1962026/17076521/02a3f054-5069-11e6-895d-3c904e056ba2.png)
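In case the formula screenshot does not render in the archive: the Huber criterion H_M at the heart of Eq.(6) can be sketched as a stand-alone function (the name `huber_criterion` is mine, not from the patch):

```python
def huber_criterion(z, m=1.35):
    """Huber criterion H_M(z): quadratic near zero, linear in the tails.

    H_M(z) = z^2             if |z| <= M
           = 2*M*|z| - M^2   otherwise
    The two branches agree at |z| = M, so H_M is continuous with a
    continuous first derivative; in Eq.(6) it is applied to the scaled
    residual (y_i - x_i . beta) / sigma.
    """
    if abs(z) <= m:
        return z * z
    return 2.0 * m * abs(z) - m * m
```

Larger M widens the quadratic zone (behavior closer to least squares), while smaller M pushes the criterion toward L1 regression, matching the doc of the `m` param.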
But the convention differs from other Spark ML code such as 
```LinearRegression``` in two respects:
- The loss is the total loss rather than the mean loss. We use 
```lossSum/weightSum``` as the mean loss in ```LinearRegression```.
- We do not multiply the loss function and the L2 regularization by 1/2. This 
is not a problem, since multiplying the whole formula by a constant factor 
does not change the result.
So should we switch to a modified objective function like the following, 
which would be consistent with other Spark ML code?

![image](https://cloud.githubusercontent.com/assets/1962026/17076522/14eceb4e-5069-11e6-84ae-ecfaf3ea12ed.png)
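The "constant factor" point can be checked numerically. A small NumPy sketch (sigma fixed at 1 to keep it short; all names are mine) compares the Eq.(6)-style total objective against a mean-loss version scaled by 1/(2·n):

```python
import numpy as np

def huber(z, m=1.35):
    # Piecewise Huber criterion, vectorized over residuals.
    z = np.abs(z)
    return np.where(z <= m, z * z, 2.0 * m * z - m * m)

def total_objective(beta, X, y, reg, m=1.35):
    # Eq.(6)-style convention: total loss, no 1/2 factor.
    return huber(y - X * beta, m).sum() + reg * beta ** 2

def mean_objective(beta, X, y, reg, n, m=1.35):
    # LinearRegression-style convention: the same objective, including
    # the regularization term, scaled by the constant 1 / (2 * n).
    return total_objective(beta, X, y, reg, m) / (2.0 * n)

X = np.array([0.5, 1.0, 1.5, 2.0])
y = 3.0 * X + np.array([0.05, -0.1, 0.0, 5.0])  # last point is an outlier
grid = np.linspace(0.0, 6.0, 601)
total = [total_objective(b, X, y, reg=0.1) for b in grid]
mean = [mean_objective(b, X, y, reg=0.1, n=len(X)) for b in grid]
# Both conventions select the same minimizer over the grid.
print(grid[int(np.argmin(total))], grid[int(np.argmin(mean))])
```

Note the caveat: this equivalence holds only because the regularization term is rescaled together with the loss; averaging only the loss while keeping ```regParam``` fixed would change the effective regularization strength.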
* Implement a new class ```RobustRegression``` or a new loss function for 
```LinearRegression```.

Both ```LinearRegression``` and ```RobustRegression``` accomplish the same 
goal, but the output of ```fit``` will differ: ```LinearRegressionModel``` 
versus ```RobustRegressionModel```. The former only contains 
```coefficients``` and ```intercept```, while the latter contains 
```coefficients```, ```intercept``` and ```scale/sigma``` (and possibly even 
the outlier samples, similar to sklearn's ```HuberRegressor.outliers_```). 
Combining the two models into one would also raise save/load compatibility 
issues. One workaround is to drop ```scale/sigma``` and have ```fit``` with 
this Huber cost function still output a ```LinearRegressionModel```, but I 
don't think that is appropriate, since it would lose some model attributes. 
So I implemented ```RobustRegression``` as a new class, and we can port this 
loss function to ```LinearRegression``` later if needed.
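For reference, the scikit-learn estimator being ported exposes exactly the extra attributes discussed here. A sketch (assumes scikit-learn is installed; the data and parameters are illustrative):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)
y[:10] += 20.0  # inject a few gross outliers

model = HuberRegressor(epsilon=1.35).fit(X, y)
# Besides coef_ and intercept_, the fitted model carries the jointly
# optimized scale (the sigma above) and a boolean outlier mask --
# the extra attributes a LinearRegressionModel could not hold.
print(model.coef_)           # close to [1.0, 2.0] despite the outliers
print(model.scale_)          # robust estimate of the residual scale
print(model.outliers_[:10])  # the injected outliers are flagged
```

A sample is flagged in ```outliers_``` when its absolute residual exceeds ```epsilon * scale_```, which is the same |(y - Xβ)/σ| > M region described in the class doc.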

## How was this patch tested?
Unit tests.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yanboliang/spark spark-3181

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14326.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14326


commit 8fd0ca1954f964e89cf81379fdaff0844afd7253
Author: Yanbo Liang 
Date:   2016-07-23T06:54:58Z

Implement RobustRegression with huber loss.



