[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost

2018-05-10 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@thesuperzapper unfortunately I haven't been able to keep up-to-date with 
Spark over the past year (first year of grad school has been occupying me). I 
don't think I can make any contributions right now or for a while. Are you 
thinking about taking over?





[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost

2017-07-17 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@HyukjinKwon sorry for the inactivity (I have some free time now). 
@jkbradley is SPARK-4240 still on the roadmap? I can resume work on this (and 
the subsequent GBT work)





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2017-03-15 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r106311077
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/tree/impurity/ApproxBernoulliImpurity.scala
 ---
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.tree.impurity
+
+import org.apache.spark.annotation.{DeveloperApi, Since}
+import org.apache.spark.mllib.tree.impurity._
+
+/**
+ * [[ApproxBernoulliImpurity]] currently uses variance as a (proxy) 
impurity measure
+ * during tree construction. The main purpose of the class is to have an 
alternative
+ * leaf prediction calculation.
+ *
+ * Only data with examples each of weight 1.0 is supported.
+ *
+ * Class for calculating variance during regression.
+ */
+@Since("2.1")
+private[spark] object ApproxBernoulliImpurity extends Impurity {
+
+  /**
+   * :: DeveloperApi ::
+   * information calculation for multiclass classification
+   * @param counts Array[Double] with counts for each label
+   * @param totalCount sum of counts for all labels
+   * @return information value, or 0 if totalCount = 0
+   */
+  @Since("2.1")
+  @DeveloperApi
+  override def calculate(counts: Array[Double], totalCount: Double): Double =
+    throw new UnsupportedOperationException("ApproxBernoulliImpurity.calculate")
+
+  /**
+   * :: DeveloperApi ::
+   * variance calculation
+   * @param count number of instances
+   * @param sum sum of labels
+   * @param sumSquares summation of squares of the labels
+   * @return information value, or 0 if count = 0
+   */
+  @Since("2.1")
+  @DeveloperApi
+  override def calculate(count: Double, sum: Double, sumSquares: Double): Double = {
+    Variance.calculate(count, sum, sumSquares)
+  }
+}
+
+/**
+ * Class for updating views of a vector of sufficient statistics,
+ * in order to compute impurity from a sample.
+ * Note: Instances of this class do not hold the data; they operate on 
views of the data.
+ */
+private[spark] class ApproxBernoulliAggregator
+  extends ImpurityAggregator(statsSize = 4) with Serializable {
+
+  /**
+   * Update stats for one (node, feature, bin) with the given label.
+   * @param allStats  Flat stats array, with stats for this (node, feature, bin) contiguous.
+   * @param offset  Start index of stats for this (node, feature, bin).
+   */
+  def update(allStats: Array[Double], offset: Int, label: Double, instanceWeight: Double): Unit = {
+    allStats(offset) += instanceWeight
+    allStats(offset + 1) += instanceWeight * label
+    allStats(offset + 2) += instanceWeight * label * label
+    allStats(offset + 3) += instanceWeight * Math.abs(label)
+  }
+
+  /**
+   * Get an [[ImpurityCalculator]] for a (node, feature, bin).
+   * @param allStats  Flat stats array, with stats for this (node, feature, bin) contiguous.
+   * @param offset  Start index of stats for this (node, feature, bin).
+   */
+  def getCalculator(allStats: Array[Double], offset: Int): ApproxBernoulliCalculator = {
+    new ApproxBernoulliCalculator(allStats.view(offset, offset + statsSize).toArray)
+  }
+}
+
+/**
+ * Stores statistics for one (node, feature, bin) for calculating impurity.
+ * Unlike [[ImpurityAggregator]], this class stores its own data and is 
for a specific
+ * (node, feature, bin).
+ * @param stats  Array of sufficient statistics for a (node, feature, bin).
+ */
+private[spark] class ApproxBernoulliCalculator(stats: Array[Double])
+  extends ImpurityCalculator(stats) {
+
+  require(stats.length == 4,
+    s"ApproxBernoulliCalculator requires sufficient statistics array stats to be of length 4," +
+      s" but was given array of length ${stats.length}.")
+
+  /**
+
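For context on the truncated calculator above: the four statistics the aggregator tracks (count, sum, sumSquares, sumAbs) are exactly what Friedman's one-step Newton leaf estimate for log loss needs, since |r|(2 - |r|) = 2|r| - r^2. A hedged, standalone sketch of how such a prediction could be computed (an illustration, not the PR's exact code):

```scala
// Hypothetical illustration: an approximate-Bernoulli leaf value from the
// sufficient statistics (count, sum, sumSquares, sumAbs) over pseudo-residuals r.
// Friedman's one-step Newton estimate for log loss is
//   sum(r) / sum(|r| * (2 - |r|)) == sum / (2 * sumAbs - sumSquares),
// using the identity |r| * (2 - |r|) = 2|r| - r^2.
def approxBernoulliPrediction(stats: Array[Double]): Double = {
  val Array(count, sum, sumSquares, sumAbs) = stats
  val denom = 2.0 * sumAbs - sumSquares
  if (count == 0.0 || denom == 0.0) 0.0 else sum / denom
}
```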

[GitHub] spark issue #16495: SPARK-16920: Add a stress test for evaluateEachIteration...

2017-02-04 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/16495
  
Yes, sorry for my wording. A unit test is indeed an inappropriate place for 
stress tests. An offline test would be sufficient to verify that the O(N) 
implementation is an improvement over the O(N^2) one. Ideally the stress test 
would be described in the PR message so that anyone could replicate it, 
perhaps even with a link to a gist containing the script to run it.
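For instance, a minimal offline timing sketch (illustrative only, assuming a fitted `model: GradientBoostedTreesModel` and a validation `data: RDD[LabeledPoint]` are already in scope):

```scala
import org.apache.spark.mllib.tree.loss.LogLoss

// Crude wall-clock timer for a single run.
def timed[T](label: String)(body: => T): T = {
  val start = System.nanoTime()
  val result = body
  println(f"$label took ${(System.nanoTime() - start) / 1e9}%.2f s")
  result
}

// evaluateEachIteration computes per-iteration error in one pass over the
// ensemble (O(N)); compare this timing against the naive approach of
// re-evaluating a truncated ensemble at every length (O(N^2)).
val errors = timed("evaluateEachIteration")(model.evaluateEachIteration(data, LogLoss))
```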





[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-11-01 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@jkbradley There seem to be more issues with deprecating impurity:

[error] [warn] 
/home/jenkins/workspace/SparkPullRequestBuilder/mllib/src/main/scala/org/apache/spark/ml/classification/GBTClassifier.scala:114:
 method setImpurity overrides concrete, non-deprecated symbol(s):setImpurity
[error] [warn]   override def setImpurity(value: String): this.type = 
super.setImpurity(value)
[error] [warn] 
[error] [warn] 
/home/jenkins/workspace/SparkPullRequestBuilder/mllib/src/main/scala/org/apache/spark/ml/regression/GBTRegressor.scala:111:
 method setImpurity overrides concrete, non-deprecated symbol(s):setImpurity
[error] [warn]   override def setImpurity(value: String): this.type = 
super.setImpurity(value)
[error] [warn]

The shared superclass for GBT* (Tree*Params) can't have setImpurity 
deprecated, because it's also inherited by derived classes that should still 
allow setting the impurity. I find it weird that a derived class can't add a 
deprecation, though. Why is that rule there? Can I disable it?
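A minimal standalone sketch reproducing the warning (hypothetical names; with -Xfatal-warnings this becomes the build error quoted above):

```scala
trait TreeParamsLike {
  def setImpurity(value: String): this.type = this  // concrete, non-deprecated
}

class GBTLike extends TreeParamsLike {
  // scalac: "method setImpurity overrides concrete, non-deprecated symbol(s)"
  @deprecated("impurity is determined by the GBT loss", "2.1")
  override def setImpurity(value: String): this.type = super.setImpurity(value)
}
```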





[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-11-01 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@jkbradley it seems I can only deprecate `setImpurity`: the `impurity` value 
itself can't be deprecated, since it's used internally (which would trigger a 
fatal warning), and getImpurity has scaladoc shared with other classes where 
its use is still valid. In any case, `setImpurity` is the only one that needs 
the warning.
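A hedged sketch of the resulting arrangement (names assumed from the PR): only the setter carries the deprecation, while the `Param` and getter stay untouched because they are used internally and share scaladoc:

```scala
import org.apache.spark.ml.param.{Param, Params}

private[ml] trait GBTParamsSketch extends Params {
  val impurity: Param[String] =
    new Param[String](this, "impurity", "impurity measure (ignored for GBTs)")

  @deprecated("GBT impurity is derived from the loss type", "2.1")
  def setImpurity(value: String): this.type = set(impurity, value)

  def getImpurity: String = $(impurity).toLowerCase  // used internally, not deprecated
}
```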





[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-10-15 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@sethah You raise good points.

Regarding (1), I don't know if it is actually true. I don't want to speak 
for @jkbradley, but I was just going off of "software engineering intuition" 
about backwards compatibility of the algorithm's behavior. But let's consider 
an analogous example: if LogisticRegression were using regular batch GD and we 
moved it to L-BFGS, it wouldn't make much sense to offer a new option for "gd".

I think the question is whether reverting to the original behavior is common 
enough to merit a larger, clunkier, and more confusing API. And as the notion 
of "original" will change over time, I'm starting to see the attractiveness of 
@sethah's original proposition to get rid of this option entirely, and let us 
do whatever we want under the hood impurity-wise.

**TL;DR:** I can see at no point a data scientist saying "you know what 
will help my l1 error? A mean predictor!"

The strongest point in favor of this that comes to mind is the following: 
people who would change the impurity metric are people tuning a GBT model; but 
there's no good reason to use variance-based impurity with mean predictions 
for a loss that isn't optimized by those choices! If model tuning that 
compares `.setImpurity("variance")` vs `.setImpurity("loss-based")` happens to 
show that variance does better under CV, all you've done is grid search over 
GBT parameters to overfit noise in your data.
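Concretely, the kind of CV comparison being cautioned against would look something like this (a sketch assuming the impurity param proposed in this PR):

```scala
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.regression.GBTRegressor
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val gbt = new GBTRegressor().setLossType("absolute")
val grid = new ParamGridBuilder()
  .addGrid(gbt.impurity, Array("variance", "loss-based"))  // the questionable knob
  .build()
val cv = new CrossValidator()
  .setEstimator(gbt)
  .setEvaluator(new RegressionEvaluator().setMetricName("mae"))
  .setEstimatorParamMaps(grid)
  .setNumFolds(5)
// Any apparent win for "variance" here is, per the argument above, likely
// just overfitting to noise in the validation folds.
```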





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-10-13 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r83285393
  
--- Diff: python/pyspark/ml/regression.py ---
@@ -1003,20 +1003,20 @@ class GBTRegressor(JavaEstimator, HasFeaturesCol, 
HasLabelCol, HasPredictionCol,
 def __init__(self, featuresCol="features", labelCol="label", 
predictionCol="prediction",
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-10-13 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r83278154
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/util/GBTSuiteHelper.scala ---
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.util
+
+import scala.collection.mutable.ArrayBuffer
+
+import org.scalactic.TolerantNumerics
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.ml.classification.GBTClassifier
+import org.apache.spark.ml.feature.LabeledPoint
+import org.apache.spark.ml.linalg._
+import org.apache.spark.ml.regression.GBTRegressor
+import org.apache.spark.ml.tree.{InternalNode, LeafNode, Node}
+import org.apache.spark.mllib.tree.impurity.{ImpurityAggregator, 
ImpurityCalculator}
+import org.apache.spark.mllib.tree.loss.Loss
+import org.apache.spark.sql._
+import org.apache.spark.sql.functions._
+
+object GBTSuiteHelper extends SparkFunSuite {
+  implicit val approxEquals = TolerantNumerics.tolerantDoubleEquality(1e-3)
+
+  /**
+   * @param labels set of GBT labels
+   * @param agg the aggregator to use
+   * @return the calculator from aggregation on the labels
+   */
+  def computeCalculator(labels: Seq[Double],
+      agg: ImpurityAggregator): ImpurityCalculator = {
+    implicit val encoder = Encoders.scalaDouble
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-10-13 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r83278169
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/util/GBTSuiteHelper.scala ---
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.util
+
+import scala.collection.mutable.ArrayBuffer
+
+import org.scalactic.TolerantNumerics
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.ml.classification.GBTClassifier
+import org.apache.spark.ml.feature.LabeledPoint
+import org.apache.spark.ml.linalg._
+import org.apache.spark.ml.regression.GBTRegressor
+import org.apache.spark.ml.tree.{InternalNode, LeafNode, Node}
+import org.apache.spark.mllib.tree.impurity.{ImpurityAggregator, 
ImpurityCalculator}
+import org.apache.spark.mllib.tree.loss.Loss
+import org.apache.spark.sql._
+import org.apache.spark.sql.functions._
+
+object GBTSuiteHelper extends SparkFunSuite {
+  implicit val approxEquals = TolerantNumerics.tolerantDoubleEquality(1e-3)
+
+  /**
+   * @param labels set of GBT labels
+   * @param agg the aggregator to use
+   * @return the calculator from aggregation on the labels
+   */
+  def computeCalculator(labels: Seq[Double],
--- End diff --

done






[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-10-13 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r83277784
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/util/GBTSuiteHelper.scala ---
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.util
+
+import scala.collection.mutable.ArrayBuffer
+
+import org.scalactic.TolerantNumerics
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.ml.classification.GBTClassifier
+import org.apache.spark.ml.feature.LabeledPoint
+import org.apache.spark.ml.linalg._
+import org.apache.spark.ml.regression.GBTRegressor
+import org.apache.spark.ml.tree.{InternalNode, LeafNode, Node}
+import org.apache.spark.mllib.tree.impurity.{ImpurityAggregator, 
ImpurityCalculator}
+import org.apache.spark.mllib.tree.loss.Loss
+import org.apache.spark.sql._
+import org.apache.spark.sql.functions._
+
+object GBTSuiteHelper extends SparkFunSuite {
+  implicit val approxEquals = TolerantNumerics.tolerantDoubleEquality(1e-3)
--- End diff --

Neat!
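For readers unfamiliar with it, this is roughly what the implicit tolerant equality in the quoted helper enables (standalone sketch):

```scala
import org.scalactic.TolerantNumerics
import org.scalactic.TripleEquals._

implicit val approxEquals = TolerantNumerics.tolerantDoubleEquality(1e-3)
assert(1.0004 === 1.0)    // true: within the 1e-3 tolerance
assert(!(1.01 === 1.0))   // false: outside the tolerance
```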





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-10-13 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r83277146
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/classification/GBTClassifierSuite.scala
 ---
@@ -223,15 +278,18 @@ private object GBTClassifierSuite extends 
SparkFunSuite {
   /**
* Train 2 models on the given dataset, one using the old API and one 
using the new API.
* Convert the old model to the new format, compare them, and fail if 
they are not exactly equal.
+   *
+   * The old API only supports variance-based impurity, so gbt should have 
that setting.
*/
   def compareAPIs(
   data: RDD[LabeledPoint],
   validationData: Option[RDD[LabeledPoint]],
   gbt: GBTClassifier,
   categoricalFeatures: Map[Int, Int]): Unit = {
+assert(gbt.getImpurity == "variance")
--- End diff --

Done





[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-10-13 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@jkbradley Re test scripts:

`res8: Double = 0.5193104784040287` is the value output by `counts.max / 
counts.sum`. Indeed, it's just a sanity check that the value isn't 1, i.e., 
that we don't have a model that simply predicts everything as 1 or 0.

Also, indeed, I chose the minimum observations per node in Spark to match 
`gbm`'s default `n.minobsinnode = 10`.
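For concreteness, a hedged sketch of both points (hypothetical values; `counts` stands in for per-class prediction counts from the test script):

```scala
// Sanity check: the majority-class fraction should be well below 1.0,
// i.e., the model isn't predicting a single class for everything.
val counts = Array(5193.0, 4807.0)             // hypothetical per-class counts
val majorityFraction = counts.max / counts.sum // ~0.5193, as in res8 above
assert(majorityFraction < 1.0)

// Matching R gbm's default n.minobsinnode = 10:
import org.apache.spark.ml.classification.GBTClassifier
val gbt = new GBTClassifier().setMinInstancesPerNode(10)
```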





[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-10-09 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@sethah do you have any opinion on "loss-based" vs. "auto", or @jkbradley do 
you feel strongly about this? I think the trade-off is between being explicit 
and possibly confusing the user. I prefer being explicit.

@sethah one important thing to note is that option 2 is only strictly better 
than option 1 if we have converged to an optimal terminal node prediction. For 
log loss, for example, I only take a single Newton-Raphson (NR) step, per 
Friedman.

Finally, apologies to everyone for the delay. I've had some deadlines at 
school and am currently traveling, but should be able to address comments when 
I get back.
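For reference, the single NR step in question (Friedman 1999, "Greedy Function Approximation", Algorithm 5 for two-class log loss) estimates the value of leaf region $R_j$ from the pseudo-residuals $r_i$ as:

```latex
\gamma_j = \frac{\sum_{i \in R_j} r_i}{\sum_{i \in R_j} |r_i| (2 - |r_i|)}
```

This is exact only at convergence; away from the optimum it is a single approximate step, which is why option 2 isn't strictly better.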





[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@jkbradley I addressed your comments (I will push the new version after tests 
run), but I didn't understand what you were referring to in the "test gists" 
comment. Would you mind clarifying?





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78485087
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/treeParams.scala ---
@@ -465,33 +497,64 @@ private[ml] trait GBTParams extends 
TreeEnsembleParams with HasMaxIter with HasS
 }
 
 private[ml] object GBTClassifierParams {
-  // The losses below should be lowercase.
-  /** Accessor for supported loss settings: logistic */
-  final val supportedLossTypes: Array[String] = 
Array("logistic").map(_.toLowerCase)
+  // The values below should be lowercase.
+  /** Accessor for supported loss settings: logistic, bernoulli */
+  final val supportedLossTypes: Array[String] = Array("logistic", 
"bernoulli")
+  /** Accessor for support entropy settings: loss-based or variance */
+  final val supportedImpurities: Array[String] = Array("loss-based", 
"variance")
+  final def getLossBasedImpurity(loss: String): OldImpurity = loss match {
+case "logistic" | "bernoulli" => ApproxBernoulliImpurity
+case _ => throw new RuntimeException(
+  s"GBTClassifier does not have loss-based impurity for loss ${loss}")
+  }
 }
 
 private[ml] trait GBTClassifierParams extends GBTParams with 
TreeClassifierParams {
 
   /**
+   * Criterion used for information gain calculation (case-insensitive).
+   * Also used for terminal leaf value prediction.
+   * Supported: "loss-based" (default) and "variance"
+   *
+   * @group param
+   */
+  override val impurity: Param[String] = new Param[String](this, 
"impurity", "Criterion used for" +
+" information gain calculation (case-insensitive). Supported options:" 
+
+s" ${GBTClassifierParams.supportedImpurities.mkString(", ")}",
+(value: String) => 
GBTClassifierParams.supportedImpurities.contains(value.toLowerCase))
+
+  /** (private[ml]) convert new impurity to old impurity */
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78485063
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/treeParams.scala ---
@@ -465,33 +497,64 @@ private[ml] trait GBTParams extends 
TreeEnsembleParams with HasMaxIter with HasS
 }
 
 private[ml] object GBTClassifierParams {
-  // The losses below should be lowercase.
-  /** Accessor for supported loss settings: logistic */
-  final val supportedLossTypes: Array[String] = 
Array("logistic").map(_.toLowerCase)
+  // The values below should be lowercase.
+  /** Accessor for supported loss settings: logistic, bernoulli */
+  final val supportedLossTypes: Array[String] = Array("logistic", 
"bernoulli")
+  /** Accessor for support entropy settings: loss-based or variance */
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78485041
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/treeParams.scala ---
@@ -501,36 +564,75 @@ private[ml] trait GBTClassifierParams extends 
GBTParams with TreeClassifierParam
 
 private[ml] object GBTRegressorParams {
   // The losses below should be lowercase.
-  /** Accessor for supported loss settings: squared (L2), absolute (L1) */
-  final val supportedLossTypes: Array[String] = Array("squared", 
"absolute").map(_.toLowerCase)
+  /** Accessor for supported loss settings: squared (L2), absolute (L1), 
gaussian (squared),
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78484998
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/treeParams.scala ---
@@ -465,33 +497,64 @@ private[ml] trait GBTParams extends 
TreeEnsembleParams with HasMaxIter with HasS
 }
 
 private[ml] object GBTClassifierParams {
-  // The losses below should be lowercase.
-  /** Accessor for supported loss settings: logistic */
-  final val supportedLossTypes: Array[String] = 
Array("logistic").map(_.toLowerCase)
+  // The values below should be lowercase.
+  /** Accessor for supported loss settings: logistic, bernoulli */
+  final val supportedLossTypes: Array[String] = Array("logistic", 
"bernoulli")
+  /** Accessor for support entropy settings: loss-based or variance */
+  final val supportedImpurities: Array[String] = Array("loss-based", 
"variance")
+  final def getLossBasedImpurity(loss: String): OldImpurity = loss match {
+case "logistic" | "bernoulli" => ApproxBernoulliImpurity
+case _ => throw new RuntimeException(
+  s"GBTClassifier does not have loss-based impurity for loss ${loss}")
+  }
 }
 
 private[ml] trait GBTClassifierParams extends GBTParams with 
TreeClassifierParams {
 
   /**
+   * Criterion used for information gain calculation (case-insensitive).
+   * Also used for terminal leaf value prediction.
+   * Supported: "loss-based" (default) and "variance"
+   *
+   * @group param
+   */
+  override val impurity: Param[String] = new Param[String](this, 
"impurity", "Criterion used for" +
+" information gain calculation (case-insensitive). Supported options:" 
+
+s" ${GBTClassifierParams.supportedImpurities.mkString(", ")}",
+(value: String) => 
GBTClassifierParams.supportedImpurities.contains(value.toLowerCase))
+
+  /** (private[ml]) convert new impurity to old impurity */
+  override private[ml] def getOldImpurity: OldImpurity = {
+getImpurity match {
+  case "loss-based" => 
GBTClassifierParams.getLossBasedImpurity($(lossType))
+  case "variance" => OldVariance
+  case _ =>
+// Should never happen because of check in setter method.
+throw new RuntimeException(
+  s"GBTClassifier was given unrecognized impurity: $impurity")
+}
+  }
+
+  /**
* Loss function which GBT tries to minimize. (case-insensitive)
-   * Supported: "logistic"
-   * (default = logistic)
+   * Supported: "bernoulli" (default), "logistic" (alias for "bernoulli")
+   *
* @group param
*/
   val lossType: Param[String] = new Param[String](this, "lossType", "Loss 
function which GBT" +
 " tries to minimize (case-insensitive). Supported options:" +
 s" ${GBTClassifierParams.supportedLossTypes.mkString(", ")}",
 (value: String) => 
GBTClassifierParams.supportedLossTypes.contains(value.toLowerCase))
 
-  setDefault(lossType -> "logistic")
+  setDefault(lossType -> "bernoulli", impurity -> "loss-based")
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78484752
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/treeParams.scala ---
@@ -220,32 +222,42 @@ private[ml] object TreeClassifierParams {
   final val supportedImpurities: Array[String] = Array("entropy", 
"gini").map(_.toLowerCase)
 }
 
+private[ml] trait TreeClassifierParamsWithDefault extends 
TreeClassifierParams {
+  /**
+   * Criterion used for information gain calculation (case-insensitive).
+   * Also used for terminal leaf value prediction.
+   * Supported: "gini" (default) and "entropy"
+   *
+   * @group param
+   */
+  override val impurity: Param[String] = new Param[String](this, 
"impurity", "Criterion used for" +
+" information gain calculation (case-insensitive). Supported options:" 
+
+s" ${TreeClassifierParams.supportedImpurities.mkString(", ")}",
+(value: String) => 
TreeClassifierParams.supportedImpurities.contains(value.toLowerCase))
+
+  setDefault(impurity -> "gini")
+}
+
 private[ml] trait DecisionTreeClassifierParams
-  extends DecisionTreeParams with TreeClassifierParams
+  extends DecisionTreeParams with TreeClassifierParamsWithDefault
 
 /**
  * Parameters for Decision Tree-based regression algorithms.
  */
 private[ml] trait TreeRegressorParams extends Params {
 
+  // Impurity should be overriden when setting a default. This should be a 
def, but has
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78484742
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/treeParams.scala ---
@@ -183,24 +191,18 @@ private[ml] trait DecisionTreeParams extends 
PredictorParams
  */
 private[ml] trait TreeClassifierParams extends Params {
 
+  // Impurity should be overriden when setting a default. This should be a 
def, but has
+  // to be a val to maintain the proper documentation.
   /**
-   * Criterion used for information gain calculation (case-insensitive).
-   * Supported: "entropy" and "gini".
-   * (default = gini)
* @group param
*/
-  final val impurity: Param[String] = new Param[String](this, "impurity", 
"Criterion used for" +
-" information gain calculation (case-insensitive). Supported options:" 
+
-s" ${TreeClassifierParams.supportedImpurities.mkString(", ")}",
-(value: String) => 
TreeClassifierParams.supportedImpurities.contains(value.toLowerCase))
-
-  setDefault(impurity -> "gini")
+  val impurity: Param[String] = new Param(this, "", "")
 
   /** @group setParam */
   def setImpurity(value: String): this.type = set(impurity, value)
 
   /** @group getParam */
-  final def getImpurity: String = $(impurity).toLowerCase
+  def getImpurity: String = $(impurity).toLowerCase
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78484674
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/treeParams.scala ---
@@ -183,24 +191,18 @@ private[ml] trait DecisionTreeParams extends 
PredictorParams
  */
 private[ml] trait TreeClassifierParams extends Params {
 
+  // Impurity should be overriden when setting a default. This should be a 
def, but has
+  // to be a val to maintain the proper documentation.
   /**
-   * Criterion used for information gain calculation (case-insensitive).
-   * Supported: "entropy" and "gini".
-   * (default = gini)
* @group param
*/
-  final val impurity: Param[String] = new Param[String](this, "impurity", 
"Criterion used for" +
-" information gain calculation (case-insensitive). Supported options:" 
+
-s" ${TreeClassifierParams.supportedImpurities.mkString(", ")}",
-(value: String) => 
TreeClassifierParams.supportedImpurities.contains(value.toLowerCase))
-
-  setDefault(impurity -> "gini")
+  val impurity: Param[String] = new Param(this, "", "")
--- End diff --

right, that's the point... we have loss-based default for gbts, in contrast 
to other trees





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78484559
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/tree/impurity/ApproxBernoulliImpurity.scala
 ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.tree.impurity
+
+import org.apache.spark.annotation.{DeveloperApi, Since}
+import org.apache.spark.mllib.tree.impurity._
+
+/**
+ * [[ApproxBernoulliImpurity]] currently uses variance as a (proxy) 
impurity measure
+ * during tree construction. The main purpose of the class is to have an 
alternative
+ * leaf prediction calculation.
+ *
+ * Only data with examples each of weight 1.0 is supported.
+ *
+ * Class for calculating variance during regression.
+ */
+@Since("2.1")
+object ApproxBernoulliImpurity extends Impurity {
+
+  /**
+   * :: DeveloperApi ::
+   * information calculation for multiclass classification
+   * @param counts Array[Double] with counts for each label
+   * @param totalCount sum of counts for all labels
+   * @return information value, or 0 if totalCount = 0
+   */
+  @Since("2.1")
+  @DeveloperApi
+  override def calculate(counts: Array[Double], totalCount: Double): Double =
+    throw new UnsupportedOperationException("ApproxBernoulliImpurity.calculate")
+
+  /**
+   * :: DeveloperApi ::
+   * variance calculation
+   * @param count number of instances
+   * @param sum sum of labels
+   * @param sumSquares summation of squares of the labels
+   * @return information value, or 0 if count = 0
+   */
+  @Since("2.1")
+  @DeveloperApi
+  override def calculate(count: Double, sum: Double, sumSquares: Double): Double = {
+    Variance.calculate(count, sum, sumSquares)
+  }
+
+  /**
+   * Get this impurity instance.
+   * This is useful for passing impurity parameters to a Strategy in Java.
+   */
+  @Since("2.1")
+  def instance: this.type = this
+}
+
+/**
+ * Class for updating views of a vector of sufficient statistics,
+ * in order to compute impurity from a sample.
+ * Note: Instances of this class do not hold the data; they operate on 
views of the data.
+ */
+private[spark] class ApproxBernoulliAggregator
+  extends ImpurityAggregator(statsSize = 4) with Serializable {
+
+  /**
+   * Update stats for one (node, feature, bin) with the given label.
+   * @param allStats  Flat stats array, with stats for this (node, feature, bin) contiguous.
+   * @param offset  Start index of stats for this (node, feature, bin).
+   */
+  def update(allStats: Array[Double], offset: Int, label: Double, instanceWeight: Double): Unit = {
+    allStats(offset) += instanceWeight
+    allStats(offset + 1) += instanceWeight * label
+    allStats(offset + 2) += instanceWeight * label * label
+    allStats(offset + 3) += instanceWeight * Math.abs(label)
+  }
+
+  /**
+   * Get an [[ImpurityCalculator]] for a (node, feature, bin).
+   * @param allStats  Flat stats array, with stats for this (node, feature, bin) contiguous.
+   * @param offset  Start index of stats for this (node, feature, bin).
+   */
+  def getCalculator(allStats: Array[Double], offset: Int): ApproxBernoulliCalculator = {
+    new ApproxBernoulliCalculator(allStats.view(offset, offset + statsSize).toArray)
+  }
+}
+
+/**
+ * Stores statistics for one (node, feature, bin) for calculating impurity.
+ * Unlike [[ImpurityAggregator]], this class stores its own data and is 
for a specific
+ * (node, feature, bin).
+ * @param stats  Array of sufficient statistics for a (node, feature, bin).
+ */
+private[spark] class ApproxBernoulliCalculator(stats: Array[Double])
+  extends ImpurityCalculator(stats) {
+
+  require(stats.length == 4,
+s"App

[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78484507
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/treeParams.scala ---
@@ -183,24 +191,18 @@ private[ml] trait DecisionTreeParams extends 
PredictorParams
  */
 private[ml] trait TreeClassifierParams extends Params {
 
+  // Impurity should be overriden when setting a default. This should be a 
def, but has
--- End diff --

Yes, it shows up as a val





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78484182
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/tree/impurity/ApproxBernoulliImpurity.scala
 ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.tree.impurity
+
+import org.apache.spark.annotation.{DeveloperApi, Since}
+import org.apache.spark.mllib.tree.impurity._
+
+/**
+ * [[ApproxBernoulliImpurity]] currently uses variance as a (proxy) 
impurity measure
+ * during tree construction. The main purpose of the class is to have an 
alternative
+ * leaf prediction calculation.
+ *
+ * Only data with examples each of weight 1.0 is supported.
+ *
+ * Class for calculating variance during regression.
+ */
+@Since("2.1")
+object ApproxBernoulliImpurity extends Impurity {
+
+  /**
+   * :: DeveloperApi ::
+   * information calculation for multiclass classification
+   * @param counts Array[Double] with counts for each label
+   * @param totalCount sum of counts for all labels
+   * @return information value, or 0 if totalCount = 0
+   */
+  @Since("2.1")
+  @DeveloperApi
+  override def calculate(counts: Array[Double], totalCount: Double): Double =
+    throw new UnsupportedOperationException("ApproxBernoulliImpurity.calculate")
+
+  /**
+   * :: DeveloperApi ::
+   * variance calculation
+   * @param count number of instances
+   * @param sum sum of labels
+   * @param sumSquares summation of squares of the labels
+   * @return information value, or 0 if count = 0
+   */
+  @Since("2.1")
+  @DeveloperApi
+  override def calculate(count: Double, sum: Double, sumSquares: Double): Double = {
+    Variance.calculate(count, sum, sumSquares)
+  }
+
+  /**
+   * Get this impurity instance.
+   * This is useful for passing impurity parameters to a Strategy in Java.
+   */
+  @Since("2.1")
+  def instance: this.type = this
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78484135
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/tree/impurity/ApproxBernoulliImpurity.scala
 ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.tree.impurity
+
+import org.apache.spark.annotation.{DeveloperApi, Since}
+import org.apache.spark.mllib.tree.impurity._
+
+/**
+ * [[ApproxBernoulliImpurity]] currently uses variance as a (proxy) 
impurity measure
+ * during tree construction. The main purpose of the class is to have an 
alternative
+ * leaf prediction calculation.
+ *
+ * Only data with examples each of weight 1.0 is supported.
+ *
+ * Class for calculating variance during regression.
+ */
+@Since("2.1")
+object ApproxBernoulliImpurity extends Impurity {
--- End diff --

done (there were no other situations)





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78484048
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/tree/impl/GradientBoostedTrees.scala 
---
@@ -258,11 +258,13 @@ private[spark] object GradientBoostedTrees extends 
Logging {
 val baseLearnerWeights = new Array[Double](numIterations)
 val loss = boostingStrategy.loss
 val learningRate = boostingStrategy.learningRate
-// Prepare strategy for individual trees, which use regression with 
variance impurity.
+// Prepare strategy for individual trees, which all use regression.
+// TODO(vlad17): Changing the strategy here is confusing (especially 
using regression for
+// classification). With the resolution of SPARK-16728, this shouldn't 
be necessary.
 val treeStrategy = boostingStrategy.treeStrategy.copy
 val validationTol = boostingStrategy.validationTol
 treeStrategy.algo = OldAlgo.Regression
-treeStrategy.impurity = OldVariance
+treeStrategy.impurity = boostingStrategy.treeStrategy.impurity
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78483990
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/tree/impl/GradientBoostedTrees.scala 
---
@@ -258,11 +258,13 @@ private[spark] object GradientBoostedTrees extends 
Logging {
 val baseLearnerWeights = new Array[Double](numIterations)
 val loss = boostingStrategy.loss
 val learningRate = boostingStrategy.learningRate
-// Prepare strategy for individual trees, which use regression with 
variance impurity.
+// Prepare strategy for individual trees, which all use regression.
+// TODO(vlad17): Changing the strategy here is confusing (especially 
using regression for
--- End diff --

done (also elsewhere).





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78483380
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/tree/impl/DTStatsAggregator.scala ---
@@ -33,11 +34,13 @@ private[spark] class DTStatsAggregator(
 
   /**
* [[ImpurityAggregator]] instance specifying the impurity type.
-   */
-  val impurityAggregator: ImpurityAggregator = metadata.impurity match {
+  */
+  private val impurityAggregator: ImpurityAggregator = metadata.impurity 
match {
+// TODO(vlad17): this is a ridiculous coupling. Impurity should have a 
getAggregator() method
--- End diff --

Updated https://issues.apache.org/jira/browse/SPARK-16728
done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78482838
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/GBTRegressor.scala ---
@@ -134,11 +146,15 @@ class GBTRegressor @Since("1.4.0") (@Since("1.4.0") 
override val uid: String)
 
 @Since("1.4.0")
 object GBTRegressor extends DefaultParamsReadable[GBTRegressor] {
-
-  /** Accessor for supported loss settings: squared (L2), absolute (L1) */
+  /** Accessor for supported loss settings: squared (L2), absolute (L1), 
gaussian (squared),
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78482844
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/regression/GBTRegressor.scala ---
@@ -17,13 +17,13 @@
 
 package org.apache.spark.ml.regression
 
-import com.github.fommil.netlib.BLAS.{getInstance => blas}
+import
+com.github.fommil.netlib.BLAS.{getInstance => blas}
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78482842
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/GBTRegressor.scala ---
@@ -38,25 +38,35 @@ import org.apache.spark.sql.{DataFrame, Dataset}
 import org.apache.spark.sql.functions._
 
 /**
- * [[http://en.wikipedia.org/wiki/Gradient_boosting Gradient-Boosted Trees (GBTs)]]
+ * Gradient-Boosted Trees (GBTs) (http://en.wikipedia.org/wiki/Gradient_boosting)
  * learning algorithm for regression.
  * It supports both continuous and categorical features.
  *
- * The implementation is based upon: J.H. Friedman. "Stochastic Gradient Boosting." 1999.
+ * The implementation offers both Stochastic Gradient Boosting, as in J.H. Friedman 1999,
+ * "Stochastic Gradient Boosting", and TreeBoost, as in Friedman 1999,
+ * "Greedy Function Approximation: A Gradient Boosting Machine".
 *
- * Notes on Gradient Boosting vs. TreeBoost:
- *  - This implementation is for Stochastic Gradient Boosting, not for TreeBoost.
+ * Notes on Stochastic Gradient Boosting (SGB) vs. TreeBoost:
+ *  - TreeBoost algorithms are a subset of SGB algorithms.
  *  - Both algorithms learn tree ensembles by minimizing loss functions.
- *  - TreeBoost (Friedman, 1999) additionally modifies the outputs at tree leaf nodes
- *    based on the loss function, whereas the original gradient boosting method does not.
- *     - When the loss is SquaredError, these methods give the same result, but they could differ
- *       for other loss functions.
- *  - We expect to implement TreeBoost in the future:
- *    [https://issues.apache.org/jira/browse/SPARK-4240]
+ *  - TreeBoost has two additional properties that general SGB trees don't:
+ *     - The loss function gradients are directly used as an approximate impurity measure.
+ *     - The value reported at a leaf is given by optimizing the loss function on
+ *       that leaf node's partition of the data, rather than just being the mean.
+ *  - In the case of squared error loss, variance impurity and mean leaf estimates happen
+ *    to make the SGB and TreeBoost algorithms identical.
+ *
+ * [[GBTRegressor]] will use the usual `"variance"` impurity by default, conforming to
+ * SGB behavior. For TreeBoost, set impurity to `"loss-based"`. Note TreeBoost is currently
+ * incompatible with absolute error.
+ *
+ * Currently, however, even TreeBoost behavior uses variance impurity for split selection for
+ * ease and speed. Leaf selection is aligned with theory. This is the approach `R`'s
--- End diff --

done
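
As a quick check on the quoted doc's claim that squared error collapses
TreeBoost into SGB: on a leaf region $R$, the loss-optimal constant prediction
is

$$\gamma^* = \arg\min_{\gamma} \sum_{x_i \in R} (y_i - \gamma)^2 = \frac{1}{|R|} \sum_{x_i \in R} y_i,$$

which is exactly the mean label that a variance-impurity SGB leaf already
reports, so the two leaf rules agree for this loss.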





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78482846
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/GBTClassifier.scala ---
@@ -148,11 +154,14 @@ class GBTClassifier @Since("1.4.0") (
 
 @Since("1.4.0")
 object GBTClassifier extends DefaultParamsReadable[GBTClassifier] {
-
-  /** Accessor for supported loss settings: logistic */
+  /** Accessor for supported loss settings: logistic, bernoulli */
   @Since("1.4.0")
   final val supportedLossTypes: Array[String] = GBTClassifierParams.supportedLossTypes
 
+  /** Accessor for support entropy settings: loss-based or variance */
--- End diff --

done





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78481662
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/GBTClassifier.scala ---
@@ -42,18 +42,30 @@ import org.apache.spark.sql.types.DoubleType
 /**
  * Gradient-Boosted Trees (GBTs) (http://en.wikipedia.org/wiki/Gradient_boosting)
  * learning algorithm for classification.
- * It supports binary labels, as well as both continuous and categorical features.
  * Note: Multiclass labels are not currently supported.
+ * It supports both continuous and categorical features.
 *
- * The implementation is based upon: J.H. Friedman. "Stochastic Gradient Boosting." 1999.
+ * The implementation offers both Stochastic Gradient Boosting, as in J.H. Friedman 1999,
+ * "Stochastic Gradient Boosting", and TreeBoost, as in Friedman 1999,
+ * "Greedy Function Approximation: A Gradient Boosting Machine".
 *
- * Notes on Gradient Boosting vs. TreeBoost:
- *  - This implementation is for Stochastic Gradient Boosting, not for TreeBoost.
+ * Notes on Stochastic Gradient Boosting (SGB) vs. TreeBoost:
+ *  - TreeBoost algorithms are a subset of SGB algorithms.
  *  - Both algorithms learn tree ensembles by minimizing loss functions.
- *  - TreeBoost (Friedman, 1999) additionally modifies the outputs at tree leaf nodes
- *    based on the loss function, whereas the original gradient boosting method does not.
- *  - We expect to implement TreeBoost in the future:
- *    [https://issues.apache.org/jira/browse/SPARK-4240]
+ *  - TreeBoost has two additional properties that general SGB trees don't:
+ *     - The loss function gradients are directly used as an approximate impurity measure.
+ *     - The value reported at a leaf is given by optimizing the loss function on
+ *       that leaf node's partition of the data, rather than just being the mean.
+ *  - In the case of squared error loss, variance impurity and mean leaf estimates happen
+ *    to make the SGB and TreeBoost algorithms identical.
+ *
+ * [[GBTClassifier]] will use the usual `"loss-based"` impurity by default, conforming to
+ * TreeBoost behavior. For SGB, set impurity to `"variance"`.
--- End diff --

done
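
For readers following along, a minimal usage sketch of the API described in
the quoted doc (the `"bernoulli"` alias and the `"loss-based"` impurity value
are this PR's proposal, not released Spark):

```
import org.apache.spark.ml.classification.GBTClassifier

// TreeBoost behavior (the PR's proposed default for classification):
val treeBoost = new GBTClassifier()
  .setLossType("bernoulli")    // PR alias for logistic loss
  .setImpurity("loss-based")   // loss-optimizing leaf predictions

// Plain Stochastic Gradient Boosting instead:
val sgb = new GBTClassifier()
  .setLossType("logistic")
  .setImpurity("variance")
```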





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-09-12 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14547#discussion_r78481687
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/GBTClassifier.scala ---
@@ -42,18 +42,30 @@ import org.apache.spark.sql.types.DoubleType
 /**
  * Gradient-Boosted Trees (GBTs) (http://en.wikipedia.org/wiki/Gradient_boosting)
  * learning algorithm for classification.
- * It supports binary labels, as well as both continuous and categorical features.
  * Note: Multiclass labels are not currently supported.
+ * It supports both continuous and categorical features.
 *
- * The implementation is based upon: J.H. Friedman. "Stochastic Gradient Boosting." 1999.
+ * The implementation offers both Stochastic Gradient Boosting, as in J.H. Friedman 1999,
+ * "Stochastic Gradient Boosting", and TreeBoost, as in Friedman 1999,
+ * "Greedy Function Approximation: A Gradient Boosting Machine".
 *
- * Notes on Gradient Boosting vs. TreeBoost:
- *  - This implementation is for Stochastic Gradient Boosting, not for TreeBoost.
+ * Notes on Stochastic Gradient Boosting (SGB) vs. TreeBoost:
+ *  - TreeBoost algorithms are a subset of SGB algorithms.
  *  - Both algorithms learn tree ensembles by minimizing loss functions.
- *  - TreeBoost (Friedman, 1999) additionally modifies the outputs at tree leaf nodes
- *    based on the loss function, whereas the original gradient boosting method does not.
- *  - We expect to implement TreeBoost in the future:
- *    [https://issues.apache.org/jira/browse/SPARK-4240]
+ *  - TreeBoost has two additional properties that general SGB trees don't:
+ *     - The loss function gradients are directly used as an approximate impurity measure.
+ *     - The value reported at a leaf is given by optimizing the loss function on
+ *       that leaf node's partition of the data, rather than just being the mean.
+ *  - In the case of squared error loss, variance impurity and mean leaf estimates happen
+ *    to make the SGB and TreeBoost algorithms identical.
+ *
+ * [[GBTClassifier]] will use the usual `"loss-based"` impurity by default, conforming to
+ * TreeBoost behavior. For SGB, set impurity to `"variance"`.
+ *
+ * Currently, however, even TreeBoost behavior uses variance impurity for split selection for
+ * ease and speed. Leaf selection is aligned with theory. This is the approach `R`'s
--- End diff --

done





[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost

2016-08-19 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@sethah Was that coupling not already there beforehand? I didn't really 
change any of the implementation class's interfaces; I just added the Bernoulli 
impurity to the existing Impurity framework. That framework itself couples the 
Impurity class with the ImpurityAggregator, which necessarily returns an 
ImpurityCalculator, which makes the prediction. It seems like the existing 
design is already doing the coupling.

As for the interface making this coupling explicit, yes, I completely agree 
I'm doing that. But I think this is a good thing.
1. Coupling the loss function with the splitting impurity is the whole point 
of TreeBoost. The papers themselves say to construct intermediate trees to 
minimize loss; they offer other impurity measures only for ease of 
implementation. XGBoost, for instance, splits on (an approximation of) the 
losses directly.
2. The fact that the underlying impurity/predictions are all done by the 
same class (though not my choice) is also probably better from an 
implementation perspective. Both need to gather summary statistics about each 
leaf's partition of the data, so it's easiest to just do it in one place.
3. I don't think we're giving up the "decoupled version" either. If we so 
choose in the future, setting impurity to "variance" but loss function to 
"absolute" can use a new ImpurityAggregator that offers the variance for 
splitting but the median for predicting (see the sketch below).

My goal with this PR was to make as minimal a change as possible (it's 
mostly an API change introducing the loss-based impurity, which also makes 
loss-based terminal node predictions). I'm not trying to change the GBT design 
here at all (though if it appears to be the case because of something I'm 
missing, please let me know).
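
To illustrate point 3 above, a minimal standalone sketch of the decoupled
version (hypothetical names, not Spark's actual Impurity/ImpurityAggregator
API), pairing variance splitting with a loss-appropriate leaf estimate:

```
// Hypothetical illustration: leaf estimation decoupled from split selection.
trait LeafEstimator {
  def predict(labels: Seq[Double]): Double
}

// The mean minimizes squared error on a leaf (the SGB default).
object MeanEstimator extends LeafEstimator {
  def predict(labels: Seq[Double]): Double = labels.sum / labels.size
}

// The median minimizes absolute error, giving the "variance split,
// median predict" combination described in point 3.
object MedianEstimator extends LeafEstimator {
  def predict(labels: Seq[Double]): Double = {
    val sorted = labels.sorted
    val n = sorted.size
    if (n % 2 == 1) sorted(n / 2)
    else (sorted(n / 2 - 1) + sorted(n / 2)) / 2.0
  }
}
```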







[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost [WIP]

2016-08-08 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@sethah Thanks for the FYI. I'm pretty confident that it'll help since now 
we're directly optimizing the loss function. However, it would be nice to prove 
this. Unfortunately, the example I linked above uses a skewed dataset.

The only estimator whose behavior changed is GBTClassifier (the Bernoulli 
predictions now use a Newton-Raphson (NR) step rather than guessing the mean). 
And since the raw prediction column is unavailable for the GBTClassifier, I 
can't really compare the classifiers sensibly on skewed datasets, since AUC is 
out of the question.

I'm going to have to spend some time trying to find a "real" dataset that's 
not skewed but large enough to be meaningful or just make an artificial one. 
And also spark-perf will need to be re-run.

Also, regarding the binary incompatibility failure - part of that was my 
fault, part of it was due to an incompatibility with a package-private method. 
I added an exception for the binary incompatibility for the package-private 
method - is that OK?
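
For reference, the NR step mentioned above: for two-class logistic loss with
labels coded $y_i \in \{-1, 1\}$, Friedman's "Greedy Function Approximation"
derives the single Newton-Raphson leaf update

$$\gamma_{jm} = \frac{\sum_{x_i \in R_{jm}} \tilde{y}_i}{\sum_{x_i \in R_{jm}} |\tilde{y}_i|\,(2 - |\tilde{y}_i|)}, \qquad \tilde{y}_i = \frac{2 y_i}{1 + \exp(2 y_i F_{m-1}(x_i))},$$

in place of the pseudo-residual mean a plain SGB leaf would report (quoted
from the paper for context, not checked against this PR's exact code).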





[GitHub] spark issue #14547: [SPARK-16718][MLlib] gbm-style treeboost [WIP]

2016-08-08 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/14547
  
@hhbyyh Would you mind reviewing this?





[GitHub] spark pull request #14547: [SPARK-16718][MLlib] gbm-style treeboost [WIP]

2016-08-08 Thread vlad17
GitHub user vlad17 opened a pull request:

https://github.com/apache/spark/pull/14547

[SPARK-16718][MLlib] gbm-style treeboost [WIP]

## What changes were proposed in this pull request?

This change adds TreeBoost functionality to `GBTClassifier` and 
`GBTRegressor`. The main change is that leaf nodes now make a prediction which 
optimizes the loss function, rather than always using the mean label (which is 
only optimal in the case of variance-based impurity).

This changes the defaults to use the loss-based impurity rather than the 
previously required variance impurity.

I made this change only for L2 loss and logistic loss (adding some aliases 
to the names as well for parity with R's implementation, GBM). These two 
functions have leaf predictions that can be computed within the framework of 
the current impurity API. Other loss functions will require API modification, 
which should be its own change, SPARK-16728.

Note that because loss-based impurity is NOT supported with L1 loss, code 
that sets L1 loss while leaving the impurity at its default will now throw 
(the impurity must be explicitly set to variance).
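
For illustration, a sketch of the resulting API surface (the `"loss-based"`
value and the throw-on-default behavior are this PR's proposal, not released
Spark):

```
import org.apache.spark.ml.regression.GBTRegressor

// TreeBoost: the PR's new default impurity for L2 loss.
val treeBoost = new GBTRegressor()
  .setLossType("squared")
  .setImpurity("loss-based")   // loss-optimized leaf predictions

// L1 loss must keep variance impurity explicitly, or fit() throws:
val sgbL1 = new GBTRegressor()
  .setLossType("absolute")
  .setImpurity("variance")
```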

## How was this patch tested?

Unit testing for correctness: I tested defaults parameter values and new 
settings for the parameters.

[WIP] For accuracy, I'm currently comparing the performance on a [real-life 
dataset](https://www.datarobot.com/blog/r-getting-started-with-data-science/) 
between Spark and GBM. I will upload the results once I have them.
[WIP] This code shouldn't introduce any regressions, but it would be nice 
to make sure. I'm waiting for @sethah to respond on [his previous 
PR](https://github.com/apache/spark/commit/dafd70fbfe70702502ef198f2a8f529ef7557592)
 so that he can make his benchmarking script available to me.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vlad17/spark GBT-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14547.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14547


commit 6c7c60b581464be13b44aa43d2c402501fdb0505
Author: Vladimir Feinberg 
Date:   2016-07-22T01:01:58Z

Added new documentation for TreeBoost, top-level calls

commit a4c050675bc524b742cb9fc3703ce5105cabdd8a
Author: Vladimir Feinberg 
Date:   2016-07-22T19:55:10Z

Implemented ApproxBernoulliImpurity

commit 5a38e0c1b284423f3129c4edbacece562fb675a3
Author: Vladimir Feinberg 
Date:   2016-07-25T22:59:19Z

Added approximate Bernoulli impurity (L_2 treeboost)

commit 759d1aa1a20c1679fba212c3017e200d386fa6da
Author: Vladimir Feinberg 
Date:   2016-07-26T00:21:22Z

Added marker saying Laplace Impurity is not yet supported (requires 
internal API change)

commit e027d6dedd928e96dc7c99dc699d9f7c374034a3
Author: Vladimir Feinberg 
Date:   2016-07-26T00:26:29Z

Updated docs to reflect lack of L1 impurity support

commit 15575a13c0ad4f2567bcccdcbcb134a9ca548d9c
Author: Vladimir Feinberg 
Date:   2016-07-26T00:41:00Z

Fixed urls

commit 7c7d804dc3c614984e863aae9ef8ffc8f9ec3117
Author: Vladimir Feinberg 
Date:   2016-07-26T00:43:46Z

Removed ApproxLaplaceImpurity

commit 44a58efe4b0b1bd69eaadc5dc17676194b949888
Author: Vladimir Feinberg 
Date:   2016-07-26T00:50:50Z

Fix reader docs

commit b362c3852c0e17783b08a9c9a97e1abb66ef5c9f
Author: Vladimir Feinberg 
Date:   2016-07-26T23:43:41Z

Fixed a bunch of bugs + tested wrt old behavior

commit f31903c228c164313c2f0cb22fac8b81e6a1
Author: Vladimir Feinberg 
Date:   2016-07-27T00:47:51Z

Completed tests for reading/writing new impurities

commit 01eae2ae967fdbe89b0ecd440216e54431d51d3d
Author: Vladimir Feinberg 
Date:   2016-07-27T17:15:05Z

Finished tests

commit bd189e2aae27266314b16f0dffc3ce7a230d4e27
Author: Vladimir Feinberg 
Date:   2016-08-06T23:16:18Z

Added R's gbm as a direct comparison to GBTClassifier

commit 704864354619581f1f5bb43489c5e2ee9ec89487
Author: Vladimir Feinberg 
Date:   2016-08-07T00:20:35Z

Got rid of direct R comparison

commit a0a8fcddefa122682c579b567524cbcf2b00251c
Author: Vladimir Feinberg 
Date:   2016-08-08T06:18:14Z

Direct behavior-checking test (for GBTClassifier)

commit c050586e7db6eed41f5b8ddf1e245b13be2c8994
Author: Vladimir Feinberg 
Date:   2016-08-08T20:44:42Z

Added analogous test for GBTRegressor

commit 7e39ada3acf431c171adfca0603279002ff20153
Author: Vladimir Feinberg 
Date:   2016-08-08T21:03:47Z

Cleaned up style-related things





[GitHub] spark pull request #11443: [SPARK-13244][SQL] Migrates DataFrame to Dataset

2016-08-02 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/11443#discussion_r73192952
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
@@ -745,6 +825,80 @@ class DataFrame private[sql](
   }
 
   /**
+   * Returns a new [[Dataset]] by computing the given [[Column]] expression for each element.
+   *
+   * {{{
+   *   val ds = Seq(1, 2, 3).toDS()
+   *   val newDS = ds.select(expr("value + 1").as[Int])
+   * }}}
+   * @since 1.6.0
+   */
+  def select[U1: Encoder](c1: TypedColumn[T, U1]): Dataset[U1] = {
--- End diff --

@liancheng Yup, I suppose that's working as expected then. It's a bit 
confusing, since `Aggregator` has an `ImplicitCastInputTypes` mixin.

Perhaps it would be better for `c1` to be `TypedColumn[_, U1]`?





[GitHub] spark issue #11443: [SPARK-13244][SQL] Migrates DataFrame to Dataset

2016-08-01 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/11443
  
There may be a small bug introduced here. Please see my comment inline:

https://github.com/apache/spark/pull/11443/files#diff-c3d0394b2fc08fb2842ff0362a5ac6c9R836





[GitHub] spark pull request #11443: [SPARK-13244][SQL] Migrates DataFrame to Dataset

2016-08-01 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/11443#discussion_r73048593
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
@@ -745,6 +825,80 @@ class DataFrame private[sql](
   }
 
   /**
+   * Returns a new [[Dataset]] by computing the given [[Column]] expression for each element.
+   *
+   * {{{
+   *   val ds = Seq(1, 2, 3).toDS()
+   *   val newDS = ds.select(expr("value + 1").as[Int])
+   * }}}
+   * @since 1.6.0
+   */
+  def select[U1: Encoder](c1: TypedColumn[T, U1]): Dataset[U1] = {
--- End diff --

I don't think this is ever called. `select(Column*)` will always be 
preferred:

https://gist.github.com/vlad17/93f1cb57aad42eb7de33f92d6282a44f
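
For anyone experimenting without the gist, a minimal standalone analogue of
the competing overloads (hypothetical types, not Spark's classes); running it
shows which alternative the compiler resolves in this simplified setting:

```
class Col
class TypedCol[T] extends Col

object OverloadRepro {
  // Analogue of select(Column*):
  def select(cols: Col*): String = "varargs overload"
  // Analogue of select[U1: Encoder](c1: TypedColumn[T, U1]):
  def select[U](c: TypedCol[U])(implicit ord: Ordering[U]): String = "typed overload"

  def main(args: Array[String]): Unit =
    println(select(new TypedCol[Int]))  // prints whichever overload wins
}
```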





[GitHub] spark issue #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of Python-o...

2016-07-07 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/13778
  
LGTM +1





[GitHub] spark pull request #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of P...

2016-07-06 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/13778#discussion_r69758782
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala ---
@@ -374,13 +407,15 @@ object MapObjects {
  * @param lambdaFunction A function that take the `loopVar` as input, and used as lambda function
  *   to handle collection elements.
  * @param inputData An expression that when evaluated returns a collection object.
+ * @param inputDataType The dataType of inputData.
--- End diff --

Document that it's optional, and say that the default behavior is to use the 
resolved `.dataType` of `inputData`.





[GitHub] spark pull request #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of P...

2016-06-29 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/13778#discussion_r68979443
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala ---
@@ -346,14 +346,38 @@ case class LambdaVariable(value: String, isNull: String, dataType: DataType) ext
 object MapObjects {
   private val curId = new java.util.concurrent.atomic.AtomicInteger()
 
+  /**
+   * Construct an instance of MapObjects case class.
+   *
+   * @param function The function applied on the collection elements.
+   * @param inputData An expression that when evaluated returns a collection object.
+   * @param elementType The data type of elements in the collection.
+   */
   def apply(
   function: Expression => Expression,
   inputData: Expression,
   elementType: DataType): MapObjects = {
+    apply(function, inputData, elementType, inputData.dataType)
+  }
+
+  /**
+   * Construct an instance of MapObjects case class.
+   *
+   * @param function The function applied on the collection elements.
+   * @param inputData An expression that when evaluated returns a collection object.
+   * @param elementType The data type of elements in the collection.
+   * @param inputDataType The explicitly given data type of inputData to override the
+   *  data type inferred from inputData (i.e., inputData.dataType).
--- End diff --

Would you mind adding to the documentation why this would ever be necessary 
(i.e., in the array-of-Python-UDT case, what's wrong with inputData.dataType)?





[GitHub] spark pull request #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of P...

2016-06-28 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/13778#discussion_r68789054
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala ---
@@ -349,11 +349,12 @@ object MapObjects {
   def apply(
   function: Expression => Expression,
   inputData: Expression,
-  elementType: DataType): MapObjects = {
+  elementType: DataType,
+  inputDataType: Option[DataType] = None): MapObjects = {
--- End diff --

I've got quite a few problems with this default:
1. It looks like it's here to avoid coding work in most places, not because 
it's an obvious value for apply() to take on.
2. The additional parameter is completely undocumented, and call sites have 
no mention of it.

Also, the use of an Option in the case class constructor is a bit obtuse.

Here is my suggestion:
1. Remove the default.
2. If the option in apply() is None, then pass 
inputDataType.getOrElse(inputData.dataType) as the `inputDataType: DataType` 
to the case class constructor, which uses the parameter data type without any 
hidden logic (as it does now); a sketch follows below.
3. Document the fact that supplying None in apply() triggers this kind of 
inference.
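
A standalone sketch of suggestion 2 (generic stand-in types, not catalyst's
actual classes): the factory resolves the Option, and the case class itself
stays free of hidden inference.

```
// Illustration only: `None` means "infer from the data", and that
// inference lives solely in the documented factory method.
case class Node(data: String, dataType: String)

object Node {
  def apply(data: String, dataType: Option[String]): Node =
    Node(data, dataType.getOrElse(inferType(data)))

  // Stand-in for resolving inputData.dataType.
  private def inferType(data: String): String =
    if (data.nonEmpty && data.forall(_.isDigit)) "int" else "string"
}
```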





[GitHub] spark pull request #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of P...

2016-06-24 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/13778#discussion_r68416995
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala ---
@@ -427,8 +427,12 @@ case class MapObjects private(
   case _ => ""
 }
 
+val inputDataType = inputData.dataType match {
+  case p: PythonUserDefinedType => p.sqlType
--- End diff --

This probably needs a comment explaining the reasoning behind the code. 
Also, why are we allowed to use p.sqlType for inputDataType serialization in 
the case of a Python UDT? The UDT's deserialize() still has to be called for 
each element of the array, right?





[GitHub] spark pull request #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of P...

2016-06-21 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/13778#discussion_r67894974
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -558,6 +558,18 @@ def check_datatype(datatype):
 _verify_type(PythonOnlyPoint(1.0, 2.0), PythonOnlyUDT())
 self.assertRaises(ValueError, lambda: _verify_type([1.0, 2.0], PythonOnlyUDT()))
 
+schema = StructType().add("key", LongType()).add("val", PythonOnlyUDT())
+df = self.spark.createDataFrame(
+[(i % 3, PythonOnlyPoint(float(i), float(i))) for i in range(10)],
+schema=schema)
+df.show()
+
+schema = StructType().add("key", LongType()).add("val", ArrayType(PythonOnlyUDT()))
+df = self.spark.createDataFrame(
+[(i % 3, [PythonOnlyPoint(float(i), float(i))]) for i in range(10)],
+schema=schema)
+df.show()
+
--- End diff --

Missing a unit test for "counterexample 2": a nested complex UDT struct with 
mixed types, i.e. ArrayType(StructType(simple SQL type, UDT)).





[GitHub] spark pull request #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of P...

2016-06-21 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/13778#discussion_r67894797
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -558,6 +558,18 @@ def check_datatype(datatype):
 _verify_type(PythonOnlyPoint(1.0, 2.0), PythonOnlyUDT())
 self.assertRaises(ValueError, lambda: _verify_type([1.0, 2.0], PythonOnlyUDT()))
 
+schema = StructType().add("key", LongType()).add("val", PythonOnlyUDT())
+df = self.spark.createDataFrame(
+[(i % 3, PythonOnlyPoint(float(i), float(i))) for i in range(10)],
+schema=schema)
+df.show()
+
+schema = StructType().add("key", LongType()).add("val", ArrayType(PythonOnlyUDT()))
--- End diff --

Same here, this needs its own named unit test (so that it's easier to identify 
the problem if the test fails); unit tests should test only one thing, and the 
thing tested here is `test_nested_udt_in_df` (perhaps it's also worthwhile to 
check that Map works?)





[GitHub] spark pull request #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of P...

2016-06-21 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/13778#discussion_r67894457
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -558,6 +558,18 @@ def check_datatype(datatype):
 _verify_type(PythonOnlyPoint(1.0, 2.0), PythonOnlyUDT())
 self.assertRaises(ValueError, lambda: _verify_type([1.0, 2.0], PythonOnlyUDT()))
 
+schema = StructType().add("key", LongType()).add("val", PythonOnlyUDT())
--- End diff --

This should be in its own test method - it's no longer merely a `test_udt` 
but rather a `test_simple_udt_in_df`.





[GitHub] spark pull request #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of P...

2016-06-21 Thread vlad17
Github user vlad17 commented on a diff in the pull request:

https://github.com/apache/spark/pull/13778#discussion_r67894162
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -558,6 +558,18 @@ def check_datatype(datatype):
 _verify_type(PythonOnlyPoint(1.0, 2.0), PythonOnlyUDT())
 self.assertRaises(ValueError, lambda: _verify_type([1.0, 2.0], PythonOnlyUDT()))
 
+schema = StructType().add("key", LongType()).add("val", PythonOnlyUDT())
+df = self.spark.createDataFrame(
+[(i % 3, PythonOnlyPoint(float(i), float(i))) for i in range(10)],
+schema=schema)
+df.show()
--- End diff --

`DataFrame.show()` adds unnecessary stringification, so this test ends up 
testing unnecessary stuff (in fact, it would fail if the UDT didn't have 
`__str__`). I would use `collect()` to force materialization instead.





[GitHub] spark issue #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of Python-o...

2016-06-20 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/13778
  
Another update, still unresolved: 
https://gist.github.com/vlad17/cfcd42f30ea2380df4fb0bfa30dda7ce






[GitHub] spark issue #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of Python-o...

2016-06-20 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/13778
  
Update: it looks like the above is just an issue with the `__str__` method of 
UDF-returned UDTs, which is a different bug (and a pretty harmless one).





[GitHub] spark issue #13778: [SPARK-16062][SPARK-15989][SQL] Fix two bugs of Python-o...

2016-06-20 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/13778
  
Here's an unresolved example: 
https://gist.github.com/vlad17/2db8e14972344c693e8a3f03d91c9c8d





[GitHub] spark issue #13724: [SPARK-15973] [PYSPARK] Fix GroupedData Documentation

2016-06-17 Thread vlad17
Github user vlad17 commented on the issue:

https://github.com/apache/spark/pull/13724
  
The issue is that there are still some references to the old 
`GroupedData._jdf` attribute. If you click on the Jenkins link you'll find 
where (but I pasted it below too):
Error Message

```
'GroupedData' object has no attribute '_jdf'
Stacktrace

Traceback (most recent call last):
  File 
"/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/sql/tests.py", 
line 682, in test_aggregator
self.assertEqual([Row(**{"AVG(key#0)": 49.5})], g.mean().collect())
  File 
"/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/sql/group.py", 
line 40, in _api
jdf = getattr(self._jdf, name)(_to_seq(self.sql_ctx._sc, cols))
AttributeError: 'GroupedData' object has no attribute '_jdf'
```

