[GitHub] spark pull request #12370: [SPARK-14599][ML] BaggedPoint should support samp...

2016-10-10 Thread sethah
Github user sethah closed the pull request at:

https://github.com/apache/spark/pull/12370





[GitHub] spark pull request #12370: [SPARK-14599][ML] BaggedPoint should support samp...

2016-06-06 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/12370#discussion_r66007915
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/impl/BaggedPoint.scala ---
@@ -33,13 +33,20 @@ import org.apache.spark.util.random.XORShiftRandom
  * this datum has 1 copy, 0 copies, and 4 copies in the 3 subsamples, respectively.
  *
  * @param datum  Data instance
- * @param subsampleWeights  Weight of this instance in each subsampled dataset.
- *
- * TODO: This does not currently support (Double) weighted instances.  Once MLlib has weighted
- *   dataset support, update.  (We store subsampleWeights as Double for this future extension.)
+ * @param subsampleCounts  Number of samples of this instance in each subsampled dataset.
+ * @param sampleWeight  The weight of this instance.
  */
-private[spark] class BaggedPoint[Datum](val datum: Datum, val subsampleWeights: Array[Double])
-  extends Serializable
+private[spark] class BaggedPoint[Datum](
+    val datum: Datum,
+    val subsampleCounts: Array[Int],
+    val sampleWeight: Double) extends Serializable {
+
+  /**
+   * Subsample counts weighted by the sample weight.
+   */
+  def weightedCounts: Array[Double] = subsampleCounts.map(_ * sampleWeight)
--- End diff --

I added this as a convenience method. If we made it a `val`, we would add redundant storage overhead to the class. If preferable, we could remove it entirely.
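
A minimal Scala sketch of the def-vs-val tradeoff being discussed, assuming the only difference is where the weighted array lives (the `private[spark]` modifier is dropped so the snippet compiles standalone, and the sample weight 0.5 is an illustrative value, not from the PR):

class BaggedPoint[Datum](
    val datum: Datum,
    val subsampleCounts: Array[Int],
    val sampleWeight: Double) extends Serializable {

  // `def`: no extra field per instance; a fresh Array[Double] is built on each call.
  def weightedCounts: Array[Double] = subsampleCounts.map(_ * sampleWeight)

  // `val` alternative raised in the review: computed once, but kept alive for the
  // lifetime of the point, roughly doubling its per-point array storage.
  // val weightedCountsEager: Array[Double] = subsampleCounts.map(_ * sampleWeight)
}

object BaggedPointSketch {
  def main(args: Array[String]): Unit = {
    // The datum from the scaladoc example: 1, 0, and 4 copies in 3 subsamples,
    // here with an assumed sample weight of 0.5.
    val p = new BaggedPoint("x", Array(1, 0, 4), 0.5)
    println(p.weightedCounts.mkString(", "))  // prints: 0.5, 0.0, 2.0
  }
}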





[GitHub] spark pull request #12370: [SPARK-14599][ML] BaggedPoint should support samp...

2016-06-06 Thread MechCoder
Github user MechCoder commented on a diff in the pull request:

https://github.com/apache/spark/pull/12370#discussion_r65994490
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/impl/BaggedPoint.scala ---
@@ -33,13 +33,20 @@ import org.apache.spark.util.random.XORShiftRandom
  * this datum has 1 copy, 0 copies, and 4 copies in the 3 subsamples, respectively.
  *
  * @param datum  Data instance
- * @param subsampleWeights  Weight of this instance in each subsampled dataset.
- *
- * TODO: This does not currently support (Double) weighted instances.  Once MLlib has weighted
- *   dataset support, update.  (We store subsampleWeights as Double for this future extension.)
+ * @param subsampleCounts  Number of samples of this instance in each subsampled dataset.
+ * @param sampleWeight  The weight of this instance.
  */
-private[spark] class BaggedPoint[Datum](val datum: Datum, val subsampleWeights: Array[Double])
-  extends Serializable
+private[spark] class BaggedPoint[Datum](
+    val datum: Datum,
+    val subsampleCounts: Array[Int],
+    val sampleWeight: Double) extends Serializable {
+
+  /**
+   * Subsample counts weighted by the sample weight.
+   */
+  def weightedCounts: Array[Double] = subsampleCounts.map(_ * sampleWeight)
--- End diff --

Should this be a `val`?

