[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/15777





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-20 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88833295
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/BisectingKMeans.scala ---
@@ -95,8 +95,7 @@ class BisectingKMeansModel private[ml] (
   @Since("2.0.0")
   override def copy(extra: ParamMap): BisectingKMeansModel = {
     val copied = copyValues(new BisectingKMeansModel(uid, parentModel), extra)
-    if (trainingSummary.isDefined) copied.setSummary(trainingSummary.get)
-    copied.setParent(this.parent)
+    copied.setSummary(trainingSummary).setParent(this.parent)
--- End diff --

Updated. I also added tests.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-19 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88778015
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/BisectingKMeans.scala ---
@@ -95,8 +95,7 @@ class BisectingKMeansModel private[ml] (
   @Since("2.0.0")
   override def copy(extra: ParamMap): BisectingKMeansModel = {
     val copied = copyValues(new BisectingKMeansModel(uid, parentModel), extra)
-    if (trainingSummary.isDefined) copied.setSummary(trainingSummary.get)
-    copied.setParent(this.parent)
+    copied.setSummary(trainingSummary).setParent(this.parent)
--- End diff --

This looks better. Could you make the change for Scala LiR, LoR, GLM and 
KMeans as well? I think they should be consistent. Thanks.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-17 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88495818
  
--- Diff: python/pyspark/ml/tests.py ---
@@ -1097,6 +1097,44 @@ def test_logistic_regression_summary(self):
         sameSummary = model.evaluate(df)
         self.assertAlmostEqual(sameSummary.areaUnderROC, s.areaUnderROC)
 
+    def test_gaussian_mixture_summary(self):
+        data = [(Vectors.dense(1.0),), (Vectors.dense(5.0),), (Vectors.dense(10.0),),
+                (Vectors.sparse(1, [], []),)]
+        df = self.spark.createDataFrame(data, ["features"])
+        gmm = GaussianMixture(k=2)
+        model = gmm.fit(df)
+        self.assertTrue(model.hasSummary)
+        s = model.summary
+        self.assertTrue(isinstance(s.predictions, DataFrame))
+        self.assertEqual(s.probabilityCol, "probability")
+        self.assertTrue(isinstance(s.probability, DataFrame))
+        self.assertEqual(s.featuresCol, "features")
+        self.assertEqual(s.predictionCol, "prediction")
+        self.assertTrue(isinstance(s.cluster, DataFrame))
+        self.assertEqual(len(s.clusterSizes), 2)
+        self.assertEqual(s.k, 2)
+
+        model._call_java("setSummary", None)
--- End diff --

@yanboliang I switched it up. Let me know what you think.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-17 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88485754
  
--- Diff: python/pyspark/ml/tests.py ---
@@ -1097,6 +1097,44 @@ def test_logistic_regression_summary(self):
         sameSummary = model.evaluate(df)
         self.assertAlmostEqual(sameSummary.areaUnderROC, s.areaUnderROC)
 
+    def test_gaussian_mixture_summary(self):
+        data = [(Vectors.dense(1.0),), (Vectors.dense(5.0),), (Vectors.dense(10.0),),
+                (Vectors.sparse(1, [], []),)]
+        df = self.spark.createDataFrame(data, ["features"])
+        gmm = GaussianMixture(k=2)
+        model = gmm.fit(df)
+        self.assertTrue(model.hasSummary)
+        s = model.summary
+        self.assertTrue(isinstance(s.predictions, DataFrame))
+        self.assertEqual(s.probabilityCol, "probability")
+        self.assertTrue(isinstance(s.probability, DataFrame))
+        self.assertEqual(s.featuresCol, "features")
+        self.assertEqual(s.predictionCol, "prediction")
+        self.assertTrue(isinstance(s.cluster, DataFrame))
+        self.assertEqual(len(s.clusterSizes), 2)
+        self.assertEqual(s.k, 2)
+
+        model._call_java("setSummary", None)
--- End diff --

Yeah, I think it makes sense to add summary-related doc tests for algorithms to 
illustrate the output of the summary, so one more line to check ```hasSummary``` 
does not seem to have much impact. @jkbradley What's your opinion? Thanks.
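
For illustration, one such doctest addition might read roughly as follows in the `GaussianMixture` docstring (a sketch only; it assumes the existing docstring example where `gm` is fit on a small `df` with `k=2`, and the exact values are illustrative):
```
>>> model = gm.fit(df)
>>> model.hasSummary
True
>>> summary = model.summary
>>> summary.k
2
>>> len(summary.clusterSizes)
2
```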





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-17 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88483092
  
--- Diff: python/pyspark/ml/tests.py ---
@@ -1097,6 +1097,44 @@ def test_logistic_regression_summary(self):
         sameSummary = model.evaluate(df)
         self.assertAlmostEqual(sameSummary.areaUnderROC, s.areaUnderROC)
 
+    def test_gaussian_mixture_summary(self):
+        data = [(Vectors.dense(1.0),), (Vectors.dense(5.0),), (Vectors.dense(10.0),),
+                (Vectors.sparse(1, [], []),)]
+        df = self.spark.createDataFrame(data, ["features"])
+        gmm = GaussianMixture(k=2)
+        model = gmm.fit(df)
+        self.assertTrue(model.hasSummary)
+        s = model.summary
+        self.assertTrue(isinstance(s.predictions, DataFrame))
+        self.assertEqual(s.probabilityCol, "probability")
+        self.assertTrue(isinstance(s.probability, DataFrame))
+        self.assertEqual(s.featuresCol, "features")
+        self.assertEqual(s.predictionCol, "prediction")
+        self.assertTrue(isinstance(s.cluster, DataFrame))
+        self.assertEqual(len(s.clusterSizes), 2)
+        self.assertEqual(s.k, 2)
+
+        model._call_java("setSummary", None)
--- End diff --

We can do it, though typically the doc tests are for things that we want to 
test that also illustrate functionality to the users. And @jkbradley seemed 
against adding it as a doc test 
[here](https://github.com/apache/spark/pull/13557). I'll add it for now and we 
can revert it if we decide that's best.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-16 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88269525
  
--- Diff: python/pyspark/ml/clustering.py ---
@@ -346,6 +453,27 @@ def computeCost(self, dataset):
         """
         return self._call_java("computeCost", dataset)
 
+    @property
+    @since("2.1.0")
+    def hasSummary(self):
+        """
+        Indicates whether a training summary exists for this model instance.
+        """
+        return self._call_java("hasSummary")
+
+    @property
+    @since("2.1.0")
+    def summary(self):
+        """
+        Gets summary (e.g. cluster assignments, cluster sizes) of the model trained on the
+        training set. An exception is thrown if no summary exists.
+        """
+        if self.hasSummary:
+            return GaussianMixtureSummary(self._call_java("summary"))
--- End diff --

Wow, good catch!





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-16 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88252426
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/BisectingKMeans.scala ---
@@ -132,7 +132,7 @@ class BisectingKMeansModel private[ml] (
   private var trainingSummary: Option[BisectingKMeansSummary] = None
 
   private[clustering] def setSummary(summary: BisectingKMeansSummary): this.type = {
-    this.trainingSummary = Some(summary)
+    this.trainingSummary = Option(summary)
--- End diff --

I'd prefer to make the argument an ```Option[BisectingKMeansSummary]```, like:
```
private[clustering] def setSummary(summary: Option[BisectingKMeansSummary]): this.type = {
  this.trainingSummary = summary
  this
}
```
And test the no-summary case with:
```
model.setSummary(None)
assert(!model.hasSummary)
```
Calling ```setSummary(null)``` and then testing whether the summary exists seems very 
tricky. The type of ```trainingSummary``` is ```Option[BisectingKMeansSummary]``` with 
```None``` as its default value, so ```setSummary(None)``` makes more sense for the 
scenario where the model has no summary.
I see the reason for this change is that you want to call ```setSummary``` from the 
Python side, where Python ```None``` is converted to ```null``` in Scala. But this is a 
private function, so we don't need to test it across Scala and Python; private functions 
should not be called by users.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-16 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88253073
  
--- Diff: python/pyspark/ml/clustering.py ---
@@ -17,16 +17,75 @@
 
 from pyspark import since, keyword_only
 from pyspark.ml.util import *
-from pyspark.ml.wrapper import JavaEstimator, JavaModel
+from pyspark.ml.wrapper import JavaEstimator, JavaModel, JavaWrapper
 from pyspark.ml.param.shared import *
 from pyspark.ml.common import inherit_doc
 
-__all__ = ['BisectingKMeans', 'BisectingKMeansModel',
+__all__ = ['BisectingKMeans', 'BisectingKMeansModel', 'BisectingKMeansSummary',
+           'ClusteringSummary',
--- End diff --

+1 @zhengruifeng 





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-16 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88256003
  
--- Diff: python/pyspark/ml/clustering.py ---
@@ -346,6 +453,27 @@ def computeCost(self, dataset):
         """
         return self._call_java("computeCost", dataset)
 
+    @property
+    @since("2.1.0")
+    def hasSummary(self):
+        """
+        Indicates whether a training summary exists for this model instance.
+        """
+        return self._call_java("hasSummary")
+
+    @property
+    @since("2.1.0")
+    def summary(self):
+        """
+        Gets summary (e.g. cluster assignments, cluster sizes) of the model trained on the
+        training set. An exception is thrown if no summary exists.
+        """
+        if self.hasSummary:
+            return GaussianMixtureSummary(self._call_java("summary"))
--- End diff --

Typo, should be ```BisectingKMeansSummary```?
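
For reference, a sketch of how the corrected property might read (the no-summary branch is illustrative and mirrors the `RuntimeError` pattern used in the classification change in this PR):
```
@property
@since("2.1.0")
def summary(self):
    """
    Gets summary (e.g. cluster assignments, cluster sizes) of the model trained on the
    training set. An exception is thrown if no summary exists.
    """
    if self.hasSummary:
        return BisectingKMeansSummary(self._call_java("summary"))
    else:
        # Illustrative message; the actual wording may differ.
        raise RuntimeError("No training summary available for this %s" %
                           self.__class__.__name__)
```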





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-16 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88255016
  
--- Diff: python/pyspark/ml/classification.py ---
@@ -309,13 +309,16 @@ def interceptVector(self):
     @since("2.0.0")
     def summary(self):
         """
-        Gets summary (e.g. residuals, mse, r-squared ) of model on
-        training set. An exception is thrown if
-        `trainingSummary is None`.
+        Gets summary (e.g. accuracy/precision/recall, objective history, total iterations) of model
+        trained on the training set. An exception is thrown if `trainingSummary is None`.
         """
-        java_blrt_summary = self._call_java("summary")
-        # Note: Once multiclass is added, update this to return correct summary
-        return BinaryLogisticRegressionTrainingSummary(java_blrt_summary)
+        if self.hasSummary:
+            java_blrt_summary = self._call_java("summary")
+            # Note: Once multiclass is added, update this to return correct summary
+            return BinaryLogisticRegressionTrainingSummary(java_blrt_summary)
+        else:
+            raise RuntimeError("No training summary available for this %s" %
--- End diff --

I like this change; we should always throw exceptions that are easy for users to 
understand.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-16 Thread yanboliang
Github user yanboliang commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r88261887
  
--- Diff: python/pyspark/ml/tests.py ---
@@ -1097,6 +1097,44 @@ def test_logistic_regression_summary(self):
         sameSummary = model.evaluate(df)
         self.assertAlmostEqual(sameSummary.areaUnderROC, s.areaUnderROC)
 
+    def test_gaussian_mixture_summary(self):
+        data = [(Vectors.dense(1.0),), (Vectors.dense(5.0),), (Vectors.dense(10.0),),
+                (Vectors.sparse(1, [], []),)]
+        df = self.spark.createDataFrame(data, ["features"])
+        gmm = GaussianMixture(k=2)
+        model = gmm.fit(df)
+        self.assertTrue(model.hasSummary)
+        s = model.summary
+        self.assertTrue(isinstance(s.predictions, DataFrame))
+        self.assertEqual(s.probabilityCol, "probability")
+        self.assertTrue(isinstance(s.probability, DataFrame))
+        self.assertEqual(s.featuresCol, "features")
+        self.assertEqual(s.predictionCol, "prediction")
+        self.assertTrue(isinstance(s.cluster, DataFrame))
+        self.assertEqual(len(s.clusterSizes), 2)
+        self.assertEqual(s.k, 2)
+
+        model._call_java("setSummary", None)
--- End diff --

Why do we test this? ```setSummary``` is private, and it should not produce a model 
without a summary in the ordinary case. I think we should only test the public API for 
Python. Furthermore, if we want to test a model without a summary, we would need to 
write a dummy Scala model without one and check ```hasSummary``` directly on the Python 
side. Since the Scala function is not public, we cannot really confirm Scala/Python 
compatibility here, so testing it does not make much sense.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-15 Thread zhengruifeng
Github user zhengruifeng commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r87966225
  
--- Diff: python/pyspark/ml/clustering.py ---
@@ -17,16 +17,75 @@
 
 from pyspark import since, keyword_only
 from pyspark.ml.util import *
-from pyspark.ml.wrapper import JavaEstimator, JavaModel
+from pyspark.ml.wrapper import JavaEstimator, JavaModel, JavaWrapper
 from pyspark.ml.param.shared import *
 from pyspark.ml.common import inherit_doc
 
-__all__ = ['BisectingKMeans', 'BisectingKMeansModel',
+__all__ = ['BisectingKMeans', 'BisectingKMeansModel', 'BisectingKMeansSummary',
+           'ClusteringSummary',
--- End diff --

I think we don't need to expose `ClusteringSummary`, since on the Scala side 
`ClusteringSummary` is `private[clustering]`.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-14 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r87841411
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/BisectingKMeans.scala ---
@@ -132,7 +132,7 @@ class BisectingKMeansModel private[ml] (
   private var trainingSummary: Option[BisectingKMeansSummary] = None
 
   private[clustering] def setSummary(summary: BisectingKMeansSummary): this.type = {
-    this.trainingSummary = Some(summary)
+    this.trainingSummary = Option(summary)
--- End diff --

Per @holdenk's suggestion, I changed `setSummary` to use `Option.apply`, which treats 
`null` as `None`. This allows us to exercise the test case for `hasSummary` when 
`summary == None`. This was never tested in Scala either, so I added unit tests for 
both Scala and Python.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-05 Thread holdenk
Github user holdenk commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r86676702
  
--- Diff: python/pyspark/ml/tests.py ---
@@ -1097,6 +1097,42 @@ def test_logistic_regression_summary(self):
         sameSummary = model.evaluate(df)
         self.assertAlmostEqual(sameSummary.areaUnderROC, s.areaUnderROC)
 
+    def test_gaussian_mixture_summary(self):
+        data = [(Vectors.dense(1.0),), (Vectors.dense(5.0),), (Vectors.dense(10.0),),
+                (Vectors.sparse(1, [], []),)]
+        df = self.spark.createDataFrame(data, ["features"])
+        gmm = GaussianMixture(k=2)
+        model = gmm.fit(df)
+        self.assertTrue(model.hasSummary)
+        s = model.summary
+        self.assertTrue(isinstance(s.predictions, DataFrame))
+        self.assertEqual(s.probabilityCol, "probability")
+        self.assertTrue(isinstance(s.probability, DataFrame))
+        self.assertEqual(s.featuresCol, "features")
+        self.assertEqual(s.predictionCol, "prediction")
+        self.assertTrue(isinstance(s.cluster, DataFrame))
+        self.assertEqual(len(s.clusterSizes), 2)
+        self.assertEqual(s.k, 2)
+
+        # TODO: test when there is no summary
--- End diff --

It might make sense to update `setSummary` to treat null as empty (e.g. `Option` 
instead of `Some`) for easy testing.
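
A rough sketch of the Python-side test this would enable (illustrative; it assumes the Scala setter goes through `Option.apply` so that a Java `null` clears the summary, and that accessing `summary` without one raises `RuntimeError` as in the classification change in this PR):
```
# Clear the summary via the private bridge, then exercise the public API.
model._call_java("setSummary", None)
self.assertFalse(model.hasSummary)
with self.assertRaises(RuntimeError):
    model.summary
```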





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-05 Thread holdenk
Github user holdenk commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r86676679
  
--- Diff: python/pyspark/ml/classification.py ---
@@ -309,13 +309,16 @@ def interceptVector(self):
     @since("2.0.0")
     def summary(self):
         """
-        Gets summary (e.g. residuals, mse, r-squared ) of model on
-        training set. An exception is thrown if
-        `trainingSummary is None`.
+        Gets summary (e.g. accuracy/precision/recall, objective history, total iterations) of model
+        trained on the training set. An exception is thrown if `trainingSummary is None`.
         """
-        java_blrt_summary = self._call_java("summary")
-        # Note: Once multiclass is added, update this to return correct summary
-        return BinaryLogisticRegressionTrainingSummary(java_blrt_summary)
+        if self.hasSummary:
+            java_blrt_summary = self._call_java("summary")
+            # Note: Once multiclass is added, update this to return correct summary
+            return BinaryLogisticRegressionTrainingSummary(java_blrt_summary)
+        else:
+            raise RuntimeError("No training summary available for this %s" %
--- End diff --

I think that's generally a good improvement; the `Py4J` errors are often confusing to 
end users.





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-04 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r86653312
  
--- Diff: python/pyspark/ml/classification.py ---
@@ -309,13 +309,16 @@ def interceptVector(self):
     @since("2.0.0")
     def summary(self):
         """
-        Gets summary (e.g. residuals, mse, r-squared ) of model on
-        training set. An exception is thrown if
-        `trainingSummary is None`.
+        Gets summary (e.g. accuracy/precision/recall, objective history, total iterations) of model
+        trained on the training set. An exception is thrown if `trainingSummary is None`.
         """
-        java_blrt_summary = self._call_java("summary")
-        # Note: Once multiclass is added, update this to return correct summary
-        return BinaryLogisticRegressionTrainingSummary(java_blrt_summary)
+        if self.hasSummary:
+            java_blrt_summary = self._call_java("summary")
+            # Note: Once multiclass is added, update this to return correct summary
+            return BinaryLogisticRegressionTrainingSummary(java_blrt_summary)
+        else:
+            raise RuntimeError("No training summary available for this %s" %
--- End diff --

Before, this would throw a `Py4JJavaError`. I think it's slightly better to 
throw a `RuntimeError` here as is done in Scala.
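
For example, caller-side handling then becomes straightforward (a hypothetical snippet; it assumes `model` is a `LogisticRegressionModel` with no attached training summary, e.g. one loaded from disk):
```
try:
    s = model.summary
    print(s.areaUnderROC)
except RuntimeError as e:
    # A clear message naming the model class, instead of an opaque
    # Py4JJavaError propagated from the JVM.
    print("No summary: %s" % e)
```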





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-04 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/15777#discussion_r86653456
  
--- Diff: python/pyspark/ml/tests.py ---
@@ -1097,6 +1097,42 @@ def test_logistic_regression_summary(self):
         sameSummary = model.evaluate(df)
         self.assertAlmostEqual(sameSummary.areaUnderROC, s.areaUnderROC)
 
+    def test_gaussian_mixture_summary(self):
+        data = [(Vectors.dense(1.0),), (Vectors.dense(5.0),), (Vectors.dense(10.0),),
+                (Vectors.sparse(1, [], []),)]
+        df = self.spark.createDataFrame(data, ["features"])
+        gmm = GaussianMixture(k=2)
+        model = gmm.fit(df)
+        self.assertTrue(model.hasSummary)
+        s = model.summary
+        self.assertTrue(isinstance(s.predictions, DataFrame))
+        self.assertEqual(s.probabilityCol, "probability")
+        self.assertTrue(isinstance(s.probability, DataFrame))
+        self.assertEqual(s.featuresCol, "features")
+        self.assertEqual(s.predictionCol, "prediction")
+        self.assertTrue(isinstance(s.cluster, DataFrame))
+        self.assertEqual(len(s.clusterSizes), 2)
+        self.assertEqual(s.k, 2)
+
+        # TODO: test when there is no summary
--- End diff --

We should test that `hasSummary` returns `False` when there is no summary 
available, and that `summary` throws an exception. The problem I'm having is 
that I'm not sure how to create this test case. The only way to get a model is 
by calling fit, which will produce a model with a summary. Calling 
`model._call_java("setSummary", None)` doesn't work either. Is there some way 
that I'm missing?





[GitHub] spark pull request #15777: [SPARK-18282][ML][PYSPARK] Add python clustering ...

2016-11-04 Thread sethah
GitHub user sethah opened a pull request:

https://github.com/apache/spark/pull/15777

[SPARK-18282][ML][PYSPARK] Add python clustering summaries for GMM and BKM

## What changes were proposed in this pull request?

Add model summary APIs for `GaussianMixtureModel` and 
`BisectingKMeansModel` in pyspark.
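
For example, the new API can be exercised roughly as follows (an illustrative sketch, assuming an active `SparkSession` named `spark`; the sample data mirrors the unit test):

```
from pyspark.ml.clustering import GaussianMixture
from pyspark.ml.linalg import Vectors

df = spark.createDataFrame(
    [(Vectors.dense(1.0),), (Vectors.dense(5.0),), (Vectors.dense(10.0),),
     (Vectors.sparse(1, [], []),)],
    ["features"])
model = GaussianMixture(k=2).fit(df)

if model.hasSummary:
    s = model.summary            # GaussianMixtureSummary
    print(s.k)                   # number of clusters
    print(s.clusterSizes)        # rows assigned to each cluster
    s.predictions.show()         # per-row cluster assignments and probabilities
```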

## How was this patch tested?

Unit tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sethah/spark pyspark_cluster_summaries

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/15777.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #15777





