GitHub user davies opened a pull request:
https://github.com/apache/spark/pull/2369
[SPARK-3500] [SQL] use JavaSchemaRDD as SchemaRDD._jschema_rdd
Currently, SchemaRDD._jschema_rdd is a SchemaRDD, so the Scala API methods (coalesce(),
repartition()) cannot easily be called from Python, there
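The fix the PR title describes is a delegation pattern: the Python-side SchemaRDD holds a handle to a Java-side JavaSchemaRDD and forwards partition-level operations through it, instead of holding a Scala SchemaRDD whose methods are unreachable over Py4J. The sketch below models that pattern in plain Python; the class names (`JavaSchemaRDDStub`, `SchemaRDDSketch`) are hypothetical stand-ins, not actual PySpark classes.

```python
class JavaSchemaRDDStub:
    """Stand-in for the Py4J proxy to a Java-side JavaSchemaRDD."""
    def __init__(self, partitions):
        self.partitions = partitions

    def coalesce(self, n):
        # A real call would cross the Py4J gateway; here we just model
        # the semantics (coalesce never increases partition count).
        return JavaSchemaRDDStub(min(self.partitions, n))

    def repartition(self, n):
        return JavaSchemaRDDStub(n)


class SchemaRDDSketch:
    """Python-side wrapper that delegates to the wrapped Java-side object."""
    def __init__(self, jschema_rdd):
        self._jschema_rdd = jschema_rdd

    def coalesce(self, n):
        return SchemaRDDSketch(self._jschema_rdd.coalesce(n))

    def repartition(self, n):
        return SchemaRDDSketch(self._jschema_rdd.repartition(n))

    def getNumPartitions(self):
        return self._jschema_rdd.partitions


rdd = SchemaRDDSketch(JavaSchemaRDDStub(partitions=8))
print(rdd.coalesce(2).getNumPartitions())      # 2
print(rdd.repartition(16).getNumPartitions())  # 16
```

Because every forwarded call returns a new wrapper around the new Java-side handle, the Python API stays chainable while the actual work happens on the JVM side.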
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2369#issuecomment-55366505
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20211/consoleFull)
for PR 2369 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2369#issuecomment-55370584
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20211/consoleFull)
for PR 2369 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2369#issuecomment-55476949
This looks good to me. [There's some ongoing discussion on the
JIRA](https://issues.apache.org/jira/browse/SPARK-2797) over whether this
should be included in 1.1.1.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2369#issuecomment-55477091
I think this is clearly a bug, not a missing feature, since SchemaRDD
instances expose a public method that always throws an exception when called.
I'd like to merge
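JoshRosen's "public method that always throws" framing can be made concrete with a minimal sketch, assuming the failure mode is a wrapper delegating to an object of the wrong type; the names here (`ScalaSchemaRDDStub`, `BuggySchemaRDD`) are hypothetical and only illustrate why this counts as a bug rather than a missing feature.

```python
class ScalaSchemaRDDStub:
    """Stand-in for the Scala SchemaRDD: no Java-API methods exposed."""
    pass


class BuggySchemaRDD:
    def __init__(self, jschema_rdd):
        self._jschema_rdd = jschema_rdd

    def coalesce(self, n):
        # Public method, but delegation always fails: the wrapped
        # object never has the method being called.
        return self._jschema_rdd.coalesce(n)


rdd = BuggySchemaRDD(ScalaSchemaRDDStub())
try:
    rdd.coalesce(2)
except AttributeError as exc:
    print("always fails:", exc)
```

A method that is present in the public API yet raises on every invocation is a defect in the existing surface, which is the argument for treating the fix as backport-eligible.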
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2369
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2369#issuecomment-55479226
Backported into `branch-1.1` (a couple of minor merge conflicts, but only
in `tests.py`; I fixed them by hand).
---
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2369#discussion_r17510188
--- Diff: python/pyspark/tests.py ---
@@ -574,6 +574,34 @@ def test_broadcast_in_udf(self):
[res] = self.sqlCtx.sql("SELECT MYUDF('')").collect()
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2369#discussion_r17510195
--- Diff: python/pyspark/tests.py ---
@@ -574,6 +574,34 @@ def test_broadcast_in_udf(self):
[res] = self.sqlCtx.sql("SELECT MYUDF('')").collect()
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2369#discussion_r17510214
--- Diff: python/pyspark/tests.py ---
@@ -574,6 +574,34 @@ def test_broadcast_in_udf(self):
[res] = self.sqlCtx.sql("SELECT MYUDF('')").collect()
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2369#discussion_r17510242
--- Diff: python/pyspark/tests.py ---
@@ -574,6 +574,34 @@ def test_broadcast_in_udf(self):
[res] = self.sqlCtx.sql("SELECT MYUDF('')").collect()
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2369#discussion_r17510279
--- Diff: python/pyspark/tests.py ---
@@ -574,6 +574,34 @@ def test_broadcast_in_udf(self):
[res] = self.sqlCtx.sql("SELECT MYUDF('')").collect()