Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17577
**[Test build #75629 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75629/testReport)**
for PR 17577 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14830
**[Test build #75632 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75632/testReport)**
for PR 14830 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14830
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75632/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14830
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
Sorry, the Spark Java code style is different from my project team's style. Now I know, and it has been fixed.
Use 2-space indentation in general. For function declarations, use 4
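For illustration, the indentation rule quoted above looks roughly like this in practice (a hypothetical sketch following the Spark Scala style guide, not code from the PR):

```scala
object StyleExample {
  // Ordinary blocks use 2-space indentation.
  def shortMethod(x: Int): Int = {
    x + 1
  }

  // When a function declaration spills onto multiple lines,
  // the parameter continuation is indented 4 extra spaces.
  def longMethod(
      firstParameter: String,
      secondParameter: Int): String = {
    s"$firstParameter-$secondParameter"
  }
}
```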
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17557
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75633/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17557
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17557
**[Test build #75633 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75633/testReport)**
for PR 17557 at commit
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/17359
@rxin @cloud-fan @gatorsmile @viirya @tejasapatil Could you please help me
review this PR? Or is there anything I can do on this work?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16845
**[Test build #75631 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75631/testReport)**
for PR 16845 at commit
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/14830
Jenkins retest this please.
---
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17578
[SPARK-20269][Structured Streaming][Examples] add JavaWordCountProducer in
streaming examples
## What changes were proposed in this pull request?
run example of streaming
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17578
---
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17580
[20269][Structured Streaming][Examples] add JavaWordCountProducer in
streaming examples
## What changes were proposed in this pull request?
run example of streaming Kafka, currently
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17130#discussion_r110540400
--- Diff: examples/src/main/python/ml/fpgrowth_example.py ---
@@ -0,0 +1,48 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17533
@squito
Thank you so much for taking a look into this.
> we don't want the TSM requesting info from the DAGScheduler
Sorry, I missed this point in the previous change. Now I push the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17572
**[Test build #3653 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3653/testReport)**
for PR 17572 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17533
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75634/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17533
**[Test build #75634 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75634/testReport)**
for PR 17533 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17533
Merged build finished. Test FAILed.
---
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/17579
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17580
Can one of the admins verify this patch?
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17077
This looks like an important improvement that might make sense to try and
get in for 2.2, so I'll try to get some reviewing in.
---
Github user Stibbons commented on the issue:
https://github.com/apache/spark/pull/14830
I guess a rebase would be welcome; I can do it by tomorrow if you want.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17557
**[Test build #75633 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75633/testReport)**
for PR 17557 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17077#discussion_r110544338
--- Diff: python/pyspark/sql/tests.py ---
@@ -2038,6 +2038,61 @@ def test_BinaryType_serialization(self):
df =
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17577
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17577
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75629/
Test FAILed.
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/16845
Jenkins, retest this please.
---
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/17077#discussion_r110538670
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -545,6 +545,57 @@ def partitionBy(self, *cols):
self._jwrite =
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/17077#discussion_r110538647
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -545,6 +545,57 @@ def partitionBy(self, *cols):
self._jwrite =
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16845
**[Test build #75631 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75631/testReport)**
for PR 16845 at commit
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17077#discussion_r110542103
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -545,6 +545,57 @@ def partitionBy(self, *cols):
self._jwrite =
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17577#discussion_r110545017
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameNaFunctions.scala ---
@@ -407,10 +407,11 @@ final class DataFrameNaFunctions
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17577
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75630/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17577
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17577
**[Test build #75630 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75630/testReport)**
for PR 17577 at commit
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
The title, PR description, and motivation have been modified.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16845
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16845
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75631/
Test PASSed.
---
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17579
[20269][Structured Streaming][Examples] add JavaWordCountProducer in
streaming examples
## What changes were proposed in this pull request?
run example of streaming Kafka, currently
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17579
Can one of the admins verify this patch?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17572
**[Test build #3653 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3653/testReport)**
for PR 17572 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14830
**[Test build #75632 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75632/testReport)**
for PR 14830 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17580
The code style needs to be fixed, and the title too. What example is this based
on? This is the kind of info that you should put in a pull request.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17533
**[Test build #75634 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75634/testReport)**
for PR 17533 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17077#discussion_r110542557
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -545,6 +545,57 @@ def partitionBy(self, *cols):
self._jwrite =
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17541#discussion_r110547379
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -43,17 +43,8 @@ case class LogicalRelation(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17541
**[Test build #75635 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75635/testReport)**
for PR 17541 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17580#discussion_r110540044
--- Diff:
examples/src/main/java/org/apache/spark/examples/streaming/JavaKafkaWordCountProducer.java
---
@@ -0,0 +1,76 @@
+/*
+ * Licensed to
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17577
LGTM except one comment.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17574
@viirya Based on the code change history,
https://github.com/apache/spark/pull/13642 removed the usage of ASM in the test
case `SQLMetricsSuite.scala`. Thus, it is safe to remove the test
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17359#discussion_r110568613
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/NGrams.scala
---
@@ -0,0 +1,258 @@
+/*
+ * Licensed to
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17574
@gatorsmile Thanks for the search. I don't see any usage of it in
`sql/core` now. It is only used in `core`, `repl`, `graphx`. So I am wondering
if we can completely remove it from the dependency.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17541
**[Test build #75640 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75640/testReport)**
for PR 17541 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17541
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17541
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75640/
Test PASSed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17577
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110574182
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -54,8 +54,6 @@ case class
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/17568
ping @cloud-fan
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/17330
@cloud-fan Sure Wenchen.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17577
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17577
**[Test build #75636 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75636/testReport)**
for PR 17577 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17577
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75636/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17541
**[Test build #75637 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75637/testReport)**
for PR 17541 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17541
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17541
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75637/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17577
**[Test build #75638 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75638/testReport)**
for PR 17577 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17577
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17577
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75638/
Test PASSed.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17581
If the user really needs to limit the returned results, why not directly
use a `limit` operator?
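For context, capping the result size at the query level is a one-liner in Spark (a minimal sketch, assuming an existing `SparkSession` named `spark` and a registered table `events`; both names are illustrative):

```scala
// Limit pushed into the query plan: only the first 100 rows are
// returned, instead of collecting the full result set to the driver.
val capped = spark.sql("SELECT * FROM events LIMIT 100")

// Equivalent DataFrame API form.
val cappedDf = spark.table("events").limit(100)
capped.show()
```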
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17541
retest this please
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17541#discussion_r110564162
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
---
@@ -359,9 +359,59 @@ abstract class QueryPlan[PlanType <:
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17581
Btw, I am not sure why you said `SQLConf.THRIFTSERVER_INCREMENTAL_COLLECT`
will waste cluster resources. The difference with incremental collect
is whether the results will be materialized
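For reference, `SQLConf.THRIFTSERVER_INCREMENTAL_COLLECT` is exposed through the `spark.sql.thriftServer.incrementalCollect` configuration key, so the behavior can be toggled without code changes (a configuration sketch; verify the key against your Spark version):

```
# spark-defaults.conf
# Fetch result partitions one at a time instead of materializing the
# whole result set on the driver when serving Thrift server queries.
spark.sql.thriftServer.incrementalCollect true
```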
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17541
**[Test build #75640 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75640/testReport)**
for PR 17541 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17561
@ueshin, about the Jenkins thing, please refer to
https://github.com/apache/spark/pull/17469#issuecomment-292663021. It might be
helpful.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17577
LGTM, do we still need #15994 ?
---
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17581
My opinion is:
In production, users often run selects without a limit, which often leads to
the service going offline. This is a common situation, so I added the parameter.
When
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17582
[SPARK-20239][Core] Improve HistoryServer's ACL mechanism
## What changes were proposed in this pull request?
Currently SHS (Spark History Server) has two different ACLs:
* ACL of
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17582
**[Test build #75641 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75641/testReport)**
for PR 17582 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17581
> In production, users often run selects without a limit, which often leads to
the service going offline. This is a common situation, so I added the parameter.
If the users tend to select without a
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17569
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17582
**[Test build #75643 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75643/testReport)**
for PR 17582 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17574
Thanks! Merging to master/2.1
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17574
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17359#discussion_r110569002
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/NGrams.scala
---
@@ -0,0 +1,258 @@
+/*
+ * Licensed to
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110569876
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17436#discussion_r110576977
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -67,6 +67,9 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable
with
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17436#discussion_r110576873
--- Diff: core/src/main/java/org/apache/spark/memory/MemoryConsumer.java ---
@@ -41,7 +41,7 @@ protected MemoryConsumer(TaskMemoryManager
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17359#discussion_r110568699
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/NGrams.scala
---
@@ -0,0 +1,249 @@
+/*
+ * Licensed to
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17528
I just found some references about the order. It seems there was a question
about it -
http://stackoverflow.com/questions/18544006/how-do-i-indicate-collate-order-in-roxygen2
and
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110570339
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17574
Meh, let's not bother. There isn't any harm in the current setup since it's
already a transitive dependency. Why waste time on those?
---
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/17577
Merged into master.
@cloud-fan #15994 is still needed when a user wants to fill NaN with an
extremely large default long value. Thanks.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17580
What is the purpose of adding this example? I think we already have a
`KafkaWordCountProducer` for the convenience of the Kafka streaming example, and we
could use that to send events to Kafka. I
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17359#discussion_r110568482
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/NGrams.scala
---
@@ -0,0 +1,249 @@
+/*
+ * Licensed to
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17569
thanks, merging to master!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110570287
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17541
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17580
When a user uses Spark to develop a streaming application, he first
wants to find and learn from an example program in
`spark/examples/src/main/java/org/apache/spark/examples/`