Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r130292918
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1010,7 +1014,16 @@ class Analyzer(
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18783
Looks like you opened a duplicate of
https://github.com/apache/spark/pull/18777
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user 2ooom closed the pull request at:
https://github.com/apache/spark/pull/18777
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18779
**[Test build #80070 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80070/testReport)**
for PR 18779 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18779
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80070/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18779
Merged build finished. Test PASSed.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18778#discussion_r130320047
--- Diff:
core/src/test/java/test/org/apache/spark/JavaSparkContextSuite.java ---
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18780
Sure, I added those and took out 18475.
---
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r130330782
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1010,7 +1014,16 @@ class Analyzer(
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18645
The json4s change above made this change notably simpler. The current
problem is the same, next error:
```
sbt.ForkMain$ForkError: java.lang.ClassCastException: java.lang.Integer cannot
```
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18778
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80071/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18778
**[Test build #80071 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80071/testReport)**
for PR 18778 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18645
**[Test build #80074 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80074/testReport)**
for PR 18645 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
@maropu As an alternative approach, we can avoid resolving
`UnresolvedOrdinal` to actual expressions in this rule.
Instead, we can create a `ResolvedOrdinal` and replace it with the actual agg
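The idea above can be sketched with a toy model in plain Python (the class
names and the two-phase rules are simplified stand-ins for Catalyst's, not the
real Analyzer API): wrap group-by int literals in a marker exactly once, then
replace the markers with select-list expressions in a later rule, so a rule
that runs repeatedly to a fixed point never re-wraps.

```python
# Toy sketch of ordinal substitution with a marker class. Hypothetical
# names mirroring Catalyst's, not the real Spark Analyzer API.

class Literal:
    def __init__(self, value):
        self.value = value

class UnresolvedOrdinal:
    """Marks an int literal in GROUP BY that still refers to a select-list position."""
    def __init__(self, ordinal):
        self.ordinal = ordinal

def substitute_ordinals(group_by):
    """Rule 1: wrap raw int literals once; already-wrapped entries are left
    alone, so running the rule repeatedly is safe (idempotent)."""
    return [UnresolvedOrdinal(e.value) if isinstance(e, Literal) else e
            for e in group_by]

def resolve_ordinals(select_list, group_by):
    """Rule 2: replace each marker with the select-list expression at that position."""
    return [select_list[e.ordinal - 1] if isinstance(e, UnresolvedOrdinal) else e
            for e in group_by]

# select sum(b), a from data group by 2  ->  group by a
select_list = ["sum(b)", "a"]
group_by = substitute_ordinals([Literal(2)])
group_by = substitute_ordinals(group_by)  # second run: no double-wrapping
print(resolve_ordinals(select_list, group_by))  # ['a']
```

The marker type is what lets the analyzer distinguish "ordinal still to be
resolved" from "plain literal" across repeated rule applications.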
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
Yea, just a suggestion.
---
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/18763
Jenkins, test this please.
---
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/18764
Jenkins, test this please.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18784
Overall I trust your judgment if you think it's right to remove this, but I
think it would have to wait for Spark 3.0?
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r130293284
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1010,7 +1014,16 @@ class Analyzer(
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
@cloud-fan would you please take a look? This PR focuses on the issue of
`spark.hadoop.*` properties not being respected by the CliSessionState (`cliState`); most of
them take effect when we call
Github user 2ooom commented on the issue:
https://github.com/apache/spark/pull/18783
@srowen This one is for master branch, #18777 is for `branch-2.1`
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
@maropu Thanks for clarifying it.
Although they look similar, semantically I'd treat them as different rules.
However, I don't have a strong opinion on this.
Btw,
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18780
@HyukjinKwon I would add:
https://github.com/apache/spark/pull/18308
https://github.com/apache/spark/pull/18599
https://github.com/apache/spark/pull/18619
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18785
**[Test build #80073 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80073/testReport)**
for PR 18785 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
Sorry for my ambiguous explanation; I mean:
```
scala> sql("""select 3, 4, sum(b) from data group by 1, 2""").show
17/07/31 17:13:23 TRACE HiveSessionStateBuilder$$anon$1:
//
```
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18522
I suspect that this doesn't hurt, because at the point you stop copying
input to a file, you are done with the input, and I don't think there is any
reason that the caller would ever continue
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
@viirya Spark does not support that.
See: https://github.com/apache/spark/pull/17223#issuecomment-286608743
@dongjoon-hyun How about throwing an exception when users try to change them, as
@cloud-fan
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18667#discussion_r130308842
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/LongType.scala ---
@@ -43,7 +43,7 @@ class LongType private() extends IntegralType {
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/18784
@ArtRand @susanxhuynh pls review.
---
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18143
I am currently trying to run some performance tests to see how this
change impacts performance in any case. Meanwhile, if I could get an idea of
whether things are moving in the right direction, that
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18785
I googled a bit before. It looks like one way I could find is to
manually add `total_ordering` -
`https://github.com/nvie/rq/commit/282f4be9316d608ebbacd6114aab1203591e8f95` -
however, this
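For context, Python's `functools.total_ordering` (available since 2.7) fills
in the remaining rich-comparison methods from `__eq__` plus one ordering
method. A minimal, self-contained sketch with a hypothetical `Job` class (not
rq's actual class):

```python
from functools import total_ordering

@total_ordering
class Job:
    """Hypothetical example: define __eq__ and __lt__, and total_ordering
    derives __le__, __gt__, and __ge__ automatically."""
    def __init__(self, priority):
        self.priority = priority
    def __eq__(self, other):
        return self.priority == other.priority
    def __lt__(self, other):
        return self.priority < other.priority

jobs = sorted([Job(3), Job(1), Job(2)])  # sorted() only needs __lt__
print([j.priority for j in jobs])  # [1, 2, 3]
```

This is the usual fix when a class that relied on implicit comparisons needs
to sort and compare consistently under newer Pythons.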
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18783
Can one of the admins verify this patch?
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
I checked the root cause of this: `SubstituteUnresolvedOrdinals` wrongly
wraps int literals with `UnresolvedOrdinal` again.
```
17/07/31 16:54:50 TRACE
```
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r130300193
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1010,7 +1014,16 @@ class Analyzer(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18778
Merged build finished. Test PASSed.
---
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/18784
@srowen ok, yes. I have discussed this with Art and Suzan from Mesosphere
and we made the decision to remove it, as it has been deprecated for so long. I am
waiting for their comments here.
In the
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18785
Ah OK, so it's a no-go. I wonder if writing python2.7 in the shebang will
work on the Jenkins machines? @shaneknapp
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18697
A different view to this problem is, in the following part of query plan:
:- *HashAggregate(keys=[parent#228], functions=[],
output=[level1#270]) hashpartitioning(parent#228, 5)
GitHub user 2ooom opened a pull request:
https://github.com/apache/spark/pull/18783
[SPARK-21254] [WebUI] History UI performance fixes
## What changes were proposed in this pull request?
As described in the JIRA ticket, the History page takes ~1 min to load in cases
when
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
Aha, I think it's a better idea to add `ResolvedOrdinal`.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18783
It would have to go into master first, so close the 2.1 one.
---
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/18763
@srowen Could you help trigger this job? Thanks.
---
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/18764
@srowen Could you help trigger this job? Thanks.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18784
**[Test build #80072 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80072/testReport)**
for PR 18784 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18780
@gatorsmile @tdas to rehash the logic here:
- JIRAs track 'what' needs to happen, and PRs track 'how' it could be
fixed. Closing a PR shouldn't necessarily mean an important issue is
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
@maropu I am not sure I understand your idea correctly. Those int
literals in group-by are intended to be wrapped in `UnresolvedOrdinal`, so they
can be replaced with aggregation expressions at
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18731#discussion_r130294946
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
---
@@ -347,13 +347,18 @@ class JacksonParser(
GitHub user skonto opened a pull request:
https://github.com/apache/spark/pull/18784
[SPARK-21559][Mesos] remove mesos fine-grained mode
## What changes were proposed in this pull request?
Removes mesos fine-grained mode. Specifically:
- Updates docs.
- Renames
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18785
CC @holdenk @dongjoon-hyun
I know the point is to not support 2.6, but maybe this is as good a
quick fix as anything?
---
If your project is set up for it, you can reply to this email
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/18785
[SPARK-21573][BUILD] Tests failing with run-tests.py SyntaxError
occasionally in Jenkins
## What changes were proposed in this pull request?
Avoid dict comprehension to make 2.6
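For reference, dict comprehensions are a SyntaxError on Python 2.6 (they were
added in 2.7); the usual 2.6-compatible rewrite, which a fix like this would
swap in, builds the same mapping with `dict()` over a generator expression:

```python
modules = ["core", "sql", "mllib"]

# Python 2.7+/3 only: dict comprehension (a SyntaxError on Python 2.6)
by_name = {m: len(m) for m in modules}

# Works on Python 2.6 as well: dict() over a generator expression
by_name_26 = dict((m, len(m)) for m in modules)

print(by_name == by_name_26)  # True
```

Both forms produce the identical dict; only the comprehension syntax is
version-gated.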
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18785
@srowen, I actually tried this approach before. To be sure, I just ran `python2.6
run-tests.py` against this PR:
```
Traceback (most recent call last):
File "run-tests.py", line 42, in
```
Github user 2ooom commented on the issue:
https://github.com/apache/spark/pull/18783
@srowen Done
---
Github user lukasbradley commented on the issue:
https://github.com/apache/spark/pull/18674
@susanxhuynh Thank you for your response. I'll keep you updated on what we
learn.
---
Github user icexelloss commented on the issue:
https://github.com/apache/spark/pull/18664
@ueshin I am +1 for fixing `df.collect()` and `df.toPandas()`; I don't
think it is much of a backward-compatibility issue because the current behavior
of `df.collect()` and `df.toPandas()` is
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18785
**[Test build #80073 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80073/testReport)**
for PR 18785 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18785
That could just mean the machine doesn't have 2.7 installed then. This may
really be an env issue.
---
Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/18784#discussion_r130383697
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala ---
@@ -784,42 +784,6 @@ class TaskSchedulerImplSuite extends
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/18784
Jenkins, retest this please
---
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/18784
@srowen I made a new commit but didn't get a new build...
---
Github user aray commented on a diff in the pull request:
https://github.com/apache/spark/pull/18697#discussion_r130396904
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -65,6 +65,10 @@ abstract class SparkPlan extends QueryPlan[SparkPlan]
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18697
**[Test build #80080 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80080/testReport)**
for PR 18697 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18784
**[Test build #80079 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80079/testReport)**
for PR 18784 at commit
Github user srowen closed the pull request at:
https://github.com/apache/spark/pull/18785
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18785
Just as a last note: I checked that, after fixing `lint-python`, it failed again
somewhere:
```
Running
```
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18785
Ugh.. this still fails:
```
Running Python style checks
```
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130341740
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -2350,13 +2354,12 @@ class SparkContext(config: SparkConf) extends
Logging {
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130345159
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -2379,8 +2382,13 @@ class SparkContext(config: SparkConf) extends
Logging {
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130359427
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -131,94 +132,35 @@ private[spark] class EventLoggingListener(
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130360348
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -227,6 +169,7 @@ private[spark] class EventLoggingListener(
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130376495
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -39,98 +39,107 @@ import org.apache.spark.util.Utils
* has
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130375161
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListenerBus.scala ---
@@ -27,7 +27,12 @@ private[spark] trait SparkListenerBus
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130341036
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -532,7 +533,10 @@ class SparkContext(config: SparkConf) extends Logging {
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130359204
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -131,94 +132,35 @@ private[spark] class EventLoggingListener(
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130390981
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/bus/ListenerBusQueueImpl.scala
---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/18784
Fixed the test.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17097
@meteorchenwu Correctness is more important for us. The cached plan will be
reused when we build any other plans; thus, users might see out-of-date
results.
To achieve what you
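The correctness concern can be illustrated with a toy cache in plain Python
(nothing Spark-specific; the class is hypothetical): a cached derived result
reused after its source data changes silently returns out-of-date answers
unless it is invalidated.

```python
class TableCache:
    """Toy illustration of why a cached result must be invalidated when
    its source changes; not Spark's actual plan cache."""
    def __init__(self, rows):
        self.rows = rows
        self._cached_count = None

    def count(self):
        # Compute once, then serve the cached value.
        if self._cached_count is None:
            self._cached_count = len(self.rows)
        return self._cached_count

    def insert(self, row, invalidate=True):
        self.rows.append(row)
        if invalidate:
            self._cached_count = None  # drop the now-stale cached result

t = TableCache([1, 2, 3])
print(t.count())              # 3
t.insert(4)                   # invalidates -> next count is correct
print(t.count())              # 4
t.insert(5, invalidate=False)
print(t.count())              # still 4: the stale result, the bug to avoid
```

The last call is the failure mode described above: the data changed but the
cached answer did not.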
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18645
**[Test build #80074 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80074/testReport)**
for PR 18645 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18764
**[Test build #3865 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3865/consoleFull)**
for PR 18764 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18785
Yeah go ahead. I pushed your change here anyway
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18785
Will double-check this one and merge it if no more changes are needed.
Otherwise, I will send a PR to your branch or open another PR. I will handle this in any
event.
---
If your project is set up for
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
Hi @gatorsmile, I added some UTs in CliSuite; please check!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18763
**[Test build #3866 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3866/testReport)**
for PR 18763 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18785
**[Test build #80075 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80075/testReport)**
for PR 18785 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r130336844
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1010,7 +1014,16 @@ class Analyzer(
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18785
@HyukjinKwon this one just tries to explicitly select 2.7. If it fails,
well, I suppose it would already fail the build anyway. If it successfully
picks 2.7 then it should resolve this I guess.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18784
**[Test build #80072 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80072/testReport)**
for PR 18784 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18785
**[Test build #80076 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80076/testReport)**
for PR 18785 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80077 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80077/testReport)**
for PR 18668 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18785
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18785
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80073/
Test PASSed.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18785
@srowen, IIRC, I guess this should be correct when we run this as an
executable. If I understood correctly, we run `run-tests` -
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80077 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80077/testReport)**
for PR 18668 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80077/
Test FAILed.
---
Github user mbasmanova commented on the issue:
https://github.com/apache/spark/pull/18421
@gatorsmile, I was on vacation and today is my first day back. I'm planning
to work through the feedback on this PR today.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18784
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80072/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18784
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18645
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18645
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80074/
Test FAILed.
---
Github user meteorchenwu commented on the issue:
https://github.com/apache/spark/pull/17097
What a bad change this is!
Now it can no longer support this scenario.
https://issues.apache.org/jira/browse/SPARK-21579
---