Github user kevinyu98 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r114703725
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -461,68 +462,270 @@ case class
Github user QQshu1 commented on the issue:
https://github.com/apache/spark/pull/16561
Hi, I have a question: why should we run Eliminate View in the first batch of the
optimizer?
Thank you. @jiangxb1987
---
If your project is set up for it, you can reply to this email and have your
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17851#discussion_r114702184
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3715,3 +3715,34 @@ setMethod("rollup",
sgd <- callJMethod(x@sdf, "rollup", jcol)
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17678#discussion_r114700907
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/ObjectAggregationIterator.scala
---
@@ -83,6 +85,7 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17814
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76439/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17814
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17814
**[Test build #76439 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76439/testReport)**
for PR 17814 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17856
**[Test build #76441 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76441/testReport)**
for PR 17856 at commit
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17856
[SPARK-20107][SQL] Replace the deprecated property name fs.default.name with the
newly introduced fs.defaultFS
## What changes were proposed in this pull request?
Replace the deprecated
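The replacement this PR title describes can be sketched in plain Python (an illustration only; Hadoop's real `Configuration` class maintains its own deprecation table, and the dictionary and helper names below are made up):

```python
# Illustrative sketch (not Hadoop's actual API): resolve a possibly-deprecated
# configuration key to its current name before looking up the value.
DEPRECATED_KEYS = {
    "fs.default.name": "fs.defaultFS",  # deprecated since Hadoop 2.x
}

def resolve_key(key):
    """Return the current property name for a possibly-deprecated key."""
    return DEPRECATED_KEYS.get(key, key)

conf = {"fs.defaultFS": "hdfs://namenode:8020"}

def get_conf(conf, key):
    """Look up a value, transparently mapping old key names to new ones."""
    return conf.get(resolve_key(key))
```

Writing code against `fs.defaultFS` directly, as the PR proposes, avoids relying on this kind of compatibility shim.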
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17825
you know - it would definitely be a better experience for the R user, so we
should try that - it might break with the generic in `stats::alias` though
and speaking of, we should
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17817
merged to master/2.2
---
Github user lins05 commented on the issue:
https://github.com/apache/spark/pull/17750
IMO we should not enable checkpointing in fine-grained mode, because with
checkpointing enabled, Mesos agents would persist all status updates to disk,
which means a significant I/O cost because fine-grained
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17855
**[Test build #76440 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76440/testReport)**
for PR 17855 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17855
cc @srowen, could you take a look please?
---
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17855
[INFRA] Close stale PRs
## What changes were proposed in this pull request?
This PR proposes to close a stale PR, several PRs suggested to be closed by
a committer and obviously
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17817
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17814
**[Test build #76439 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76439/testReport)**
for PR 17814 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/9
Do you guys mind if I propose to close this PR?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17854
Can one of the admins verify this patch?
---
GitHub user mariahualiu opened a pull request:
https://github.com/apache/spark/pull/17854
[SPARK-20564][Deploy] Reduce massive executor failures when executor count
is large (>2000)
## What changes were proposed in this pull request?
In applications that use over 2000
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17851#discussion_r114698221
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3715,3 +3715,34 @@ setMethod("rollup",
sgd <- callJMethod(x@sdf, "rollup", jcol)
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17851#discussion_r114698264
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3715,3 +3715,34 @@ setMethod("rollup",
sgd <- callJMethod(x@sdf, "rollup", jcol)
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17851#discussion_r114698385
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3715,3 +3715,34 @@ setMethod("rollup",
sgd <- callJMethod(x@sdf, "rollup", jcol)
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17851#discussion_r114698072
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3715,3 +3715,34 @@ setMethod("rollup",
sgd <- callJMethod(x@sdf, "rollup", jcol)
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17851#discussion_r114698515
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -2147,6 +2147,18 @@ test_that("join(), crossJoin() and merge() on a
DataFrame", {
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17851#discussion_r114698548
--- Diff: R/pkg/R/generics.R ---
@@ -572,6 +572,10 @@ setGeneric("first", function(x, ...) {
standardGeneric("first") })
#' @export
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17851#discussion_r114698295
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3715,3 +3715,34 @@ setMethod("rollup",
sgd <- callJMethod(x@sdf, "rollup", jcol)
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17853
@crjk21 it looks mistakenly open. Could you close this please?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17853
Can one of the admins verify this patch?
---
GitHub user crjk21 opened a pull request:
https://github.com/apache/spark/pull/17853
Branch 2.2
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch tested?
(Please explain how this patch
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17831
Thanks for making this PR with the details @gatorsmile it appears to be
orthogonal to this change. Historically we've treated Python API parity fixes
as closer to bug fixes rather than new features
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17836
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76438/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17836
Merged build finished. Test PASSed.
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r114696480
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/OneForOneBlockFetcher.java
---
@@ -100,7 +114,14 @@ public void
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17836
**[Test build #76438 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76438/testReport)**
for PR 17836 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17850
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17850
Merging in master/2.2.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17100
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76434/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17100
Merged build finished. Test PASSed.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17831
cc @viirya too who I believe is appropriate to review this.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17100
**[Test build #76434 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76434/testReport)**
for PR 17100 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17825
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76437/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17825
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17825
**[Test build #76437 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76437/testReport)**
for PR 17825 at commit
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17841
Where is the risk? High concurrency? Transaction processing? This is where
I am puzzled.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17836
**[Test build #76438 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76438/testReport)**
for PR 17836 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17723
To ask a more direct question:
The only public interface being added in this change is
`ServiceCredentialProvider`. It's an interface that service-specific libraries
(e.g. a Solr connector,
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17831
It sounds orthogonal to me as well. LGTM.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76431/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #76431 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76431/testReport)**
for PR 17540 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17841
I think @srowen already clarified it very clearly: you can use it at your own
risk, but making it public and adding it to the docs should be well considered.
---
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17825#discussion_r114687096
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3715,3 +3715,25 @@ setMethod("rollup",
sgd <- callJMethod(x@sdf, "rollup", jcol)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17851
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76436/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17851
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17851
**[Test build #76436 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76436/testReport)**
for PR 17851 at commit
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17841
@jerryshao
I said that the use case is real; do you agree?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17825
**[Test build #76437 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76437/testReport)**
for PR 17825 at commit
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17825
I wonder if it would make more sense to make `alias` generic for both
`object` and `data`:
signature(object = "SparkDataFrame", data = "character")
and skip the type checks.
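R's S4 generics dispatch on the classes of their arguments, which is what the `signature(...)` above expresses. As a rough analogy only (plain Python, not SparkR, with made-up stand-in types), single-argument dispatch can replace explicit type checks like this:

```python
from functools import singledispatch

# Rough analogy to an S4 generic: behaviour is chosen by the argument's type,
# so explicit type checks inside the function body become unnecessary.
@singledispatch
def alias(data):
    raise NotImplementedError(f"no alias() method for {type(data).__name__}")

@alias.register
def _(data: str):
    # Dispatch target for a column-name-like argument.
    return f"alias for column-like {data!r}"

@alias.register
def _(data: dict):
    # Stand-in dispatch target for a DataFrame-like object.
    return f"alias for frame with columns {sorted(data)}"
```

The design point is the same as in the R comment: pushing the type decision into the dispatch mechanism keeps each method body free of `if`-ladders over argument types.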
GitHub user jyu00 opened a pull request:
https://github.com/apache/spark/pull/17852
[SPARK-20546][Deploy] spark-class gets syntax error in posix mode
## What changes were proposed in this pull request?
Updated spark-class to turn off posix mode so the process substitution
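The symptom being fixed can be reproduced outside Spark (assuming `bash` is on the PATH; this is an illustration, not the spark-class change itself): process substitution `<(...)` is a bash extension that POSIX mode historically rejects with a syntax error, which is why the fix turns POSIX mode off.

```python
import subprocess

# Process substitution works in a normal bash shell...
ok = subprocess.run(["bash", "-c", "cat <(echo hi)"],
                    capture_output=True, text=True)

# ...but bash's POSIX mode has historically rejected it with a syntax error,
# which is the failure spark-class hit (exact behaviour can vary by version).
posix = subprocess.run(["bash", "--posix", "-c", "cat <(echo hi)"],
                       capture_output=True, text=True)

print("normal bash:", ok.returncode, "posix bash:", posix.returncode)
```

In the affected script, the equivalent of `set +o posix` before the process substitution restores the non-POSIX behaviour.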
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17852
Can one of the admins verify this patch?
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17847#discussion_r114683203
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -523,7 +530,8 @@ object DataSource {
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17831
I think there is no conflict between #17848 and this. As of 2.2 we no
longer return `UserDefinedFunction` from `udf` (and we never documented
`UserDefinedFunctions`) so changes will have to be
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17847#discussion_r114683317
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -483,35 +483,42 @@ case class DataSource(
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17723
> a) we have explicitly based our support on it
What does this mean?
---
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/17540
I'm not an expert on the metrics path, but I think we should be able to
join up the actual physical plans well enough to display everything. I doubt it
will be a long-term regression, but I don't
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17851
**[Test build #76436 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76436/testReport)**
for PR 17851 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17723
> This was my point - we should not introduce system specific api's into
spark core infrastructure api's/spi's
Sorry, I still have no idea what your point is. How do you suggest we
support
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
I was not saying there is no way to fix metrics. Just asking your thoughts.
If we don't have a concrete plan, it might be a long-term regression if just
merging this PR.
I just want to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17851
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17851
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76435/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17851
**[Test build #76435 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76435/testReport)**
for PR 17851 at commit
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17723
@vanzin wrote:
> So, this is purely about handling Hadoop authentication for Hadoop
services.
This was my point - we should not introduce system specific api's into
spark core
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17851
**[Test build #76435 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76435/testReport)**
for PR 17851 at commit
GitHub user zero323 opened a pull request:
https://github.com/apache/spark/pull/17851
[SPARK-20585][SPARKR] R generic hint support
## What changes were proposed in this pull request?
Adds support for generic hints on `SparkDataFrame`
## How was this patch tested?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17847
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76427/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17847
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17847
**[Test build #76427 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76427/testReport)**
for PR 17847 at commit
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/17540
@zsxwing, you don't think there's a way to fix metrics? I don't know
exactly how to fix the UI to show two plans worth of metrics, but it seems like
it can be done. What about also updating
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17793
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76430/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17793
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17793
**[Test build #76430 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76430/testReport)**
for PR 17793 at commit
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17848
Disabling optimizations aside, to what extent can we actually support
nondeterministic functions? Right now a common user mistake is to run a RNG
inside a UDF. `nonDeterministic` could suggest it is
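The mistake described can be illustrated without Spark (a plain Python sketch; `rng_udf` is a made-up name): because an engine is free to re-evaluate an expression (task retries, duplicated plan subtrees), a UDF that draws from a RNG can yield different values for the "same" row.

```python
import random

def rng_udf(x):
    # Looks harmless, but each evaluation draws a fresh random number, so the
    # same input row can produce different values if the engine re-runs the
    # UDF, e.g. on a task retry or when the optimizer duplicates the plan.
    return x + random.random()

first = rng_udf(1)
second = rng_udf(1)  # a re-evaluation of the same input
```

Marking such a function nondeterministic tells the optimizer not to assume the two evaluations agree, but it cannot make the results themselves reproducible.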
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17850
**[Test build #76433 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76433/testReport)**
for PR 17850 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17850
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76433/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17850
Merged build finished. Test PASSed.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
> @zsxwing, I don't know. Sounds like we should fix the underlying problem
that there are 2 physical plans.
SQL metrics won't work without fixing it. IMO, that's more serious than the
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/17540
@zsxwing, I don't know. Sounds like we should fix the underlying problem
that there are 2 physical plans.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17831
Yea, thanks for chiming in. It helped me a lot to understand the context.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
> That requires breaking the command into two phases, one to get a
SparkPlan and one to run it.
Yeah, but how to show metrics you get from a plan on another plan's DAG
considering these
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17831
Thanks @gatorsmile
---
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/17540
@zsxwing, there should be a fix for the metrics without waiting for all of
the bad plans to be fixed (which is to basically eliminate the use of
`ExecutedCommandExec`).
The metrics are
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/17346
thank you @zsxwing
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
@rdblue I just tested this PR and found that I could not see any SQL
metrics on Web UI. This is pretty important for many users to analyze their
queries.
What's your plan to fix it? As far
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17834
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17834
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76426/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17834
**[Test build #76426 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76426/testReport)**
for PR 17834 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17100
**[Test build #76434 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76434/testReport)**
for PR 17100 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17850
**[Test build #76433 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76433/testReport)**
for PR 17850 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17850
Merged build finished. Test FAILed.
---