Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82332285
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -17,47 +17,130 @@
package
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15375#discussion_r82331701
--- Diff: R/pkg/R/context.R ---
@@ -123,19 +126,48 @@ parallelize <- function(sc, coll, numSlices = 1) {
if (numSlices > length(coll))
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15246
Hi @srowen, all tests passed this time. Could you please review this PR
again? Thanks.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/15375
Odd, this is the error from AppVeyor:
```
ontext: Fail to set Spark caller context
java.lang.ClassNotFoundException: org.apache.hadoop.ipc.CallerContext
at
```
Github user koertkuipers commented on the issue:
https://github.com/apache/spark/pull/15382
I think the working dir makes more sense than the home dir. But could this
catch people by surprise, since we now expect write permission in the working dir?
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15375#discussion_r82331194
--- Diff: R/pkg/R/context.R ---
@@ -126,13 +126,13 @@ parallelize <- function(sc, coll, numSlices = 1) {
if (numSlices > length(coll))
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82330973
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -17,47 +17,130 @@
package
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/13690#discussion_r82330771
--- Diff: R/pkg/inst/tests/testthat/test_mllib.R ---
@@ -791,4 +791,59 @@ test_that("spark.kstest", {
expect_match(capture.output(stats)[1],
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15246
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15246
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66482/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15246
**[Test build #66482 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66482/consoleFull)**
for PR 15246 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15361
@kxepal Sure, thanks for confirming!
---
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15351
Merging to 2.0. @dongjoon-hyun can you close this?
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/13690#discussion_r82330422
--- Diff: R/pkg/R/mllib.R ---
@@ -117,7 +132,7 @@ NULL
#' @export
#' @seealso \link{spark.glm}, \link{glm},
#' @seealso \link{spark.als},
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15351
@dongjoon-hyun it LGTM. It is just a rather big patch to backport, for
something that is not a bug fix. But I'll merge it.
---
Github user kxepal commented on the issue:
https://github.com/apache/spark/pull/15361
@HyukjinKwon
Oh, great news! It seems I backported this patch to 2.0.0
incorrectly. Sorry for the false alarm then; unexpectedly, I wasn't able to test
it with master.
I'll do one
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/13690
could you fix the test failure?
```
Duplicated \argument entries in documentation object 'spark.decisionTree':
'newData' '...' 'object' '...' 'x'
```
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15388
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66481/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15389
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15388
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15389
**[Test build #66485 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66485/consoleFull)**
for PR 15389 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82329856
--- Diff: docs/sql-programming-guide.md ---
@@ -1014,16 +1014,31 @@ bin/spark-shell --driver-class-path
postgresql-9.4.1207.jar --jars postgresql-9.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15389
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66485/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15388
**[Test build #66481 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66481/consoleFull)**
for PR 15388 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15389
**[Test build #66485 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66485/consoleFull)**
for PR 15389 at commit
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/15389
[SPARK-17817][PySpark] PySpark RDD Repartitioning Results in Highly Skewed
Partition Sizes
## What changes were proposed in this pull request?
Quoted from JIRA description:
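The skew in the title can be reproduced with a small round-robin simulation. This is a hypothetical sketch (it assumes `repartition`'s round-robin redistribution of records; the function and names here are not Spark's code):

```python
import random

# Hypothetical simulation, not Spark's implementation: round-robin
# redistribution of records from several input partitions into
# num_partitions output partitions. If every input starts its round-robin
# at output partition 0, many small inputs pile their records onto the
# low-numbered outputs; a random per-input start offset spreads the load.
def simulate(input_sizes, num_partitions, random_start):
    counts = [0] * num_partitions
    rng = random.Random(42)  # fixed seed so the run is reproducible
    for size in input_sizes:
        pos = rng.randrange(num_partitions) if random_start else 0
        for _ in range(size):
            counts[pos % num_partitions] += 1
            pos += 1
    return counts
```

With 8 single-record inputs and 4 output partitions, the fixed start sends all 8 records to partition 0, while a random per-input start tends to spread them out.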
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82329644
--- Diff: docs/sql-programming-guide.md ---
@@ -1014,16 +1014,31 @@ bin/spark-shell --driver-class-path
postgresql-9.4.1207.jar --jars postgresql-9.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82329571
--- Diff: docs/sql-programming-guide.md ---
@@ -1014,16 +1014,31 @@ bin/spark-shell --driver-class-path
postgresql-9.4.1207.jar --jars postgresql-9.
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/15218
Btw, taking a step back, I am not sure this will work as you expect it to.
Other than a few tasksets - those without locality information - the
schedule is going to be highly biased
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82329341
--- Diff: docs/sql-programming-guide.md ---
@@ -1014,16 +1014,31 @@ bin/spark-shell --driver-class-path
postgresql-9.4.1207.jar --jars postgresql-9.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15364
---
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/15364
Merged into master, thanks!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15375
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15375
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66472/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15375
**[Test build #66472 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66472/consoleFull)**
for PR 15375 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15381#discussion_r82326116
--- Diff:
sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java
---
@@ -90,8 +95,21 @@ public void run() {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15354
**[Test build #66484 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66484/consoleFull)**
for PR 15354 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14531
@sitalkedia Yeah, I saw it. Thank you for the investigation. Normally, we do
not want to add many configuration flags; it hurts usability. Let @rxin
decide whether we should add another
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/15367
No, if we backport this I would plan to continue to backport changes (that
are safe) until the next release. Either way this should not affect what goes
into master.
---
Github user zsxwing closed the pull request at:
https://github.com/apache/spark/pull/15385
---
Github user koeninger commented on the issue:
https://github.com/apache/spark/pull/15367
Does backporting reduce the likelihood of change if user feedback indicates
we got it wrong?
My technical concerns were largely addressed; that's my big remaining
organizational concern.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15307
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66483/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15385
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15307
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15307
**[Test build #66483 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66483/consoleFull)**
for PR 15307 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15385
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66470/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15385
**[Test build #66470 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66470/consoleFull)**
for PR 15385 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15366
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66480/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15366
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15366
**[Test build #66480 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66480/consoleFull)**
for PR 15366 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15387
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15387
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66479/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15387
**[Test build #66479 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66479/consoleFull)**
for PR 15387 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15387
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15387
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66477/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15387
**[Test build #66477 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66477/consoleFull)**
for PR 15387 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15307
**[Test build #66483 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66483/consoleFull)**
for PR 15307 at commit
Github user squito commented on the issue:
https://github.com/apache/spark/pull/15249
@mridulm we had considered that approach earlier on as well -- I don't
think it works because you can also have resources which are not totally
broken, but are flaky for a long period of time.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15246
**[Test build #66482 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66482/consoleFull)**
for PR 15246 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15307
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66478/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15307
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15307
**[Test build #66478 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66478/consoleFull)**
for PR 15307 at commit
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/15367
@marmbrus @zsxwing I agree it's experimental and we should have more
flexibility here with backports. I also very much agree that structured
streaming in its current state on 2.0 isn't usable - but
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/15332
LGTM. see if @davies @liancheng have other comments about this.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15388
**[Test build #66481 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66481/consoleFull)**
for PR 15388 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/15388
cc @hvanhovell @cloud-fan
---
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/15388
[SPARK-17821][SQL] Support And and Or in Expression Canonicalize
## What changes were proposed in this pull request?
Currently the `Canonicalize` object doesn't support `And` and `Or`. So we
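The canonicalization the PR title describes can be sketched as follows. This is a hypothetical illustration, not Spark's actual `Canonicalize` implementation (Spark's expressions are binary Scala trees; here an n-ary toy tree is used):

```python
# And/Or are commutative and associative, so semantically equal predicates
# should reduce to one canonical form: flatten each And/Or chain, sort the
# operands by a stable key, and rebuild the node.
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class Attr:
    name: str


@dataclass(frozen=True)
class And:
    children: Tuple


@dataclass(frozen=True)
class Or:
    children: Tuple


def flatten(expr, op):
    """Collect all operands of a chain of the same operator."""
    if isinstance(expr, op):
        return [e for c in expr.children for e in flatten(c, op)]
    return [expr]


def canonicalize(expr):
    if isinstance(expr, (And, Or)):
        op = type(expr)
        ops = sorted((canonicalize(c) for c in flatten(expr, op)), key=repr)
        return op(tuple(ops))
    return expr
```

With this, `canonicalize(And((b, a)))` equals `canonicalize(And((a, b)))`, so an optimizer can recognize the two predicates as the same expression.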
Github user dhruve commented on the issue:
https://github.com/apache/spark/pull/15370
If we assume file names of the form "part-[0-9]+"
* Case 1: *Entire RDD* => Verification of file names while reconstructing
would be satisfied, as we read all the checkpointed part files.
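The file-name check described above could look like this (the helper name is an assumption for illustration, not Spark's actual code):

```python
import re

# Match a checkpoint part-file name against the "part-[0-9]+" pattern;
# fullmatch requires the whole name to match, not just a prefix.
def is_checkpoint_part_file(name: str) -> bool:
    return re.fullmatch(r"part-[0-9]+", name) is not None
```

Names like `part-00000` match, while metadata files such as `_SUCCESS` are skipped.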
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/15354#discussion_r82322230
--- Diff: python/pyspark/sql/functions.py ---
@@ -1729,6 +1729,29 @@ def from_json(col, schema, options={}):
return Column(jc)
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/15379#discussion_r82321597
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -289,8 +289,8 @@ def text(self, paths):
[Row(value=u'hello'), Row(value=u'this')]
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15246#discussion_r82321891
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -66,13 +67,14 @@ class SQLQuerySuite extends QueryTest
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/15386
Thanks for working on this - the pylint script found a style problem (PEP8
checks failed: `./python/pyspark/sql/tests.py:1709:54: E231 missing whitespace after ','`) -
if you want to test the
Github user zhzhan commented on a diff in the pull request:
https://github.com/apache/spark/pull/15218#discussion_r82321008
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskAssigner.scala
---
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15375
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15375
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66467/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15375
**[Test build #66467 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66467/consoleFull)**
for PR 15375 at commit
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/15365
@felixcheung I fixed the CRAN errors. It is ready for review now. Thanks!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11601
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11601
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66476/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11601
**[Test build #66476 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66476/consoleFull)**
for PR 11601 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15329
Hi @yhuai and @cloud-fan, I recall that the changes here were reviewed by you
both. Would you mind reviewing this please?
---
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15246#discussion_r82317624
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -17,6 +17,7 @@
package
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15246#discussion_r82317563
--- Diff: core/src/test/scala/org/apache/spark/SparkFunSuite.scala ---
@@ -41,6 +43,15 @@ abstract class SparkFunSuite
}
}
+ //
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/15249
Thinking more, and based on what @squito mentioned, I was considering the
following: since we are primarily dealing with executors or nodes which are 'bad' as
opposed to recoverable
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15361
Hi @kxepal, I just tested (copied and pasted) the code below:
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._
```
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15366
**[Test build #66480 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66480/consoleFull)**
for PR 15366 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15387
**[Test build #66479 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66479/consoleFull)**
for PR 15387 at commit
Github user koeninger commented on the issue:
https://github.com/apache/spark/pull/15355
@zsxwing good eye, thanks. It's not that auto.offset.reset.earliest
doesn't work; it's that there's a potential race condition where poll gets
called twice slowly enough for the consumer position to
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/15366
Jenkins, retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15387
**[Test build #66477 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66477/consoleFull)**
for PR 15387 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15307
**[Test build #66478 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66478/consoleFull)**
for PR 15307 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15379
+1 for this PR.
---
GitHub user koeninger opened a pull request:
https://github.com/apache/spark/pull/15387
[SPARK-17782][STREAMING][KAFKA] eliminate race condition of poll twice
## What changes were proposed in this pull request?
Kafka consumers can't subscribe or maintain heartbeat without
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15366
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66469/
Test FAILed.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15379#discussion_r82315908
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -289,8 +289,8 @@ def text(self, paths):
[Row(value=u'hello'), Row(value=u'this')]
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15366
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15366
**[Test build #66469 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66469/consoleFull)**
for PR 15366 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15307
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15307
**[Test build #66475 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66475/consoleFull)**
for PR 15307 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15307
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66475/
Test FAILed.
---