Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/17736
LGTM. Thanks. @cloud-fan @rxin this fixes our production jobs when we port
our applications from 1.6 to 2.0. I think it's an important bug fix. Thanks.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17736
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76091/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17736
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17736
**[Test build #76091 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76091/testReport)**
for PR 17736 at commit
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/17556
I scanned the split criteria of sklearn and xgboost.
1. sklearn
counts all continuous values and splits at the mean value.
commit 5147fd09c6a063188efde444f47bd006fa5f95f0
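The scheme described in point 1 can be sketched in a few lines of plain Python. This is a simplified illustration of the idea only, not sklearn's actual implementation; "splits at the mean value" is read here as the mean of each pair of adjacent sorted values:

```python
def continuous_split_candidates(values):
    """Candidate thresholds for one continuous feature: the mean of each
    pair of adjacent distinct sorted values (illustrative sketch of the
    scheme described above, not sklearn's code)."""
    xs = sorted(set(values))
    return [(lo + hi) / 2.0 for lo, hi in zip(xs, xs[1:])]

# continuous_split_candidates([3.0, 1.0, 2.0, 3.0]) -> [1.5, 2.5]
```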
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17737
**[Test build #76095 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76095/testReport)**
for PR 17737 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17738
Can one of the admins verify this patch?
---
GitHub user unsleepy22 opened a pull request:
https://github.com/apache/spark/pull/17738
[SPARK-20422][Spark Core] Worker registration retries should be configurable
## What changes were proposed in this pull request?
Make the number of prolonged registration retries configurable.
##
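The proposal can be illustrated with a minimal, self-contained sketch. The config key `spark.worker.registrationRetries` and the default of 16 below are hypothetical stand-ins, not necessarily what the PR actually introduces:

```python
def register_with_retries(attempt_register, conf, default_retries=16):
    """Try worker registration up to a configurable number of times.
    The key name below is a hypothetical placeholder, not necessarily
    the property the PR adds."""
    retries = int(conf.get("spark.worker.registrationRetries", default_retries))
    for attempt in range(1, retries + 1):
        if attempt_register():
            return attempt  # number of attempts it took to register
    raise RuntimeError("worker registration failed after %d attempts" % retries)
```

The point of the change is precisely that the hardcoded bound becomes a value read from the configuration, as the `conf.get` lookup above shows.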
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17737
cc @srowen, @holdenk, @felixcheung, @map222 and @zero323 who were in
related PRs.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861995
--- Diff: python/pyspark/sql/column.py ---
@@ -337,26 +381,39 @@ def isin(self, *cols):
return Column(jc)
# order
-
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861905
--- Diff: python/pyspark/sql/column.py ---
@@ -185,17 +185,52 @@ def __contains__(self, item):
"in a string column or
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861876
--- Diff: python/pyspark/sql/column.py ---
@@ -185,17 +185,52 @@ def __contains__(self, item):
"in a string column or
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17736
**[Test build #76094 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76094/testReport)**
for PR 17736 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861887
--- Diff: python/pyspark/sql/column.py ---
@@ -185,17 +185,52 @@ def __contains__(self, item):
"in a string column or
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112860925
--- Diff: python/pyspark/sql/column.py ---
@@ -185,17 +185,52 @@ def __contains__(self, item):
"in a string column or
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861613
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/bitwiseExpressions.scala
---
@@ -86,7 +86,7 @@ case class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112860980
--- Diff: python/pyspark/sql/column.py ---
@@ -251,15 +286,16 @@ def __iter__(self):
# string methods
_rlike_doc = """
-
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861744
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Column.scala ---
@@ -1008,7 +1009,7 @@ class Column(val expr: Expression) extends Logging {
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861531
--- Diff: python/pyspark/sql/column.py ---
@@ -337,26 +381,39 @@ def isin(self, *cols):
return Column(jc)
# order
-
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861035
--- Diff: python/pyspark/sql/column.py ---
@@ -288,8 +324,16 @@ def __iter__(self):
>>> df.filter(df.name.endswith('ice$')).collect()
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861566
--- Diff: python/pyspark/sql/column.py ---
@@ -527,7 +584,7 @@ def _test():
.appName("sql.column tests")\
.getOrCreate()
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861532
--- Diff: python/pyspark/sql/column.py ---
@@ -337,26 +381,39 @@ def isin(self, *cols):
return Column(jc)
# order
-
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112861399
--- Diff: python/pyspark/sql/column.py ---
@@ -269,17 +305,17 @@ def __iter__(self):
[Row(age=2, name=u'Alice')]
"""
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112860981
--- Diff: python/pyspark/sql/column.py ---
@@ -269,17 +305,17 @@ def __iter__(self):
[Row(age=2, name=u'Alice')]
"""
--- End diff
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112860906
--- Diff: python/pyspark/sql/column.py ---
@@ -185,17 +185,52 @@ def __contains__(self, item):
"in a string column or
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17737#discussion_r112860927
--- Diff: python/pyspark/sql/column.py ---
@@ -185,17 +185,52 @@ def __contains__(self, item):
"in a string column or
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17737
**[Test build #76093 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76093/testReport)**
for PR 17737 at commit
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/17649
@gatorsmile Hive treats the comment simply as a key in the string-string
parameter map, while Spark extracts the comment from the map as a field in
`CatalogTable`. So the question is, should Spark consider
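The distinction being discussed can be sketched with dict-based stand-ins for Hive's parameter map and Spark's `CatalogTable`; the field layout here is illustrative, not Spark's actual class:

```python
# Hive: the comment is just one more key in a string-to-string map.
hive_params = {"comment": "fact table", "transient_lastDdlTime": "1493000000"}

def to_catalog_table(params):
    """Lift the comment out of the raw map into a typed field, leaving
    the remaining keys as opaque properties (illustrative sketch)."""
    params = dict(params)                # don't mutate the caller's map
    comment = params.pop("comment", None)
    return {"comment": comment, "properties": params}

# to_catalog_table(hive_params)["comment"] -> "fact table"
```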
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17737
**[Test build #76092 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76092/testReport)**
for PR 17737 at commit
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17737
[SPARK-20442][PYTHON][DOCS] Fill up documentations for functions in Column
API in PySpark
## What changes were proposed in this pull request?
This PR proposes to fill up the
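pyspark builds many `Column` operators from small factory helpers and attaches module-level doc strings (such as `_rlike_doc`, visible in the diffs above) to them, which is what leaves gaps when those strings are missing. A self-contained sketch of that pattern follows; it is simplified and uses a tuple as a stand-in for the real py4j call, so it is not the actual `pyspark/sql/column.py` source:

```python
def _bin_op(name, doc="binary operator"):
    """Return a method forwarding to a JVM Column method `name`,
    carrying `doc` as its docstring (simplified sketch)."""
    def _(self, other):
        return ("jvm_call", name, other)  # placeholder for the real py4j bridge
    _.__doc__ = doc
    _.__name__ = name
    return _

_rlike_doc = """Return a Boolean Column based on a regex match."""

class Column(object):
    rlike = _bin_op("rlike", _rlike_doc)
```

Filling up the documentation then amounts to supplying a complete doc string for each such operator instead of the generic default.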
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17480
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76089/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17480
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17480
**[Test build #76089 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76089/testReport)**
for PR 17480 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15125
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76088/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15125
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15125
**[Test build #76088 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76088/testReport)**
for PR 15125 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17736
Let's see if it breaks any existing tests.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17736
**[Test build #76091 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76091/testReport)**
for PR 17736 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17736
cc @dbtsai @hvanhovell
---
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/17736
[SPARK-20399][SQL][WIP] Can't use same regex pattern between 1.6 and 2.x
due to unescaped sql string in parser
## What changes were proposed in this pull request?
The new SQL parser is
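The 1.6-vs-2.x behavior difference comes down to whether the parser unescapes the SQL string literal before it reaches the regex engine. A plain-Python model of the two readings of the same literal (simplified; not Spark's parser code):

```python
import re

# The four characters the user types inside the SQL quotes: \\d+
as_written = r"\\d+"
# A parser that unescapes the literal collapses \\ into \, yielding \d+
unescaped = r"\d+"

assert re.fullmatch(unescaped, "123")       # \d+  -> matches digits
assert re.fullmatch(as_written, r"\d")      # \\d+ -> a literal backslash then "d"
assert not re.fullmatch(as_written, "123")  # same text, no longer matches digits
```

The same four typed characters thus match entirely different inputs depending on the unescaping step, which is why a pattern that worked in one version breaks in the other.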
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17708
I have the same question Reynold asked on the mailing list. Doesn't
common subexpression elimination already address this issue?
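Common subexpression elimination (CSE), referenced above, means evaluating a repeated subexpression once and reusing the result. A toy illustration of the optimization being asked about (not Spark's codegen machinery):

```python
calls = {"n": 0}

def expensive(x):
    calls["n"] += 1   # count evaluations to make the difference visible
    return x * x

def without_cse(x):
    return expensive(x) + expensive(x)  # subexpression evaluated twice

def with_cse(x):
    e = expensive(x)                    # evaluated once, then reused
    return e + e
```

Both functions return the same value, but the CSE form halves the number of evaluations of the shared subexpression.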
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17540
This PR will change the Spark UI.
For a simple query `Seq(1 -> "a").toDF("i", "j").write.parquet("/tmp/a")`,
previously the SQL tab of Spark UI will show
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17733
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76087/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17733
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17733
**[Test build #76087 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76087/testReport)**
for PR 17733 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17623#discussion_r112856025
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeExtractors.scala
---
@@ -111,6 +111,11 @@ case class
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17623#discussion_r112854488
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeExtractors.scala
---
@@ -111,6 +111,11 @@ case class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17728
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17728
**[Test build #76090 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76090/testReport)**
for PR 17728 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17728
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76090/
Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17730#discussion_r112854341
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -197,7 +211,11 @@ class CatalogImpl(sparkSession: SparkSession)
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112853834
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -1546,6 +1546,40 @@ test_that("string operators", {
expect_equal(collect(select(df3,
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112853719
--- Diff: R/pkg/R/functions.R ---
@@ -3745,3 +3745,55 @@ setMethod("collect_set",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112853686
--- Diff: R/pkg/R/functions.R ---
@@ -3745,3 +3745,55 @@ setMethod("collect_set",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112853256
--- Diff: R/pkg/R/functions.R ---
@@ -3745,3 +3745,55 @@ setMethod("collect_set",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17728
**[Test build #76090 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76090/testReport)**
for PR 17728 at commit
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/17463
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17480
**[Test build #76089 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76089/testReport)**
for PR 17480 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15125
**[Test build #76088 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76088/testReport)**
for PR 15125 at commit
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112849184
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -135,7 +139,8 @@ class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17733
**[Test build #76087 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76087/testReport)**
for PR 17733 at commit
Github user yssharma commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112848855
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -135,7 +139,8 @@ class
Github user yssharma commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112848762
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -135,7 +139,8 @@ class
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112848595
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -135,7 +139,8 @@ class
Github user yssharma commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112848401
--- Diff:
external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDDSuite.scala
---
@@ -101,6 +103,36 @@ abstract class
Github user yssharma commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112848373
--- Diff:
external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDDSuite.scala
---
@@ -101,6 +103,36 @@ abstract class
Github user yssharma commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112848363
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -135,7 +139,8 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17695
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17695
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76086/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17695
**[Test build #76086 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76086/testReport)**
for PR 17695 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17695
**[Test build #76086 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76086/testReport)**
for PR 17695 at commit
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/17695
Thanks for the info @srowen - this should be better now.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112845286
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint",
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112844527
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -308,6 +308,21 @@ numCyl <- summarize(groupBy(carsDF, carsDF$cyl), count
= n(carsDF$cyl))
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112844471
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint", as.logical(eager))
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112844452
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint", as.logical(eager))
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17649
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76085/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17649
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17649
**[Test build #76085 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76085/testReport)**
for PR 17649 at commit
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112841794
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -135,7 +139,8 @@ class
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112841862
--- Diff:
external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDDSuite.scala
---
@@ -101,6 +103,36 @@ abstract class
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112841727
--- Diff: docs/streaming-kinesis-integration.md ---
@@ -216,3 +216,7 @@ de-aggregate records during consumption.
- If no Kinesis checkpoint info exists
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112841869
--- Diff:
external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDDSuite.scala
---
@@ -101,6 +103,36 @@ abstract class
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112841668
--- Diff: docs/streaming-kinesis-integration.md ---
@@ -216,3 +216,7 @@ de-aggregate records during consumption.
- If no Kinesis checkpoint info exists
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17649
**[Test build #76085 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76085/testReport)**
for PR 17649 at commit
Github user sujith71955 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17649#discussion_r112840921
--- Diff:
sql/core/src/test/resources/sql-tests/inputs/describe-table-after-alter-table.sql
---
@@ -0,0 +1,29 @@
+CREATE TABLE table_with_comment
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17712
Why use a map? That's super unstructured and easy to break ...
---
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112839963
--- Diff: docs/streaming-kinesis-integration.md ---
@@ -216,3 +216,7 @@ de-aggregate records during consumption.
- If no Kinesis checkpoint info exists
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/17640
@felixcheung I just came back from vacation. I will make changes now.
Thanks!
---
Github user vijoshi commented on the issue:
https://github.com/apache/spark/pull/17731
Thanks, I tried this out - looks like doing a `rm(".sparkRsession",
envir=SparkR:::.sparkREnv)` is a way to prevent the infinite loop situation. If
I need to setup an active binding for
Github user sujith71955 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17649#discussion_r112838259
--- Diff:
sql/core/src/test/resources/sql-tests/results/describe-table-after-alter-table.sql.out
---
@@ -0,0 +1,162 @@
+-- Automatically generated
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17733
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76084/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17733
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17733
**[Test build #76084 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76084/testReport)**
for PR 17733 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17733
**[Test build #76084 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76084/testReport)**
for PR 17733 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17734
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17734
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76082/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17734
**[Test build #76082 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76082/testReport)**
for PR 17734 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17733
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76083/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17733
Merged build finished. Test FAILed.
---