Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17811
LGTM, merging to master/2.2
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17540
The last commit fails a lot of tests...
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17829
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17829
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76371/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17829
**[Test build #76371 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76371/testReport)**
for PR 17829 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17827
**[Test build #76374 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76374/testReport)**
for PR 17827 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17825
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17825
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76373/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17825
**[Test build #76373 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76373/testReport)**
for PR 17825 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17828
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76372/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17828
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17828
**[Test build #76372 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76372/testReport)**
for PR 17828 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17827
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17827
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76370/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17827
**[Test build #76370 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76370/testReport)**
for PR 17827 at commit
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17825
That makes sense I guess. It would be great to have more control over the
layout though. One can dream, right? :)
Thank you so much for all the reviews and information.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17825
**[Test build #76373 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76373/testReport)**
for PR 17825 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17829
**[Test build #76371 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76371/testReport)**
for PR 17829 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17828
**[Test build #76372 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76372/testReport)**
for PR 17828 at commit
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/17715
@jkbradley Thanks for your comments. I sent #17829 to address them, please
feel free to review. Thanks.
---
Github user yanboliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17715#discussion_r114248399
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
---
@@ -178,11 +178,86 @@ private[classification] trait
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17818#discussion_r114248432
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -1656,6 +1656,18 @@ test_that("greatest() and least() on a DataFrame", {
GitHub user yanboliang opened a pull request:
https://github.com/apache/spark/pull/17829
[SPARK-20047][FOLLOWUP][ML] Constrained Logistic Regression follow up
## What changes were proposed in this pull request?
Address some minor comments for #17715:
* Put bound-constrained
GitHub user felixcheung opened a pull request:
https://github.com/apache/spark/pull/17828
[SPARK-20490][SPARKR][DOC] add family tag for not function
## What changes were proposed in this pull request?
doc only
## How was this patch tested?
manual
You can
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17818
> hmm, not clear why AppVeyor failed. you could trigger it again by closing
and re-opening this PR
without affecting Jenkins
Look I'll have to rebase it anyway but thank you so much for
Github user kevinyu98 commented on the issue:
https://github.com/apache/spark/pull/12646
retest please
---
Github user zero323 closed the pull request at:
https://github.com/apache/spark/pull/17818
---
GitHub user zero323 reopened a pull request:
https://github.com/apache/spark/pull/17818
[SPARK-20544] R wrapper for input_file_name
## What changes were proposed in this pull request?
Adds wrapper for `o.a.s.sql.functions.input_file_name`
## How was this patch
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17383
Can one of the admins verify this patch?
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17807
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17807
merged to master, thanks!
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17818#discussion_r114246581
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -1656,6 +1656,18 @@ test_that("greatest() and least() on a DataFrame", {
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17818#discussion_r114246336
--- Diff: R/pkg/R/functions.R ---
@@ -3890,3 +3890,23 @@ setMethod("not",
jc <- callJStatic("org.apache.spark.sql.functions", "not",
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17818
hmm, not clear why AppVeyor failed. you could trigger it again by closing
and re-opening this PR
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17825#discussion_r114245870
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3715,3 +3715,24 @@ setMethod("rollup",
sgd <- callJMethod(x@sdf, "rollup", jcol)
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17825#discussion_r114245853
--- Diff: R/pkg/R/column.R ---
@@ -132,16 +132,23 @@ createMethods()
#' alias
#'
-#' Set a new name for a column
+#' Set a new
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17825#discussion_r114245818
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -2253,6 +2253,15 @@ test_that("mutate(), transform(), rename() and
names()", {
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17825#discussion_r114245756
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -2253,6 +2253,15 @@ test_that("mutate(), transform(), rename() and
names()", {
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17825#discussion_r114245780
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3715,3 +3715,24 @@ setMethod("rollup",
sgd <- callJMethod(x@sdf, "rollup", jcol)
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17816
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17816
thanks
merged to master/2.2
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17827
**[Test build #76370 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76370/testReport)**
for PR 17827 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17827
cc @gatorsmile and @ptkool, could you take a look and see if it makes sense
please?
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17827#discussion_r114244380
--- Diff: python/pyspark/sql/column.py ---
@@ -224,7 +224,39 @@ def __init__(self, jc):
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17827#discussion_r114244470
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala ---
@@ -284,23 +287,6 @@ class ColumnExpressionSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17827#discussion_r114244526
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala ---
@@ -284,23 +287,6 @@ class ColumnExpressionSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17827#discussion_r114244337
--- Diff: python/pyspark/sql/column.py ---
@@ -224,7 +224,39 @@ def __init__(self, jc):
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17827
[SPARK-20552][SQL][PYTHON] Add isNotDistinctFrom/isDistinctFrom for column
APIs in Scala and Python
## What changes were proposed in this pull request?
This PR proposes to add both
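For context, `isNotDistinctFrom` mirrors SQL's `IS NOT DISTINCT FROM`, a null-safe equality in which two NULLs compare equal instead of yielding NULL. A minimal plain-Python sketch of that truth table (the semantics only, not the Spark API proposed in this PR):

```python
def is_not_distinct_from(a, b):
    # Null-safe equality: None is treated as a comparable value,
    # so "None IS NOT DISTINCT FROM None" yields True rather than unknown.
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return a == b

print(is_not_distinct_from(None, None))  # True
print(is_not_distinct_from(1, None))     # False
```

`isDistinctFrom` would simply be the negation of this predicate.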
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17746
@dbtsai Thanks for the explanation and the context :)
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76367/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #76367 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76367/testReport)**
for PR 17540 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17770
It is possible, as I think `resolveOperators` works the same as this
analysis barrier + `transformUp`. However, `resolveOperators` is widely used
now, so we may not have an urgent need to remove it.
Github user leonfl commented on the issue:
https://github.com/apache/spark/pull/16486
@mrjrdnthms, yes, your understanding is correct; in Scala it looks like this:
```
val rows: RDD[Row] = df.rdd.map(
  rowIn => {
    // handle rowIn and return a Row (the body here is illustrative)
    Row(rowIn.get(0))
  })
```
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76368/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #76368 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76368/testReport)**
for PR 17540 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17825
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76369/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17825
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17825
**[Test build #76369 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76369/testReport)**
for PR 17825 at commit
Github user facaiy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17556#discussion_r114239173
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tree/impl/RandomForest.scala ---
@@ -1009,10 +1009,17 @@ private[spark] object RandomForest extends
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17540#discussion_r114238664
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala ---
@@ -73,21 +99,35 @@ object SQLExecution {
}
Github user facaiy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17556#discussion_r114238091
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tree/impl/RandomForest.scala ---
@@ -1009,10 +1009,24 @@ private[spark] object RandomForest extends
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17824
Yes, `SparkStatusTracker` will keep working the same way it does today.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17723
It would be easier if you instead said which of the APIs proposed here you
have issues with. Is it the storing of credentials in UGI? Or what?
If that's what you're saying.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17826
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76366/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17826
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17826
**[Test build #76366 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76366/testReport)**
for PR 17826 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17824
Thanks @vanzin! `SparkStatusTracker` depends on `JobProgressListener`,
which is already deprecated; will you remove `JobProgressListener` and
rewrite `SparkStatusTracker`?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17825
**[Test build #76369 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76369/testReport)**
for PR 17825 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17824
I'm planning to remove these listeners completely in 2.3. (StorageStatus,
which is the theme of this particular PR, is not currently in my crosshairs, but
I may decide to clean that one up also.)
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17764
I filed [SPARK-20552](https://issues.apache.org/jira/browse/SPARK-20552)
for column APIs in both Python and Scala/Java. Do you mind if I work on this if
any of you, @gatorsmile and @ptkool, is
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/17540
Yes, that's reasonable. I was asking because I noticed `withNewExecutionId`
was added in the `hiveResultString` method, so it should have been fixed.
---
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/17746
The motivation to have this one merged in Spark 2.2 is not just for
#17715 but also because Breeze 0.13.x fixes many bugs upstream. Since Spark
was tied to 0.12, many users (including
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17824
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17824
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76364/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17824
**[Test build #76364 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76364/testReport)**
for PR 17824 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17824
@vanzin are we going to remove these listeners in the future, or just keep them
as deprecated? Some projects like Zeppelin explicitly depend on these
listeners not only for code simplicity, but also
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17540#discussion_r114235457
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala ---
@@ -73,21 +99,35 @@ object SQLExecution {
}
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #76368 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76368/testReport)**
for PR 17540 at commit
Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/17540#discussion_r114235126
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala ---
@@ -73,21 +99,35 @@ object SQLExecution {
}
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #76367 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76367/testReport)**
for PR 17540 at commit
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/17540
@carsonwang, the plan is: when we notice queries that don't appear in the
SQL tab, we add a call to `checkSQLExecutionId`, which will cause tests to
fail when that operation isn't wrapped by
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17540#discussion_r114234728
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala ---
@@ -73,21 +99,35 @@ object SQLExecution {
}
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
@vanzin, since branch-2.0 doesn't have this feature in the UI
(https://issues.apache.org/jira/browse/SPARK-11272), I don't think the fix is
required in branch-2.0.
---
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/17540
Hi @rdblue, just wanted to confirm that this also fixes #17535, so we
should get the SQL UI when executing queries in the Spark SQL CLI?
---
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/17540
Rebased. I'll check if tests pass later tonight.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17819#discussion_r114234373
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1883,6 +1883,56 @@ class Dataset[T] private[sql](
}
/**
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17540
yea I think it's ready to go, @rdblue can you bring it up to date?
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17770
after we have this, can we remove the `resolveOperators`?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17823
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76362/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17823
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17823
**[Test build #76362 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76362/testReport)**
for PR 17823 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17823
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17823
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76361/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17823
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17823
**[Test build #76361 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76361/testReport)**
for PR 17823 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17735
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76363/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17735
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17735
**[Test build #76363 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76363/testReport)**
for PR 17735 at commit