Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19160
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17819
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82016/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17819
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17819
**[Test build #82016 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82016/testReport)**
for PR 17819 at commit
Github user logannc commented on the issue:
https://github.com/apache/spark/pull/18945
Hm. Where would I add tests?
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19160
Thanks all for your review, let me merge to master.
---
Github user logannc commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140153330
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message,
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19257
Maybe we need to rethink the planning phase for adding shuffles. How
about we add a placeholder for the shuffle node and then replace the placeholder
with the actual shuffle node in
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/19290
I think this is great to have, thanks for solving the mystery.
- 5 min: this is mildly concerning; is it possible this is caused by new
checks in lintr? Perhaps we could exclude them or
Github user bikassaha commented on a diff in the pull request:
https://github.com/apache/spark/pull/19294#discussion_r140151855
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
---
@@ -130,17 +130,21 @@ class
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19290#discussion_r140151179
--- Diff: dev/lint-r.R ---
@@ -28,6 +28,7 @@ if (! library(SparkR, lib.loc = LOCAL_LIB_LOC,
logical.return = TRUE)) {
# NOTE: The CRAN's version
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19290#discussion_r140150499
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2649,15 +2651,15 @@ setMethod("merge",
#' @return list of columns
#'
#' @note
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19290#discussion_r140150229
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2594,12 +2596,12 @@ setMethod("merge",
} else {
# if by or both by.x and by.y
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19290#discussion_r140150600
--- Diff: R/pkg/R/context.R ---
@@ -329,7 +329,7 @@ spark.addFile <- function(path, recursive = FALSE) {
#' spark.getSparkFilesRootDirectory()
Github user ConeyLiu commented on a diff in the pull request:
https://github.com/apache/spark/pull/19285#discussion_r140149449
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -354,63 +401,30 @@ private[spark] class MemoryStore(
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/18659
@BryanCutler Hmm, I'm not exactly sure why it doesn't work (or why
mine works), but we can use `fillna(0)` before casting like:
```
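A minimal, self-contained sketch of the `fillna`-before-cast workaround described above (the quoted snippet is truncated, so the column values and target dtype here are hypothetical):

```python
import pandas as pd

# A float column with a null: casting it straight to int64 would fail,
# because NaN has no integer representation.
s = pd.Series([1.0, None, 3.0])

# Fill nulls first, then cast -- the ordering is the point of the workaround.
casted = s.fillna(0).astype("int64")
print(casted.tolist())  # [1, 0, 3]
```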
Github user logannc commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140148777
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message,
Github user logannc commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140148164
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message,
Github user logannc commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140147933
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18994
**[Test build #82019 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82019/testReport)**
for PR 18994 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19294
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82015/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19294
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19294
**[Test build #82015 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82015/testReport)**
for PR 19294 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19298
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19298
LGTM
Merging to master/2.2
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18945
@HyukjinKwon I can take over this if @logannc can't find time to continue
it.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18945
We also need a proper test for this.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140144263
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
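As an aside on the quoted diff: `e.message` exists only on Python 2 exceptions, so a version-portable way to build the same two-part message would use `str(e)`. A hypothetical sketch (the `msg` text here stands in for whatever hint the actual patch appends):

```python
msg = "you can install pandas via pip"  # hypothetical second line of the error

try:
    raise ImportError("pandas not found")
except ImportError as e:
    # str(e) works on both Python 2 and 3; e.message is Python-2 only.
    combined = "%s\n%s" % (str(e), msg)

print(combined)
```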
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140143875
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19287
**[Test build #82018 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82018/testReport)**
for PR 19287 at commit
Github user xuanyuanking commented on a diff in the pull request:
https://github.com/apache/spark/pull/19287#discussion_r140143293
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskInfo.scala ---
@@ -66,6 +66,12 @@ class TaskInfo(
*/
var finishTime: Long =
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19300
I am fixing some codes around here -
https://github.com/apache/spark/pull/19290/files#diff-d9f92e07db6424e2527a7f9d7caa9013R328.
If this is the only one, let me fold it into mine.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19219
**[Test build #82017 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82017/testReport)**
for PR 19219 at commit
Github user a10y commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140142458
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19106#discussion_r140142353
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/ProbabilisticClassifier.scala
---
@@ -230,21 +230,19 @@ private[ml] object
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/19300
Please review and fix typos in a whole batch of code, or don't bother and
close this
---
Github user a10y commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140141889
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19285#discussion_r140141918
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -354,63 +401,30 @@ private[spark] class MemoryStore(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17819
**[Test build #82016 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82016/testReport)**
for PR 17819 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/19286
@gatorsmile Does the added test look good to you? Thanks.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/15544#discussion_r140140577
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/ApproxCountDistinctForIntervals.scala
---
@@ -0,0 +1,235 @@
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19298
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82014/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19298
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19298
**[Test build #82014 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82014/testReport)**
for PR 19298 at commit
Github user logannc commented on the issue:
https://github.com/apache/spark/pull/18945
Sorry I fell off the face of the earth. I finally had some time to sit down
and do this. I took your suggestions but implemented them a little differently.
Unless I've made a dumb mistake, I think I
Github user squito commented on the issue:
https://github.com/apache/spark/pull/19280
> Looks ok to me, assuming the "default serializer" in SerializerManager is
configured correctly through other means.
I think that part is fine. The serializer is created here:
Github user zuotingbing commented on a diff in the pull request:
https://github.com/apache/spark/pull/19277#discussion_r140137420
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -351,14 +351,14 @@ private[spark] object
GitHub user zuotingbing opened a pull request:
https://github.com/apache/spark/pull/19300
[SPARK-22082][SparkR] Spelling mistake: "choosen" in API doc of R.
## What changes were proposed in this pull request?
"choosen" should be "chosen" in API doc of R.
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/14325
ping @gatorsmile Add this to test.
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/19194#discussion_r140136101
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -619,6 +625,47 @@ private[spark] class ExecutorAllocationManager(
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/14325
You should override `override def evaluate(dataset: Dataset[_])` (without
the label param).
---
Github user yssharma commented on the issue:
https://github.com/apache/spark/pull/18029
@budde could you please do one last review of this one.
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/19106
ping @srowen Any other comments ?
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/19288
OK I agree.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19294
**[Test build #82015 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82015/testReport)**
for PR 19294 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19294
cc @jiangxb1987 who I believe is interested in this. Without a super close
look, it looks like it makes sense.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19294
ok to test
---
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15544#discussion_r140131809
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/ApproxCountDistinctForIntervals.scala
---
@@ -0,0 +1,235 @@
Github user wankunde closed the pull request at:
https://github.com/apache/spark/pull/19299
---
GitHub user wankunde opened a pull request:
https://github.com/apache/spark/pull/19299
update from upstream
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch tested?
(Please explain how
Github user zhengruifeng commented on the issue:
https://github.com/apache/spark/pull/19288
@WeichenXu123 It may be better to destroy intermediate objects ASAP
---
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15544#discussion_r140129058
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/ApproxCountDistinctForIntervals.scala
---
@@ -0,0 +1,235 @@
Github user fjh100456 commented on the issue:
https://github.com/apache/spark/pull/19218
@gatorsmile @dongjoon-hyun
I've fixed it. Could you help me review it again? Thanks.
---
Github user ConeyLiu commented on a diff in the pull request:
https://github.com/apache/spark/pull/19285#discussion_r140126755
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -354,63 +401,30 @@ private[spark] class MemoryStore(
Github user zhouyejoe commented on the issue:
https://github.com/apache/spark/pull/17412
@rdblue Hi, why are the block status updates not filtered out in
executorMetricsUpdate? This line
Github user dhruve commented on a diff in the pull request:
https://github.com/apache/spark/pull/19194#discussion_r140122744
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
@@ -1255,6 +1255,97 @@ class TaskSetManagerSuite extends
Github user dhruve commented on a diff in the pull request:
https://github.com/apache/spark/pull/19194#discussion_r140124886
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -619,6 +625,47 @@ private[spark] class ExecutorAllocationManager(
Github user dhruve commented on a diff in the pull request:
https://github.com/apache/spark/pull/19194#discussion_r140122769
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
@@ -1255,6 +1255,97 @@ class TaskSetManagerSuite extends
Github user dhruve commented on a diff in the pull request:
https://github.com/apache/spark/pull/19194#discussion_r140123047
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -758,11 +825,52 @@ private[spark] class ExecutorAllocationManager(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19298
**[Test build #82014 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82014/testReport)**
for PR 19298 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19298
cc @gatorsmile
---
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/19298
[SPARK-22076][SQL][followup] Expand.projections should not be a Stream
## What changes were proposed in this pull request?
This a follow-up of https://github.com/apache/spark/pull/19289
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18659#discussion_r140121928
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/python/EvalPythonExec.scala
---
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/19272#discussion_r140118143
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCredentialRenewer.scala
---
@@ -63,7 +63,8 @@ class
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/19272#discussion_r140117834
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -198,16 +198,19
Github user susanxhuynh commented on a diff in the pull request:
https://github.com/apache/spark/pull/19272#discussion_r140117253
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCredentialRenewer.scala
---
@@ -63,7 +63,8 @@ class
Github user susanxhuynh commented on a diff in the pull request:
https://github.com/apache/spark/pull/19272#discussion_r140117055
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -198,16
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18945
@BryanCutler, @a10y and @viirya, would you guys be interested in this and
have some time to take it over with the different approach we discussed above
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19141
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19141
(Also merging to 2.2.)
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19141
LGTM, merging to master.
---
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/18659
@ueshin I haven't had much luck with the casting workaround:
```
pa.Array.from_pandas(s.astype(t.to_pandas_dtype(), copy=False), mask=s.isnull(), type=t)
```
It appears that it
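For context on why a casting workaround is needed at all: pandas cannot represent nulls in an integer dtype, so casting a column that contains NaN raises. A minimal background sketch (hypothetical values; this is not the PR's code):

```python
import pandas as pd

# A float column containing a null.
s = pd.Series([1.0, None, 3.0])

try:
    s.astype("int64")  # NaN cannot be represented as int64
    raised = False
except ValueError:
    raised = True

print("cast raised:", raised)  # cast raised: True
```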
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19297
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19297
Merging to master to unbreak the build.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19297
**[Test build #82013 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82013/testReport)**
for PR 19297 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19297
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82013/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19297
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19271
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19271
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82011/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19271
**[Test build #82011 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82011/testReport)**
for PR 19271 at commit
Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18924#discussion_r140111453
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala ---
@@ -503,17 +518,15 @@ final class OnlineLDAOptimizer extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19297
**[Test build #82013 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82013/testReport)**
for PR 19297 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19297
retest this please
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19280
Looks ok to me, assuming the "default serializer" in SerializerManager is
configured correctly through other means.
Title would sound better with a possessive: "SerializerManager's private
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19297
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82012/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19297
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19297
**[Test build #82012 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82012/testReport)**
for PR 19297 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19271
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19271
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82010/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19271
**[Test build #82010 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82010/testReport)**
for PR 19271 at commit