Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17495
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17495
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75684/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17495
**[Test build #75684 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75684/testReport)**
for PR 17495 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17436
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75685/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17436
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17436
**[Test build #75685 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75685/testReport)**
for PR 17436 at commit
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17596
My comments were based on a fix in 1.6; a lot of values were actually
observed to be 0 in a lot of cases - just a few were not (even here
it is relevant - resultSize, gctime, various bytes
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17581
Ok, I have modified the description.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17596
@mridulm I actually have the same plan. I think it's overkill to
implement TaskMetrics with accumulators; we don't need to merge the accumulator
updates at the driver side for TaskMetrics
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110814617
--- Diff: docs/sql-programming-guide.md ---
@@ -883,7 +883,7 @@ Configuration of Parquet can be done using the
`setConf` method on `SparkSession
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110814635
--- Diff: docs/sql-programming-guide.md ---
@@ -897,7 +897,7 @@ For a regular multi-line JSON file, set the `wholeFile`
option to `true`.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17581#discussion_r110813819
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -359,6 +359,16 @@ object SQLConf {
.booleanConf
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17602
**[Test build #75692 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75692/testReport)**
for PR 17602 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110812638
--- Diff: docs/sql-programming-guide.md ---
@@ -897,7 +897,7 @@ For a regular multi-line JSON file, set the `wholeFile`
option to `true`.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17602
**[Test build #75691 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75691/testReport)**
for PR 17602 at commit
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/17600
Merged into master.
---
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/17601
Merged into master.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110811804
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -634,7 +634,9 @@ def saveAsTable(self, name, format=None, mode=None,
partitionBy=None, **options)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17602
**[Test build #75690 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75690/testReport)**
for PR 17602 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110811551
--- Diff: docs/sql-programming-guide.md ---
@@ -883,7 +883,7 @@ Configuration of Parquet can be done using the
`setConf` method on `SparkSession
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17596#discussion_r110811597
--- Diff:
core/src/main/scala/org/apache/spark/util/InternalLongAccumulator.scala ---
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110811091
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -268,8 +268,8 @@ class DataFrameReader private[sql](sparkSession:
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110810961
--- Diff: python/pyspark/sql/streaming.py ---
@@ -405,8 +405,8 @@ def json(self, path, schema=None,
primitivesAsString=None, prefersDecimal=None,
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110810827
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -634,7 +634,9 @@ def saveAsTable(self, name, format=None, mode=None,
partitionBy=None, **options)
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17596#discussion_r110810746
--- Diff:
core/src/main/scala/org/apache/spark/util/InternalLongAccumulator.scala ---
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110810554
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -173,8 +173,8 @@ def json(self, path, schema=None,
primitivesAsString=None, prefersDecimal=None,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17602
**[Test build #75689 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75689/testReport)**
for PR 17602 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110810413
--- Diff: docs/sql-programming-guide.md ---
@@ -897,7 +897,7 @@ For a regular multi-line JSON file, set the `wholeFile`
option to `true`.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17602#discussion_r110810364
--- Diff: docs/sql-programming-guide.md ---
@@ -883,7 +883,7 @@ Configuration of Parquet can be done using the
`setConf` method on `SparkSession
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17602
[MINOR][DOCS] JSON APIs related documentation fixes
## What changes were proposed in this pull request?
This PR proposes corrections related to JSON APIs, including rendering
links in
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17599
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17491
**[Test build #75687 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75687/testReport)**
for PR 17491 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16677
**[Test build #75688 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75688/testReport)**
for PR 16677 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17599
Merging in master/branch-2.1.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17491
retest this please.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16781
**[Test build #75686 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75686/testReport)**
for PR 16781 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17491
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75683/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17491
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17491
**[Test build #75683 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75683/testReport)**
for PR 17491 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17581
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75682/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17581
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17581
**[Test build #75682 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75682/testReport)**
for PR 17581 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17599
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17599
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75678/
Test PASSed.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
@tgravescs sorry for the confusion.
>if base URL's ACL (spark.acls.enable) is enabled but user A has no view
permission. User "A" cannot see the app list but could still access details of
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17599
**[Test build #75678 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75678/testReport)**
for PR 17599 at commit
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/17556
@srowen Hi, I forgot the unit tests in Python and R. Where can I find documentation
about setting up a development environment? Thanks.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17600
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75679/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17600
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17600
**[Test build #75679 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75679/testReport)**
for PR 17600 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17495
**[Test build #75684 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75684/testReport)**
for PR 17495 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17436
**[Test build #75685 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75685/testReport)**
for PR 17436 at commit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110804952
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17601
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75680/
Test PASSed.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110804557
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17601
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17601
**[Test build #75680 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75680/consoleFull)**
for PR 17601 at commit
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17563
Is nobody going to deal with this PR? @srowen
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110803588
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17599
LGTM pending Jenkins.
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110802758
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75681/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75681 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75681/testReport)**
for PR 17540 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110801458
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17480
Sorry but that doesn't really explain much. Why is it bad to ramp up
quickly? At which point are things not "initializing" anymore?
Isn't the AM restarting the definition of "I should ramp
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/17436#discussion_r110801228
--- Diff: core/src/main/java/org/apache/spark/memory/MemoryConsumer.java ---
@@ -41,7 +41,7 @@ protected MemoryConsumer(TaskMemoryManager
taskMemoryManager,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17491
**[Test build #75683 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75683/testReport)**
for PR 17491 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r110800502
--- Diff: python/pyspark/sql/column.py ---
@@ -303,8 +342,27 @@ def isin(self, *cols):
desc = _unary_op("desc", "Returns a sort expression
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r110800343
--- Diff: python/pyspark/sql/column.py ---
@@ -250,11 +250,50 @@ def __iter__(self):
raise TypeError("Column is not iterable")
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17495#discussion_r110799889
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
---
@@ -571,6 +572,34 @@ class FsHistoryProviderSuite extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17524
retest this please
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17593
Can sorting change this? I do not think so.
Even if the sorting only shows the last 200, that is not contradictory
with the issue I raised.
The last 200 are conceptually a batch of data.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110797585
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110796578
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17581
**[Test build #75682 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75682/testReport)**
for PR 17581 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17581
test this please
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17581#discussion_r110795341
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -359,6 +359,15 @@ object SQLConf {
.booleanConf
Github user map222 commented on the issue:
https://github.com/apache/spark/pull/17469
@HyukjinKwon The Jenkins test failed. I'm having trouble running the tests
locally (I can't build Spark yet), and I can't decipher the Jenkins error
messages. Does something jump out to you?
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17581
ok to test
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75681 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75681/testReport)**
for PR 17540 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17077
(I think we need @holdenk's sign-off and further review.)
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17077#discussion_r110794303
--- Diff: python/pyspark/sql/tests.py ---
@@ -2167,6 +2167,61 @@ def test_BinaryType_serialization(self):
df =
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17495#discussion_r110794078
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -320,14 +321,15 @@ private[history] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17600
**[Test build #75679 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75679/testReport)**
for PR 17600 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17601
**[Test build #75680 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75680/consoleFull)**
for PR 17601 at commit
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/17601
[MINOR][SQL] Fix the @since tag when backporting critical bugs from 2.2
branch into 2.0 branch
## What changes were proposed in this pull request?
Fix the @since tag when backporting
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75677/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75677 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75677/testReport)**
for PR 17540 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17077#discussion_r110792594
--- Diff: python/pyspark/sql/tests.py ---
@@ -2167,6 +2167,56 @@ def test_BinaryType_serialization(self):
df =
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/17600
[MINOR][SQL] Fix the @since tag when backporting critical bugs from 2.2
branch into 2.1 branch
## What changes were proposed in this pull request?
Fix the @since tag when backporting
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17599
**[Test build #75678 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75678/testReport)**
for PR 17599 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17599
cc @rxin
---
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17596
The approach I took for this was slightly different.
* Create a bitmask indicating which accumulators are required in
TaskMetrics - that is, have non-zero values, and emit this first.
*
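The bitmask idea in the bullet above can be illustrated roughly as follows: serialize only the non-zero metric values, preceded by a bitmask recording which slots they fill. This is a minimal Python sketch with illustrative names; it is not Spark's actual TaskMetrics wire format.

```python
# Sketch of "bitmask + non-zero values" encoding for a fixed-order
# list of metric values. Slot i of the bitmask is set iff values[i] != 0.

def encode_metrics(values):
    """Return (bitmask, non_zero_values) for a fixed-order metrics list."""
    bitmask = 0
    non_zero = []
    for i, v in enumerate(values):
        if v != 0:
            bitmask |= 1 << i
            non_zero.append(v)
    return bitmask, non_zero

def decode_metrics(bitmask, non_zero, size):
    """Rebuild the full metrics list, filling unset slots with 0."""
    it = iter(non_zero)
    return [next(it) if bitmask & (1 << i) else 0 for i in range(size)]
```

Since most metrics are observed to be 0 in most tasks (as noted in the earlier comment on PR 17596), this trades one extra integer for skipping the zero entries entirely.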
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/17599
[SPARK-17564][Tests]Fix flaky
RequestTimeoutIntegrationSuite.furtherRequestsDelay
## What changes were proposed in this pull request?
This PR fixes the following failure:
```
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75677 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75677/testReport)**
for PR 17540 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17546
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75674/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17546
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17546
**[Test build #75674 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75674/testReport)**
for PR 17546 at commit