Github user artemrd commented on the issue:
https://github.com/apache/spark/pull/21354
ok to test
GitHub user artemrd opened a pull request:
https://github.com/apache/spark/pull/21354
[Web UI] Do not skip cells in Tasks table on Stage page when accumulators are not available
Github user artemrd commented on the issue:
https://github.com/apache/spark/pull/21114
retest this please
Github user artemrd commented on a diff in the pull request:
https://github.com/apache/spark/pull/21114#discussion_r188170023
--- Diff: core/src/test/scala/org/apache/spark/AccumulatorSuite.scala ---
@@ -237,6 +236,65 @@ class AccumulatorSuite extends SparkFunSuite with Matchers
Github user artemrd commented on the issue:
https://github.com/apache/spark/pull/21114
There's "get accum" test which does this, it was updated for new behavior.
Github user artemrd commented on the issue:
https://github.com/apache/spark/pull/21114
Just a long-running job and memory pressure are not enough. You need to have
several attempts for a stage; each new attempt will update Stage._latestInfo,
so the previous StageInfo and its accumul
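For illustration only, a minimal Scala sketch of the kind of sequence described above: the strong reference to a previous attempt is replaced, GC runs, and a later lookup through a weak reference observes that the object is gone. All names here are hypothetical stand-ins, not Spark internals.

```scala
import java.lang.ref.WeakReference

// Hypothetical sketch of the race described above: a new stage attempt drops
// the only strong reference to the previous attempt's info, after which GC
// may clear it and a later accumulator lookup finds nothing.
object StaleAttemptSketch {
  final class AttemptInfo(val attemptId: Int)

  def main(args: Array[String]): Unit = {
    var latestInfo = new AttemptInfo(0)              // stands in for Stage._latestInfo
    val previousRef = new WeakReference(latestInfo)  // weak reference to attempt 0

    latestInfo = new AttemptInfo(1)                  // a new attempt replaces the strong reference
    System.gc()                                      // GC may now clear the weak reference

    // Depending on GC behavior, the old attempt may no longer be reachable here.
    println(s"previous attempt still reachable: ${previousRef.get() != null}")
  }
}
```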
Github user artemrd commented on the issue:
https://github.com/apache/spark/pull/21114
This issue is more like a race condition, so the test needs to generate a
specific sequence of events to reproduce it. I agree it's probably too
specific. What is Spark's approach to repr
Github user artemrd commented on a diff in the pull request:
https://github.com/apache/spark/pull/21114#discussion_r187970775
--- Diff: core/src/test/scala/org/apache/spark/AccumulatorSuite.scala ---
@@ -237,6 +236,65 @@ class AccumulatorSuite extends SparkFunSuite with Matchers
Github user artemrd commented on the issue:
https://github.com/apache/spark/pull/21114
Yes, this is correct.
Github user artemrd commented on a diff in the pull request:
https://github.com/apache/spark/pull/21114#discussion_r187821160
--- Diff: core/src/test/scala/org/apache/spark/AccumulatorSuite.scala ---
@@ -237,6 +236,65 @@ class AccumulatorSuite extends SparkFunSuite with Matchers
Github user artemrd commented on a diff in the pull request:
https://github.com/apache/spark/pull/21114#discussion_r186896584
--- Diff: core/src/test/scala/org/apache/spark/AccumulatorSuite.scala ---
@@ -209,10 +209,8 @@ class AccumulatorSuite extends SparkFunSuite with Matchers
Github user artemrd commented on a diff in the pull request:
https://github.com/apache/spark/pull/21114#discussion_r185036905
--- Diff: core/src/test/scala/org/apache/spark/AccumulatorSuite.scala ---
@@ -209,10 +209,8 @@ class AccumulatorSuite extends SparkFunSuite with Matchers
Github user artemrd commented on a diff in the pull request:
https://github.com/apache/spark/pull/21114#discussion_r183801936
--- Diff: core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala ---
@@ -258,14 +258,8 @@ private[spark] object AccumulatorContext
Github user artemrd commented on a diff in the pull request:
https://github.com/apache/spark/pull/21114#discussion_r183801686
--- Diff: core/src/test/scala/org/apache/spark/AccumulatorSuite.scala ---
@@ -209,10 +209,8 @@ class AccumulatorSuite extends SparkFunSuite with Matchers
GitHub user artemrd opened a pull request:
https://github.com/apache/spark/pull/21114
[SPARK-22371][CORE] Return None instead of throwing an exception when an
accumulator is garbage collected.
## What changes were proposed in this pull request?
There's a period of
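A minimal sketch, not the actual Spark code, of the behavior change proposed in this pull request: when the weak reference behind a registered accumulator has been cleared by the garbage collector, the lookup returns None instead of throwing. The registry object, method names, and error message below are hypothetical stand-ins.

```scala
import java.lang.ref.WeakReference
import java.util.concurrent.ConcurrentHashMap

// Hypothetical stand-in for an accumulator registry keyed by id; used only
// to illustrate the proposed change, not the real AccumulatorContext.
object SketchAccumulatorRegistry {
  private val originals = new ConcurrentHashMap[Long, WeakReference[AnyRef]]()

  def register(id: Long, acc: AnyRef): Unit =
    originals.put(id, new WeakReference[AnyRef](acc))

  // Old behavior (sketch): a cleared weak reference raises an error.
  def getOrThrow(id: Long): Option[AnyRef] =
    Option(originals.get(id)).map { ref =>
      val acc = ref.get()
      if (acc == null) {
        throw new IllegalStateException(s"accumulator $id has been garbage collected")
      }
      acc
    }

  // Proposed behavior (sketch): a cleared weak reference simply yields None.
  def getOrNone(id: Long): Option[AnyRef] =
    Option(originals.get(id)).flatMap(ref => Option(ref.get()))
}
```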