[jira] [Updated] (SPARK-25569) Failing a Spark job when an accumulator cannot be updated

2020-03-16 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-25569:
--
Affects Version/s: (was: 3.0.0)
   3.1.0

> Failing a Spark job when an accumulator cannot be updated
> ---------------------------------------------------------
>
> Key: SPARK-25569
> URL: https://issues.apache.org/jira/browse/SPARK-25569
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.1.0
>Reporter: Shixiong Zhu
>Priority: Major
>
> Currently, when Spark fails to merge an accumulator update from a task, it 
> will not fail the task (see 
> https://github.com/apache/spark/blob/b7d80349b0e367d78cab238e62c2ec353f0f12b3/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala#L1266),
> so an accumulator update failure may be ignored silently. Some users may want 
> to use accumulators for business-critical work and would like to fail a job 
> when an accumulator is broken.
> We could add a flag to always fail a Spark job when an accumulator update 
> fails, or we could add a new property to an accumulator and only fail a Spark 
> job when such an accumulator fails.
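For reference, a minimal, self-contained sketch of the log-and-continue behavior described above. The class and method names here are invented for illustration and are not Spark's actual internals; the point is only that a failed merge is caught, logged, and then dropped, so neither the task nor the job fails.

{code:scala}
import scala.util.control.NonFatal

// Toy stand-in for a driver-side accumulator; not Spark's AccumulatorV2.
final class LongAccum(var value: Long = 0L) {
  def merge(other: LongAccum): Unit = { value += other.value }
}

object SchedulerSketch {
  // Mirrors the shape of the merge loop referenced above: each task-side update
  // is merged into the matching driver-side accumulator, and any failure is
  // caught and logged, after which execution simply continues.
  def updateAccumulators(
      driverAccums: Map[Long, LongAccum],
      taskUpdates: Seq[(Long, LongAccum)]): Unit = {
    taskUpdates.foreach { case (id, update) =>
      try {
        val acc = driverAccums.getOrElse(
          id, throw new IllegalStateException(s"attempted to access non-existent accumulator $id"))
        acc.merge(update)
      } catch {
        case NonFatal(e) =>
          // Today the error stops here; the proposal is to optionally rethrow so
          // the failure surfaces as a task (and therefore job) failure.
          println(s"Failed to update accumulator $id: ${e.getMessage}")
      }
    }
  }
}
{code}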



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-25569) Failing a Spark job when an accumulator cannot be updated

2019-07-16 Thread Dongjoon Hyun (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-25569:
--
Affects Version/s: (was: 2.4.0)
   3.0.0

> Failing a Spark job when an accumulator cannot be updated
> ---------------------------------------------------------
>
> Key: SPARK-25569
> URL: https://issues.apache.org/jira/browse/SPARK-25569
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.0.0
>Reporter: Shixiong Zhu
>Priority: Major
>
> Currently, when Spark fails to merge an accumulator update from a task, it 
> will not fail the task (see 
> https://github.com/apache/spark/blob/b7d80349b0e367d78cab238e62c2ec353f0f12b3/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala#L1266),
> so an accumulator update failure may be ignored silently. Some users may want 
> to use accumulators for business-critical work and would like to fail a job 
> when an accumulator is broken.
> We could add a flag to always fail a Spark job when an accumulator update 
> fails, or we could add a new property to an accumulator and only fail a Spark 
> job when such an accumulator fails.
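To illustrate the second option above (a per-accumulator opt-in rather than a global flag), here is a rough sketch under assumed names; failOnUpdateError is hypothetical and does not exist in Spark today. Only accumulators that opt in would turn a failed update into a task failure; all others keep the current log-and-continue behavior.

{code:scala}
import scala.util.control.NonFatal

// Toy accumulator with an assumed opt-in flag; failOnUpdateError is hypothetical.
final class CriticalAccum(val id: Long,
                          val failOnUpdateError: Boolean = false,
                          var value: Long = 0L) {
  def merge(other: CriticalAccum): Unit = {
    require(other.id == id, s"cannot merge accumulator ${other.id} into $id")
    value += other.value
  }
}

object MergeSketch {
  // How the driver-side merge could honor the flag: opt-in accumulators rethrow,
  // which would fail the task and ultimately the job; everything else is logged
  // and ignored, as today.
  def applyUpdate(acc: CriticalAccum, update: CriticalAccum): Unit = {
    try {
      acc.merge(update)
    } catch {
      case NonFatal(e) if acc.failOnUpdateError => throw e
      case NonFatal(e) =>
        println(s"Failed to update accumulator ${acc.id}: ${e.getMessage}")
    }
  }
}
{code}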



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org