Shixiong Zhu created SPARK-25569:
------------------------------------

             Summary: Failing a Spark job when an accumulator cannot be updated
                 Key: SPARK-25569
                 URL: https://issues.apache.org/jira/browse/SPARK-25569
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 2.4.0
            Reporter: Shixiong Zhu
Currently, when Spark fails to merge accumulator updates from a task, it does not fail the task (see https://github.com/apache/spark/blob/b7d80349b0e367d78cab238e62c2ec353f0f12b3/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala#L1266), so an accumulator update failure may be silently ignored. Some users rely on accumulators for business-critical logic and would rather fail the job than lose an update when an accumulator is broken. We could add a flag that always fails a Spark job on any accumulator failure, or add a new property to individual accumulators so that the job fails only when such an accumulator fails. A sketch of the second option appears below.
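As a minimal sketch of the per-accumulator option, assuming a hypothetical failOnError property (the name, and any scheduler-side wiring, are illustrative only and not part of Spark's current API):

{code:scala}
import org.apache.spark.util.AccumulatorV2

// Hypothetical sketch: an accumulator that declares itself business-critical.
// The failOnError flag below does not exist in Spark today; the idea is that
// the DAGScheduler would consult it when a merge throws and fail the job
// instead of just logging the error.
class CriticalLongAccumulator extends AccumulatorV2[java.lang.Long, java.lang.Long] {
  private var sum = 0L

  // Hypothetical property: ask the scheduler to fail the job if this
  // accumulator's updates cannot be merged.
  def failOnError: Boolean = true

  override def isZero: Boolean = sum == 0L

  override def copy(): CriticalLongAccumulator = {
    val acc = new CriticalLongAccumulator
    acc.sum = sum
    acc
  }

  override def reset(): Unit = sum = 0L

  override def add(v: java.lang.Long): Unit = sum += v

  override def merge(other: AccumulatorV2[java.lang.Long, java.lang.Long]): Unit =
    other match {
      case o: CriticalLongAccumulator => sum += o.sum
      case _ =>
        // A type mismatch like this is exactly the kind of failure that the
        // scheduler's catch block currently swallows with a log message.
        throw new UnsupportedOperationException(
          s"Cannot merge ${getClass.getName} with ${other.getClass.getName}")
    }

  override def value: java.lang.Long = sum
}
{code}

On the scheduler side, the catch block at the linked line would then check this flag (or, for the first option, a global configuration property) and rethrow instead of logging, so the failed merge surfaces as a task failure rather than being dropped.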