[ https://issues.apache.org/jira/browse/SPARK-14915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260623#comment-15260623 ]
Andrew Or commented on SPARK-14915:
-----------------------------------

I haven't looked into the scheduler code in detail yet, but it seems to me the bug was not caused by your fix to use `CausedBy`. Rather, the bug has always existed, and your fix merely uncovered it. It does look like a problem in the scheduler: under no circumstances should we retry a stage without limit.

> Tasks that fail due to CommitDeniedException (a side-effect of speculation) can cause job to never complete
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-14915
>                 URL: https://issues.apache.org/jira/browse/SPARK-14915
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.6.2
>            Reporter: Jason Moore
>            Priority: Critical
>
> In SPARK-14357, the code was corrected toward the originally intended behavior: a CommitDeniedException should not count toward the failure count for a job. After running with this fix for a few weeks, it has become apparent that the behavior has an unintended consequence — a speculative task can continuously receive a CommitDeniedException from the driver, causing it to fail and be retried over and over without limit.
> I'm thinking we could put a task that receives a CommitDeniedException from the driver into TaskState.FINISHED, or some other state indicating that the task should not be resubmitted by the TaskScheduler. I'd appreciate opinions on whether doing something like this has other consequences.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
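The feedback loop described in the report can be sketched as a toy simulation. This is a deliberately simplified model, not Spark's actual TaskSetManager code; the names `maxTaskFailures`, `runSpeculativeAttempt`, and `simulateRetries` are illustrative assumptions. It only shows why a failure that is never counted against the limit leads to unbounded resubmission:

```scala
// Hypothetical, simplified model of the retry behavior under discussion.
// Not Spark's scheduler code; all names here are illustrative.
class CommitDeniedException(msg: String) extends Exception(msg)

object RetryLoopSketch {
  // Roughly analogous in spirit to spark.task.maxFailures (assumed value).
  val maxTaskFailures = 4

  // Models a speculative attempt whose commit is always denied by the driver.
  def runSpeculativeAttempt(): Option[Exception] =
    Some(new CommitDeniedException("commit denied by driver"))

  // Returns (attempts, countedFailures). `demoCap` bounds the loop so this
  // sketch terminates; without it the loop would spin forever, because a
  // CommitDeniedException never increments countedFailures after SPARK-14357.
  def simulateRetries(demoCap: Int): (Int, Int) = {
    var countedFailures = 0
    var attempts = 0
    while (countedFailures < maxTaskFailures && attempts < demoCap) {
      attempts += 1
      runSpeculativeAttempt() match {
        case Some(_: CommitDeniedException) => () // not counted -> resubmitted
        case Some(_)                        => countedFailures += 1
        case None                           => return (attempts, countedFailures)
      }
    }
    (attempts, countedFailures)
  }
}
```

With the demo cap set to 10, every attempt burns through the loop without ever advancing `countedFailures`, which is the "retry over and over without limit" behavior the report describes.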