[ https://issues.apache.org/jira/browse/SPARK-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927085#comment-15927085 ]
Apache Spark commented on SPARK-13369:
--------------------------------------

User 'sitalkedia' has created a pull request for this issue:
https://github.com/apache/spark/pull/17307

> Number of consecutive fetch failures for a stage before the job is aborted should be configurable
> --------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-13369
>                 URL: https://issues.apache.org/jira/browse/SPARK-13369
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.6.0
>            Reporter: Sital Kedia
>            Priority: Minor
>
> Currently the limit is hardcoded. It should be configurable because for long-running jobs the chance of fetch failures due to machine reboots is high, and a configuration parameter is needed to raise that limit.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
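As a sketch of what the requested change would look like for a user, the following assumes the fix lands as a Spark configuration property; the property name `spark.stage.maxConsecutiveAttempts` and the default of 4 are taken from the configuration later shipped for this work, not from this message itself, so treat them as assumptions and check the release's documentation.

```properties
# spark-defaults.conf (hypothetical usage sketch)
# Raise the number of consecutive stage attempts allowed after fetch
# failures before the job is aborted (assumed default: 4).
spark.stage.maxConsecutiveAttempts  10
```

The same setting could be passed per job, e.g. `spark-submit --conf spark.stage.maxConsecutiveAttempts=10 ...`, which is the usual pattern for bumping a limit only for a long-running job rather than cluster-wide.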