[ https://issues.apache.org/jira/browse/SPARK-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-6353.
------------------------------
    Resolution: Duplicate

> Handling fatal errors of executors and decommission datanodes
> -------------------------------------------------------------
>
>                 Key: SPARK-6353
>                 URL: https://issues.apache.org/jira/browse/SPARK-6353
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>            Reporter: Jianshi Huang
>
> We've been facing "No space left on device" errors from time to time lately.
> The job fails after retries, and in such cases retrying is obviously not
> helpful. The fault certainly lies with the datanodes, but I'm wondering
> whether the Spark driver could handle it, decommission the problematic
> datanode before retrying, and perhaps allocate another node dynamically if
> dynamic allocation is enabled.
> I think there needs to be a class of fatal errors that cannot be recovered
> by retrying, and it would be best if Spark handled them gracefully.
> Jianshi
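The request boils down to classifying certain executor failures as fatal to the node, so the driver can stop retrying on that node and exclude it instead of burning through its retry budget. Below is a minimal sketch of such a classifier in Scala. All names here (FailureClassifier, FatalOnNode, the pattern list) are hypothetical illustrations of the idea, not Spark's actual scheduler API.

    // Hypothetical sketch: classify a task failure as retryable vs. fatal
    // to the node, so a scheduler could exclude the node and reschedule
    // the task elsewhere rather than retrying in place.
    import java.io.IOException

    sealed trait FailureClass
    case object Retryable extends FailureClass
    // Disk full, read-only filesystem, etc.: retrying on the same node
    // cannot succeed, so the node should be excluded.
    case object FatalOnNode extends FailureClass

    object FailureClassifier {
      // Message substrings that indicate the node itself is unhealthy.
      private val fatalNodePatterns = Seq(
        "No space left on device",
        "Read-only file system"
      )

      def classify(t: Throwable): FailureClass = t match {
        case e: IOException if fatalNodePatterns.exists(p =>
            Option(e.getMessage).exists(_.contains(p))) =>
          FatalOnNode
        case _ =>
          Retryable
      }
    }

    object Demo extends App {
      val err = new IOException("No space left on device")
      FailureClassifier.classify(err) match {
        case FatalOnNode => println(s"Exclude node, reschedule elsewhere: $err")
        case Retryable   => println(s"Retry task in place: $err")
      }
    }

For what it's worth, later Spark releases added driver-side node exclusion along these lines (e.g. the blacklisting feature behind spark.blacklist.enabled), which tracks failures per executor and node rather than matching on error messages.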