pankaj arora created SPARK-6011:
-----------------------------------

             Summary: Out of disk space due to Spark not deleting shuffle files of lost executors
                 Key: SPARK-6011
                 URL: https://issues.apache.org/jira/browse/SPARK-6011
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.2.1
         Environment: Running Spark in Yarn-Client mode
            Reporter: pankaj arora
             Fix For: 1.3.1


If an executor is lost abruptly, Spark does not delete its shuffle files until the application ends.
Ours is a long-running application that serves requests received through REST APIs. If any executor is lost, its shuffle files are not deleted, and that eventually causes the local disk to run out of space.
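
A minimal sketch of the scenario described above, for illustration only. The request loop and the RDD operations are assumptions standing in for the actual REST-driven workload; the point is that a single long-lived SparkContext keeps running jobs with shuffle stages, so shuffle files of abruptly lost executors pile up on local disks.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // pair RDD implicits for Spark 1.2.x

object LongRunningShuffleApp {
  def main(args: Array[String]): Unit = {
    // One SparkContext lives for the whole application (yarn-client mode),
    // so nothing is cleaned at "application end" until the service stops.
    val conf = new SparkConf().setAppName("long-running-shuffle-demo")
    val sc = new SparkContext(conf)

    // Stand-in for requests arriving over REST: each iteration runs a job
    // with a shuffle stage, leaving shuffle files on executor local disks.
    for (request <- 1 to 1000) {
      val groups = sc.parallelize(1 to 100000)
        .map(x => (x % 100, 1))
        .reduceByKey(_ + _)   // shuffle; its files stay on disk
        .count()
      println(s"request $request -> $groups groups")
    }

    // If an executor's YARN container is killed abruptly before this point,
    // Spark 1.2.1 never removes that executor's shuffle files, so the node's
    // local disk fills up over the lifetime of the service.
    sc.stop()
  }
}
{code}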



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
