[ https://issues.apache.org/jira/browse/SPARK-6011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-6011:
-----------------------------
         Component/s:     (was: Spark Core)
                      YARN
    Target Version/s:   (was: 1.3.1)
       Fix Version/s:     (was: 1.3.1)

So from the PR discussion, I don't believe the proposed change can proceed. I 
am also not sure it gets at the underlying issue here, a concern that has 
already been raised in, for example, 
https://issues.apache.org/jira/browse/SPARK-5836 . I do think it's worth 
tracking a) at least documenting this, and b) Marcelo's suggestion that the 
block manager could later do more proactive cleanup.

Have you tried {{spark.cleaner.ttl}}, by the way? I'm not sure whether it 
helps in this case.
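
For reference, a minimal sketch of setting {{spark.cleaner.ttl}} on a 
long-running driver (not from this ticket; the TTL value and application name 
are illustrative). Whether the periodic cleaner actually reclaims the shuffle 
files of lost executors is exactly what's in question here:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// spark.cleaner.ttl is a duration in seconds; Spark's periodic cleaner
// forgets metadata and associated persisted state older than this.
val conf = new SparkConf()
  .setAppName("long-running-rest-service") // illustrative name
  .set("spark.cleaner.ttl", "3600")

val sc = new SparkContext(conf)
{code}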

I'd like to focus the discussion in one place, so I would prefer to resolve 
this as a duplicate and merge it into SPARK-5836.

> Out of disk space due to Spark not deleting shuffle files of lost executors
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-6011
>                 URL: https://issues.apache.org/jira/browse/SPARK-6011
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.2.1
>         Environment: Running Spark in Yarn-Client mode
>            Reporter: pankaj arora
>
> If executors are lost abruptly, Spark does not delete their shuffle files 
> until the application ends.
> Ours is a long-running application serving requests received through REST 
> APIs; if any executor is lost, its shuffle files are not deleted, and the 
> local disk eventually runs out of space.


