[ https://issues.apache.org/jira/browse/SPARK-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207459#comment-15207459 ]

holdenk commented on SPARK-6717:
--------------------------------

So, looking at the code a little bit, I think it's probably better for this to 
live in ALS rather than core.

I don't think we can solve this for all checkpointing in general: if we are 
checkpointing a ShuffledRDD directly, it's easy to register its shuffle files 
for cleanup, but in the more general case (like the one in ALS) where we are 
checkpointing a downstream RDD, we don't know whether it is safe to clean up 
the parents' shuffle files.

We could expose something like `checkPointAndEagerlyCleanParents` in the Core 
API, but the chance of misuse is pretty high, so it might be better to 
implement this inside of ML/ALS until there is a second request for it.
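For concreteness, here is a minimal sketch of what such a helper could look 
like if it lived next to ALS. All names here are hypothetical (this is not an 
existing API), and it leans on private[spark] internals 
(`SparkContext.cleaner` / `ContextCleaner.doCleanupShuffle`), which is part of 
why exposing it publicly feels risky:

```scala
package org.apache.spark.ml.recommendation

import scala.collection.mutable

import org.apache.spark.{Dependency, ShuffleDependency}
import org.apache.spark.rdd.RDD

// Hypothetical helper: checkpoint `rdd`, materialize it, then walk the
// pre-checkpoint lineage and ask the ContextCleaner to drop every shuffle
// it can reach. ContextCleaner is private[spark], hence the
// org.apache.spark.* package.
private[spark] object ShuffleCheckpointCleaner {

  def checkpointAndCleanShuffles[T](rdd: RDD[T]): Unit = {
    // Capture the lineage *before* materializing: once the checkpoint is
    // written, rdd.dependencies is replaced by the checkpoint data.
    val rootDeps = rdd.dependencies
    rdd.checkpoint()
    rdd.count() // force evaluation so the checkpoint actually happens

    val cleaner = rdd.sparkContext.cleaner
    val visited = mutable.HashSet[RDD[_]]()
    val queue = mutable.Queue[RDD[_]]()

    def visit(deps: Seq[Dependency[_]]): Unit = deps.foreach {
      case shuffle: ShuffleDependency[_, _, _] =>
        // Only safe when nothing else still reads this shuffle -- the
        // property that is hard to guarantee anywhere outside ALS itself.
        cleaner.foreach(_.doCleanupShuffle(shuffle.shuffleId, blocking = false))
        queue.enqueue(shuffle.rdd)
      case dep =>
        queue.enqueue(dep.rdd)
    }

    // Breadth-first walk over the old lineage.
    visit(rootDeps)
    while (queue.nonEmpty) {
      val current = queue.dequeue()
      if (visited.add(current)) {
        visit(current.dependencies)
      }
    }
  }
}
```

The dangerous part is the case marked above: nothing in Core can tell whether 
some other live RDD still reads one of those shuffles, which is why keeping 
this next to ALS, where the full lineage is known, seems safer.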

> Clear shuffle files after checkpointing in ALS
> ----------------------------------------------
>
>                 Key: SPARK-6717
>                 URL: https://issues.apache.org/jira/browse/SPARK-6717
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 1.4.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>              Labels: als
>
> In ALS iterations, we checkpoint RDDs to cut lineage and to reduce shuffle 
> files. However, whether shuffle files get cleaned up depends on the JVM GC, 
> which may not be triggered during ALS iterations. So after checkpointing, 
> before we let the RDD object go out of scope, we should clean its shuffle 
> dependencies explicitly. This function could either stay inside ALS or go 
> into Core.
>
> Without this feature, the workaround is to call System.gc() periodically to 
> clean the shuffle files of RDDs that have gone out of scope.
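For reference, a minimal sketch of that periodic-GC workaround on the driver 
(the 30-second interval is an arbitrary choice, not a recommendation):

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Periodically force a driver-side GC so the ContextCleaner's weak
// references fire and it removes the shuffle files of RDDs that are no
// longer reachable.
val gcExecutor = Executors.newSingleThreadScheduledExecutor()
gcExecutor.scheduleAtFixedRate(new Runnable {
  override def run(): Unit = System.gc()
}, 30L, 30L, TimeUnit.SECONDS)
```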


