I can't find anything related to this on the Configuration page
(http://spark.apache.org/docs/1.2.0/configuration.html). You could probably
open a JIRA issue regarding this.
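
In the meantime, one workaround you could try: Spark's ContextCleaner deletes
shuffle files once the RDDs (and their shuffle dependencies) that produced
them are garbage collected on the driver. So if you control the iteration
loop yourself, you can checkpoint and unpersist the intermediate RDDs so that
the shuffle data from earlier iterations becomes eligible for cleanup. Below
is a rough sketch only; it assumes a hand-rolled iterative job rather than
MLlib's ALS internals, the per-iteration step() function is just a
placeholder, and the checkpoint directory is whatever HDFS path you have
available.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel

object ShuffleCleanupSketch {
  // placeholder for whatever shuffle-producing work one iteration does
  def step(rdd: RDD[(Int, Double)]): RDD[(Int, Double)] =
    rdd.reduceByKey(_ + _)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("shuffle-cleanup-sketch"))
    sc.setCheckpointDir("hdfs:///tmp/checkpoints")  // assumed directory

    var current: RDD[(Int, Double)] =
      sc.parallelize(1 to 1000000).map(i => (i % 1000, i.toDouble))

    for (i <- 1 to 30) {
      val next = step(current).persist(StorageLevel.MEMORY_AND_DISK)
      if (i % 5 == 0) {
        next.checkpoint()  // truncate the lineage every few iterations
        next.count()       // force the checkpoint to materialize
      }
      current.unpersist(blocking = false)  // previous iteration no longer needed
      current = next
      // once the dropped RDDs and their shuffle dependencies are garbage
      // collected on the driver, ContextCleaner removes their shuffle files
    }

    println(current.count())
    sc.stop()
  }
}

Whether something similar can be done inside ALS itself depends on its
implementation, so filing the JIRA is still worthwhile.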

Thanks
Best Regards

On Tue, Mar 3, 2015 at 12:03 PM, lisendong <lisend...@163.com> wrote:

> I'm using Spark ALS.
>
> I set the iteration number to 30.
>
> In each iteration, the tasks produce nearly 1 TB of shuffle write.
>
> To my surprise, this shuffle data is not cleaned up until the whole job
> finishes, which means I need 30 TB of disk to store the shuffle data.
>
>
> I think that after each iteration, the shuffle data from the previous
> iterations could be deleted, right?
>
> How can I do this?
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/how-to-clean-shuffle-write-each-iteration-tp21886.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
