https://issues.apache.org/jira/browse/SPARK-10975

On Wed, Oct 7, 2015 at 11:36 AM, Iulian Dragoș <iulian.dra...@typesafe.com>
wrote:

> It is indeed a bug. I believe the shutdown procedure in #7820 only kicks
> in when the external shuffle service is enabled (a prerequisite of dynamic
> allocation). As a workaround you can enable dynamic allocation and set
> spark.dynamicAllocation.maxExecutors and
> spark.dynamicAllocation.minExecutors to the same value, so the number of
> executors stays fixed.
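>
> Roughly like this, as a sketch only (the master URL, the executor count
> of 10, and the class and jar names are placeholders, and the external
> shuffle service still has to be running on each agent):
>
>   spark-submit \
>     --master mesos://zk://host:2181/mesos \
>     --conf spark.shuffle.service.enabled=true \
>     --conf spark.dynamicAllocation.enabled=true \
>     --conf spark.dynamicAllocation.minExecutors=10 \
>     --conf spark.dynamicAllocation.maxExecutors=10 \
>     --class com.example.MyApp my-app.jar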
>
> I'll file a Jira ticket.
>
> On Wed, Oct 7, 2015 at 10:14 AM, Alexei Bakanov <russ...@gmail.com> wrote:
>
>> Hi
>>
>> I'm running Spark 1.5.1 on Mesos in coarse-grained mode, with no dynamic
>> allocation or shuffle service. I see two types of temporary directories
>> under /tmp associated with every executor: /tmp/spark-<UUID> and
>> /tmp/blockmgr-<UUID>. When the job finishes, /tmp/spark-<UUID> is removed,
>> but the blockmgr directory is left behind with all its gigabytes of data.
>> In Spark 1.4.1 the blockmgr-<UUID> folder lived under /tmp/spark-<UUID>,
>> so removing /tmp/spark-<UUID> took blockmgr with it.
>> Is this a bug in 1.5.1?
>>
>> By the way, in fine-grained mode the /tmp/spark-<UUID> folder does not get
>> removed in either 1.4.1 or 1.5.1, for some reason.
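>>
>> As a stopgap, would it be reasonable to sweep the leftover directories
>> off the slaves with a cron job, something like this (assuming nothing
>> older than a day is still in use and the job may delete those dirs)?
>>
>>   # remove block manager dirs in /tmp untouched for more than 24 hours
>>   find /tmp -maxdepth 1 -type d -name 'blockmgr-*' -mmin +1440 -exec rm -rf {} +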
>>
>> Thanks,
>> Alexei
>>
>


--
Iulian Dragos

------
Reactive Apps on the JVM
www.typesafe.com
