Is this a new issue?
What version of Spark?
What version of Mesos/libmesos?
Can you run the job with debug logging turned on and attach the output?
Do you see the corresponding message in the Mesos master log indicating that
it received the teardown?
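For reference, one way to turn on debug logging on the driver side is to base
conf/log4j.properties on the template Spark ships with and raise the root log
level (a sketch assuming a stock Spark layout; DEBUG output is very verbose):

```
# conf/log4j.properties — copied from conf/log4j.properties.template,
# with the root level raised from INFO to DEBUG
log4j.rootCategory=DEBUG, console
```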

On Tue, Aug 9, 2016 at 1:28 AM, Todd Leo <todd.f....@gmail.com> wrote:

> Hi,
>
> I’m running Spark jobs on Mesos. When the job finishes, *SparkContext* is
> manually closed by sc.stop(). Then Mesos log shows:
>
> I0809 15:48:34.132014 11020 sched.cpp:1589] Asked to stop the driver
> I0809 15:48:34.132181 11277 sched.cpp:831] Stopping framework 
> '20160808-170425-2365980426-5050-4372-0034'
>
> However, the process never exits. This is critical, because I’d like to use
> SparkLauncher to submit such jobs. If a job’s process doesn’t end, jobs
> will pile up and exhaust memory. Please help. :-|
>
> —
> BR,
> Todd Leo
>
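As context for the SparkLauncher use case above: rather than blocking on the
launched process, the handle-based API can watch for the job reaching a final
state. Here is a minimal sketch using SparkAppHandle (available since Spark
1.6); the jar path, main class, and master URL are placeholders, and Spark's
launcher library must be on the classpath:

```java
import java.util.concurrent.CountDownLatch;

import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class LaunchExample {
    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);

        SparkAppHandle handle = new SparkLauncher()
            .setAppResource("/path/to/app.jar")     // placeholder jar
            .setMainClass("com.example.MyJob")      // placeholder class
            .setMaster("mesos://master:5050")       // placeholder master URL
            .startApplication(new SparkAppHandle.Listener() {
                @Override
                public void stateChanged(SparkAppHandle h) {
                    // FINISHED, FAILED, KILLED, and LOST are all final states
                    if (h.getState().isFinal()) {
                        done.countDown();
                    }
                }

                @Override
                public void infoChanged(SparkAppHandle h) { }
            });

        // Wait for the application to reach a final state instead of
        // polling the child process, so finished jobs don't pile up.
        done.await();
        System.out.println("Final state: " + handle.getState());
    }
}
```

If the driver process itself still lingers after sc.stop(), checking for
non-daemon threads in a thread dump (jstack) of the stuck driver would show
what is keeping the JVM alive.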



-- 
Michael Gummelt
Software Engineer
Mesosphere
