But coarse-grained does the exact same thing that I am trying to avoid
here.  In exchange for lower startup overhead, it keeps the resources
reserved for the entire duration of the job.
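
For context, a minimal sketch of the toggle under discussion, using
property names from the Spark 1.6 Mesos docs (the master URL and core
cap below are placeholders):

  import org.apache.spark.{SparkConf, SparkContext}

  // Coarse-grained (spark.mesos.coarse=true): one long-lived Mesos task
  // per executor; the reserved cores are held for the whole job, giving
  // low task-launch overhead but no release of idle CPUs.
  // Fine-grained (the 1.6 default): each Spark task runs as its own
  // Mesos task, so cores can be returned as work finishes, at the cost
  // of per-task launch overhead.
  val conf = new SparkConf()
    .setAppName("AtoB")
    .setMaster("mesos://zk://host:2181/mesos")  // placeholder master URL
    .set("spark.mesos.coarse", "true")          // "false" for fine-grained
    .set("spark.cores.max", "48")               // placeholder core cap
  val sc = new SparkContext(conf)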

Regards
Sumit Chawla


On Mon, Dec 19, 2016 at 10:06 AM, Michael Gummelt <mgumm...@mesosphere.io>
wrote:

> Hi
>
> I don't have a lot of experience with the fine-grained scheduler.  It's
> deprecated and fairly old now.  CPUs should be relinquished as tasks
> complete, so I'm not sure why you're seeing what you're seeing.  There have
> been a few discussions on the Spark list regarding deprecating the
> fine-grained scheduler, and no one seemed too dead-set on keeping it.  I'd
> recommend you move over to coarse-grained.
>
> On Fri, Dec 16, 2016 at 8:41 AM, Chawla,Sumit <sumitkcha...@gmail.com>
> wrote:
>
>> Hi
>>
>> I am using Spark 1.6, and I have a question about the fine-grained
>> model in Spark.  I have a simple Spark application that transforms
>> A -> B.  It is a single-stage application that starts with 48
>> partitions.  When the program starts running, the Mesos UI shows 48
>> tasks and 48 CPUs allocated to the job.  As tasks finish, the number
>> of active tasks decreases.  However, the number of CPUs does not
>> decrease proportionally.  When the job was about to finish, a single
>> task remained, yet the CPU count was still 20.
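>>
>> As a minimal sketch, the job looks roughly like this (the paths and
>> the transform are placeholders, not the real application code):
>>
>>   import org.apache.spark.{SparkConf, SparkContext}
>>
>>   val sc = new SparkContext(new SparkConf().setAppName("AtoB"))
>>   // reading A with 48 partitions yields a single stage of 48 tasks
>>   val a = sc.textFile("hdfs:///data/A", minPartitions = 48)
>>   // a narrow map keeps it one stage (no shuffle): A -> B
>>   val b = a.map(record => record.toUpperCase)  // placeholder transform
>>   b.saveAsTextFile("hdfs:///data/B")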
>>
>> My question is: why is there no one-to-one mapping between tasks and
>> CPUs in fine-grained mode?  How can these CPUs be released when the
>> job is done, so that other jobs can start?
>>
>>
>> Regards
>> Sumit Chawla
>>
>>
>
>
> --
> Michael Gummelt
> Software Engineer
> Mesosphere
>
