I think what you are looking for is dynamic resource allocation: 
http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
 


Spark provides a mechanism to dynamically adjust the resources your application 
occupies based on the workload. This means that your application may give 
resources back to the cluster if they are no longer used and request them again 
later when there is demand. This feature is particularly useful if multiple 
applications share resources in your Spark cluster. 
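
For example (a minimal sketch; the executor bounds and idle timeout below are 
illustrative values, not recommendations, and on Mesos dynamic allocation also 
requires the external shuffle service to be running on each agent): 

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("dynamic-allocation-example")
      // Let Spark grow and shrink the executor set with the workload.
      .set("spark.dynamicAllocation.enabled", "true")
      // Required by dynamic allocation, so shuffle files survive
      // executor removal.
      .set("spark.shuffle.service.enabled", "true")
      // Illustrative bounds; tune for your cluster.
      .set("spark.dynamicAllocation.minExecutors", "1")
      .set("spark.dynamicAllocation.maxExecutors", "48")
      // Release an executor after it has been idle this long.
      .set("spark.dynamicAllocation.executorIdleTimeout", "60s")

    val sc = new SparkContext(conf)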

----- Original Message ----- 
From: "Sumit Chawla" <sumitkcha...@gmail.com> 
To: "Michael Gummelt" <mgumm...@mesosphere.io> 
Cc: u...@mesos.apache.org, "Dev" <d...@mesos.apache.org>, "User" 
<u...@spark.apache.org>, "dev" <dev@spark.apache.org> 
Sent: Monday, December 19, 2016, 19:35:51 GMT +01:00 Amsterdam / Berlin / Bern / 
Rome / Stockholm / Vienna 
Subject: Re: Mesos Spark Fine Grained Execution - CPU count 


But coarse-grained does the exact same thing that I am trying to avoid here. 
In exchange for lower startup overhead, it keeps the resources reserved for the 
entire duration of the job. 



Regards 
Sumit Chawla 



On Mon, Dec 19, 2016 at 10:06 AM, Michael Gummelt <mgumm...@mesosphere.io> 
wrote: 




Hi 

I don't have a lot of experience with the fine-grained scheduler. It's 
deprecated and fairly old now. CPUs should be relinquished as tasks complete, 
so I'm not sure why you're seeing what you're seeing. There have been a few 
discussions on the Spark list regarding deprecating the fine-grained scheduler, 
and no one seemed too dead-set on keeping it. I'd recommend you move over to 
coarse-grained. 
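
If it helps, switching is essentially a one-line config change (a minimal 
sketch; the master URL is a placeholder and the core cap is an illustrative 
value): 

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("coarse-grained-example")
      .setMaster("mesos://zk://host:2181/mesos")  // placeholder master URL
      // Use the coarse-grained Mesos scheduler instead of fine-grained.
      .set("spark.mesos.coarse", "true")
      // Cap the total cores this app may hold so it doesn't starve others.
      .set("spark.cores.max", "48")

    val sc = new SparkContext(conf)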





On Fri, Dec 16, 2016 at 8:41 AM, Chawla, Sumit <sumitkcha...@gmail.com> wrote: 



Hi 


I am using Spark 1.6. I have one question about the fine-grained model in 
Spark. I have a simple Spark application which transforms A -> B. It's a 
single-stage application that starts with 48 partitions. When the program 
starts running, the Mesos UI shows 48 tasks and 48 CPUs allocated to the job. 
As the tasks get done, the number of active tasks starts decreasing. However, 
the number of CPUs does not decrease proportionally. When the job was about to 
finish, there was a single remaining task, yet the CPU count was still 20. 
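
For reference, the job is essentially of this shape (a minimal sketch; the 
input data, transform, and output path are made-up placeholders): 

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("a-to-b"))
    // Single map stage over 48 partitions; no shuffle involved.
    val a = sc.parallelize(1 to 1000000, numSlices = 48)
    val b = a.map(_ * 2)               // A -> B (placeholder transform)
    b.saveAsTextFile("/tmp/b-output")  // placeholder output path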


My question is: why is there no one-to-one mapping between tasks and CPUs in 
fine-grained mode? How can these CPUs be released when the job is done, so that 
other jobs can start? 






Regards 
Sumit Chawla 




-- 
Michael Gummelt 
Software Engineer 
Mesosphere 
