Correct. It's just that with coarse-grained mode we grab the resources up front, so 
they're either available or not. Using resources on demand, as with fine-grained 
mode, means the potential to starve out an individual job. There is also the 
sharing of RDDs that coarse-grained mode gives you, which would need something 
like Tachyon to achieve in fine-grained mode.
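(For context, the mode being discussed is toggled with the `spark.mesos.coarse` 
property. A minimal sketch of submitting the same job in each mode, assuming a 
hypothetical Mesos master address and jar name:)

```shell
# Fine-grained mode: resources are acquired from Mesos per task,
# which can starve an individual job on an overloaded cluster.
spark-submit \
  --master mesos://mesos-master.example.com:5050 \
  --conf spark.mesos.coarse=false \
  my-job.jar

# Coarse-grained mode: long-lived executors grab their resources up
# front, so they are either available or not, and cached RDDs stay
# resident for the life of the job.
spark-submit \
  --master mesos://mesos-master.example.com:5050 \
  --conf spark.mesos.coarse=true \
  my-job.jar
```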


From: Timothy Chen <tnac...@gmail.com<mailto:tnac...@gmail.com>>
Date: Wednesday, November 4, 2015 at 11:05 AM
To: "Heller, Chris" <chel...@akamai.com<mailto:chel...@akamai.com>>
Cc: Reynold Xin <r...@databricks.com<mailto:r...@databricks.com>>, 
"dev@spark.apache.org<mailto:dev@spark.apache.org>" 
<dev@spark.apache.org<mailto:dev@spark.apache.org>>
Subject: Re: Please reply if you use Mesos fine grained mode

Hi Chris,

How does coarse-grained mode give you less starvation in your overloaded 
cluster? Is it just because it allocates all resources at once (which I think 
in an overloaded cluster allows fewer things to run at once)?

Tim


On Nov 4, 2015, at 4:21 AM, Heller, Chris 
<chel...@akamai.com<mailto:chel...@akamai.com>> wrote:

We’ve been making use of both. Fine-grained mode makes sense for more ad-hoc 
workloads, and coarse-grained for more job-like loads on a common data set. My 
preference is fine-grained mode in all cases, but the overhead associated with 
its startup and the possibility that an overloaded cluster would be starved for 
resources make coarse-grained mode a reality at the moment.

On Wednesday, 4 November 2015 5:24 AM, Reynold Xin 
<r...@databricks.com<mailto:r...@databricks.com>> wrote:


If you are using Spark with Mesos fine grained mode, can you please respond to 
this email explaining why you use it over the coarse grained mode?

Thanks.
