On Mon, Dec 19, 2016 at 2:45 PM, Mehdi Meziane <mehdi.mezi...@ldmobile.net> wrote:
> We will be interested by the results if you give a try to Dynamic
> allocation with mesos!
>> ... a need for Fine grain mode after we enabled dynamic allocation
>> support on the coarse grain mode.
>>
>> What's the reason you're running fine grain mode?
----- Original Message -----
From: "Michael Gummelt" <mgumm...@mesosphere.io>
To: "Sumit Chawla" <sumitkcha...@gmail.com>
Cc: u...@mesos.apache.org, d...@mesos.apache.org, "User" <u...@spark.apache.org>, dev@spark.apache.org
Sent: Monday, 19 December 2016, 22:42:55 GMT +01:00 Amsterdam / Berlin / Berne / Rome / Stockholm / Vienne
Subject: Re: Mesos Spark Fine Grained Execution - CPU count
> Is this problem of idle executors sticking around solved by Dynamic
> Resource Allocation? Is there some timeout after which idle executors
> can just shut down and clean up their resources?
Yes, that's exactly what dynamic allocation does. But again, I have no idea
what the state of dynamic allocation on Mesos is.
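For reference, the idle timeout asked about above is configurable. A sketch of the relevant settings, assuming a `spark-defaults.conf` setup (property names are from the Spark configuration documentation; the values shown are illustrative, not recommendations):

```properties
# Release executors that have been idle longer than the timeout
spark.dynamicAllocation.enabled              true
spark.dynamicAllocation.executorIdleTimeout  60s
# Dynamic allocation requires the external shuffle service
spark.shuffle.service.enabled                true
```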
Great, that makes much better sense now. What would be the reason to set
spark.mesos.mesosExecutor.cores to more than 1, given that this number
doesn't include the cores used by tasks?
So in my case it seems like 30 CPUs are allocated to executors, and there
are 48 tasks, so 48 + 30 = 78 CPUs. And I am ...
That makes sense. From the documentation it looks like the executors are
not supposed to terminate:

http://spark.apache.org/docs/latest/running-on-mesos.html#fine-grained-deprecated

> Note that while Spark tasks in fine-grained will relinquish cores as they
> terminate, they will not relinquish ...
> I should presume that the number of executors should be less than the
> number of tasks.

No. Each executor runs 0 or more tasks.

Each executor consumes 1 CPU, and each task running on that executor
consumes another CPU. You can customize this via
spark.mesos.mesosExecutor.cores (...)
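To make the accounting above concrete, here is a small sketch using the numbers from this thread (30 executor CPUs, 48 running tasks); the 1-CPU-per-executor and 1-CPU-per-task values are the defaults named above, and the executor count is an assumption inferred from the thread:

```python
# Fine-grained mode CPU accounting (illustrative values from this thread)
mesos_executor_cores = 1   # spark.mesos.mesosExecutor.cores: CPUs held by each executor
task_cpus = 1              # spark.task.cpus: CPUs per running task
num_executors = 30         # assumed: one executor per agent with active tasks
running_tasks = 48         # one task per partition in the single stage

total = num_executors * mesos_executor_cores + running_tasks * task_cpus
print(total)  # 78 CPUs allocated while all 48 tasks are running
```

This matches the 48 + 30 = 78 figure observed earlier in the thread: the 30 "extra" CPUs are held by the executors themselves, not by tasks.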
Hi Chawla,

One possible reason is that Mesos fine grain mode also takes up cores to
run the executor per host, so if you have 20 agents running the
fine-grained executor, it will take up 20 cores while it's still running.

Tim
On Fri, Dec 16, 2016 at 8:41 AM, Chawla, Sumit <sumitkcha...@gmail.com> wrote:
From: "Michael Gummelt" <mgumm...@mesosphere.io>
Cc: u...@mesos.apache.org, "Dev" <d...@mesos.apache.org>, "User" <u...@spark.apache.org>, "dev" <dev@spark.apache.org>
Sent: Monday, 19 December 2016, 19:35:51 GMT +01:00 Amsterdam / Berlin / Berne / Rome / Stockholm / Vienne
But coarse-grained mode does the exact same thing I am trying to avert
here: in exchange for lower startup latency, it keeps the resources
reserved for the entire duration of the job.

Regards,
Sumit Chawla
On Mon, Dec 19, 2016 at 10:06 AM, Michael Gummelt <mgumm...@mesosphere.io> wrote:
Hi,

I don't have a lot of experience with the fine-grained scheduler. It's
deprecated and fairly old now. CPUs should be relinquished as tasks
complete, so I'm not sure why you're seeing what you're seeing. There have
been a few discussions on the Spark list regarding deprecating the ...
Hi,

I am using Spark 1.6. I have one query about the fine-grained model in
Spark. I have a simple Spark application which transforms A -> B. It's a
single-stage application. To begin with, the program starts with 48
partitions. When the program starts running, the Mesos UI shows 48 tasks
and 48 CPUs ...
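For context, the setup described above corresponds roughly to running with fine-grained mode enabled. A hedged sketch of the relevant `spark-defaults.conf` entries (property names are from the Spark-on-Mesos documentation; the master URL is a placeholder, not from this thread):

```properties
# Placeholder master URL for a ZooKeeper-managed Mesos cluster
spark.master                     mesos://zk://zk1:2181/mesos
# false = fine-grained mode (deprecated); true = coarse-grained mode
spark.mesos.coarse               false
# CPUs held by each executor on top of per-task CPUs (default: 1)
spark.mesos.mesosExecutor.cores  1
# CPUs consumed by each running task (default: 1)
spark.task.cpus                  1
```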