Hi Jun,

Spark currently doesn't have that feature, i.e. it aims for a fixed number
of executors per application regardless of resource usage, but it's
definitely worth considering.  We could start more executors when we have a
large backlog of tasks and shut some down when we're underutilized.
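In rough terms, the policy I have in mind could look like this (a toy sketch with hypothetical names, not an existing Spark API):

```python
# Hypothetical backlog-based scaling policy -- a sketch only, not Spark code.

def target_executors(backlog_tasks, busy_executors, tasks_per_executor,
                     min_executors=1, max_executors=50):
    """Grow toward the backlog, shrink toward current utilization."""
    # Executors needed to drain the backlog in a single wave.
    needed = -(-backlog_tasks // tasks_per_executor)  # ceiling division
    # Never drop below what is actively in use, and respect the floor.
    target = max(busy_executors, needed, min_executors)
    # Clamp to the configured ceiling.
    return min(target, max_executors)
```

The real policy would also need hysteresis (e.g. only shut an executor down after it has been idle for some timeout) so we don't thrash on bursty workloads.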

The fine-grained task scheduling is blocked on work from YARN that will
allow changing the CPU allocation of a YARN container dynamically.  The
relevant JIRA for this dependency is YARN-1197, though YARN-1488 might
serve this purpose as well if it comes first.
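The primitive that work would give us is, roughly, the ability to resize a running container in place instead of tearing it down and reallocating. A toy model of that idea (hypothetical names, not the actual YARN API):

```python
# Toy model of the missing primitive: growing/shrinking a running
# container's CPU allocation in place. Purely illustrative.

class Container:
    def __init__(self, cores, node_free_cores):
        self.cores = cores                    # cores currently allocated
        self.node_free_cores = node_free_cores  # headroom on the host node

    def resize_cores(self, new_cores):
        """Change the allocation if the node has headroom for the delta."""
        delta = new_cores - self.cores
        if delta > self.node_free_cores:
            raise RuntimeError("node lacks free cores for the increase")
        self.node_free_cores -= delta  # shrinking returns cores to the node
        self.cores = new_cores
        return self.cores
```

Without something like this, the only way to change an application's footprint is at container granularity, which is what Jun's suggestion of adding and deleting whole containers works around.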

-Sandy


On Thu, Aug 7, 2014 at 10:56 PM, Jun Feng Liu <[email protected]> wrote:

> Thanks for the response. Would it be possible to adjust resources by
> changing the number of containers? E.g., allocate more containers when the
> driver needs more resources, and return resources by deleting containers
> once parts of the job already have enough cores/memory.
>
> Best Regards
>
>
> *Jun Feng Liu*
>
> IBM China Systems & Technology Laboratory in Beijing
>
>   ------------------------------
> *Phone: *86-10-82452683
> *E-mail:* [email protected]
>
> BLD 28, ZGC Software Park
> No. 8 Rd. Dong Bei Wang West, Dist. Haidian, Beijing 100193
> China
>
>
>
>
>
>  *Patrick Wendell* <[email protected]>
>
> 2014/08/08 13:10
>   To
> Jun Feng Liu/China/IBM@IBMCN,
> cc
> "[email protected]" <[email protected]>
> Subject
> Re: Fine-Grained Scheduler on Yarn
>
>
>
>
> Hey sorry about that - what I said was the opposite of what is true.
>
> The current YARN mode is equivalent to "coarse grained" mesos. There is no
> fine-grained scheduling on YARN at the moment. I'm not sure YARN supports
> scheduling in units other than containers. Fine-grained scheduling requires
> scheduling at the granularity of individual cores.
>
>
> On Thu, Aug 7, 2014 at 9:43 PM, Patrick Wendell <[email protected]>
> wrote:
> The current YARN is equivalent to what is called "fine grained" mode in
> Mesos. The scheduling of tasks happens totally inside of the Spark driver.
>
>
> On Thu, Aug 7, 2014 at 7:50 PM, Jun Feng Liu <[email protected]>
> wrote:
> Anyone know the answer?
> Best Regards
>
>
>   *Jun Feng Liu/China/IBM*
>
> 2014/08/07 15:37
>
>   To
> [email protected],
> cc
>   Subject
> Fine-Grained Scheduler on Yarn
>
>
>
>
>
> Hi there,
>
> I just noticed that Spark currently supports a fine-grained scheduler only
> on Mesos, via MesosSchedulerBackend. The YARN scheduler appears to work
> only in coarse-grained mode. Is there any plan to implement a fine-grained
> scheduler for YARN? Or is there a technical issue blocking us from doing
> that?
>
> Best Regards
>
>
