Hi Veljko,

I usually ask the following questions: "How much memory per task?" and then "How 
many CPU cores per task?" — then I calculate the executor layout from those 
per-task memory and CPU requirements. You might be surprised (maybe not you, but 
I certainly was :) ) how many OOM issues actually come down to this. 
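To make that concrete, here is a rough back-of-the-envelope sketch of the calculation I mean, using Veljko's machine specs (64 GB, 32 cores). The per-task memory figure and the overhead fraction are hypothetical placeholders — you'd measure those from your own job:

```python
# Back-of-the-envelope executor sizing. The per-task numbers are
# hypothetical -- measure your own job before trusting them.

machine_mem_gb = 64        # total RAM per machine (from Veljko's setup)
machine_cores = 32         # cores per machine
mem_per_task_gb = 1.5      # hypothetical: observed peak memory per task
overhead_frac = 0.10       # hypothetical headroom for JVM/off-heap overhead

# With one executor per machine, each core runs one task concurrently,
# so peak memory is roughly cores * memory-per-task, plus overhead.
needed_gb = machine_cores * mem_per_task_gb * (1 + overhead_frac)

if needed_gb > machine_mem_gb:
    # Not enough RAM for that many concurrent tasks: cap concurrency.
    max_tasks = int(machine_mem_gb / (mem_per_task_gb * (1 + overhead_frac)))
    print(f"cap concurrent tasks per machine at {max_tasks}")
else:
    print(f"one {machine_cores}-core executor fits in about {needed_gb:.1f} GB")
```

If the required memory exceeds what the machine has, the fix is usually to reduce concurrent tasks per machine (or add memory), not to add more executors per machine.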

Best Regards,

Jerry

> On Dec 15, 2015, at 5:18 PM, Jakob Odersky <joder...@gmail.com> wrote:
> 
> Hi Veljko,
> I would assume that keeping the number of executors per machine to a minimum is 
> best for performance (as long as you account for memory requirements as well).
> Each executor is a process that can run tasks in multiple threads. At the 
> kernel/hardware level, thread switches are much cheaper than process switches, 
> so a single executor with many threads gives better overall performance 
> than multiple executors with fewer threads each.
> 
> --Jakob
> 
> On 15 December 2015 at 13:07, Veljko Skarich <veljko.skar...@gmail.com> wrote:
> Hi, 
> 
> I'm looking for suggestions on the ideal number of executors per machine. I 
> run my jobs on 64G 32 core machines, and at the moment I have one executor 
> running per machine, on the spark standalone cluster.
> 
> I could not find many guidelines for figuring out the ideal number of 
> executors; the official Spark documentation merely recommends not having more 
> than 64G per executor to avoid GC issues. Does anyone have any advice on this?
> 
> thank you. 
> 