I don't think it has anything to do with using all the cores, since a
single executor can run as many tasks as you like. Yes, in this case you'd
want that one executor to request all of the machine's cores. YARN vs.
Mesos does not matter here.
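
As a minimal sketch (assuming the 64 GB / 32-core standalone workers
described below, and leaving some memory headroom for the OS and the worker
daemon), the one-big-executor setup might look like this on the driver
side; the exact core and memory values are assumptions, not recommendations
from this thread:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: one executor per 64 GB / 32-core standalone worker.
    val conf = new SparkConf()
      .setAppName("one-executor-per-node")
      // In standalone mode an executor claims all of a worker's cores by
      // default; setting spark.executor.cores just makes the intent explicit.
      .set("spark.executor.cores", "32")
      // Assumed heap size, leaving room for the OS and the worker daemon.
      .set("spark.executor.memory", "48g")

    val sc = new SparkContext(conf)

With one executor holding all 32 cores it can still run 32 tasks
concurrently, so there is no loss of parallelism.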

On Wed, Dec 16, 2015 at 1:58 PM, Michael Segel
<michael_se...@hotmail.com> wrote:
> Hmmm.
> This would go against the grain.
>
> I have to ask how you came to that conclusion…
>
> There are a lot of factors…  e.g. Yarn vs Mesos?
>
> What you’re suggesting would mean a loss of parallelism.
>
>
>> On Dec 16, 2015, at 12:22 AM, Sean Owen <so...@cloudera.com> wrote:
>>
>> One executor per machine is the right number. If you are running very large
>> heaps (>64GB) you may consider multiple executors per machine just to keep
>> each executor's GC pauses from becoming excessive, but even that might be
>> better mitigated with GC tuning (see the GC-tuning sketch after the quoted
>> thread below).
>>
>> On Tue, Dec 15, 2015 at 9:07 PM, Veljko Skarich
>> <veljko.skar...@gmail.com> wrote:
>>> Hi,
>>>
>>> I'm looking for suggestions on the ideal number of executors per machine. I
>>> run my jobs on 64 GB, 32-core machines, and at the moment I have one
>>> executor running per machine on a Spark standalone cluster.
>>>
>>> I could not find many guidelines for figuring out the ideal number of
>>> executors; the Spark official documentation merely recommends not having
>>> more than 64G per executor to avoid GC issues. Anyone have any advice on
>>> this?
>>>
>>> thank you.
>>
>
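
On the GC-tuning point above: a minimal sketch, assuming G1 GC is acceptable
for these executors and that roughly 48g of each 64 GB machine goes to the
executor heap; the pause target and logging flags are only starting points,
not settings anyone in this thread used:

    import org.apache.spark.SparkConf

    // Sketch: keep a single large executor per node, but tune its GC rather
    // than splitting it into several smaller executors.
    val conf = new SparkConf()
      .set("spark.executor.memory", "48g")  // assumed heap size
      .set("spark.executor.extraJavaOptions",
           "-XX:+UseG1GC -XX:MaxGCPauseMillis=200 " +
           "-verbose:gc -XX:+PrintGCDetails")

If GC tuning alone is not enough, the standalone-mode alternative is the one
quoted above: run more than one worker per machine (SPARK_WORKER_INSTANCES in
spark-env.sh) and give each worker's executor a proportionally smaller heap.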

