If we run with Mesos, how do we control the number of executors? In our
cluster, each node only has one executor with a very big JVM. Sometimes, if
that executor dies, all the concurrently running tasks are gone. We would
like to have multiple executors on one node but cannot figure out a way to
do it in Yarn.
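As a sketch of one way to get several smaller executors per node instead of
one large JVM (flag and variable names below are the standard Spark 1.x
ones; the values are only illustrative, not a recommendation):

```shell
# On YARN, executor count and size are explicit; YARN will pack
# multiple executors onto one node if the node has enough resources.
spark-submit \
  --master yarn \
  --num-executors 8 \
  --executor-cores 4 \
  --executor-memory 8g \
  my-app.jar

# In standalone mode, running multiple workers per node has a similar
# effect (set in conf/spark-env.sh on each node):
#   SPARK_WORKER_INSTANCES=2
#   SPARK_WORKER_CORES=4
#   SPARK_WORKER_MEMORY=8g
```

With several smaller executors, losing one JVM only kills the tasks on that
executor rather than everything running on the node.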

On Wednesday, May 27, 2015, Saisai Shao <sai.sai.s...@gmail.com> wrote:

> The driver has a heuristic mechanism to decide the number of executors at
> run time according to the pending tasks. You can enable it via
> configuration; refer to the Spark documentation for the details.
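
For reference, a minimal sketch of the configuration being referred to
(property names are the standard Spark dynamic-allocation settings; the
min/max values are only illustrative):

```shell
# Enable dynamic allocation on YARN; the external shuffle service is
# required so shuffle files survive executor removal.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --conf spark.shuffle.service.enabled=true \
  my-app.jar
```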
>
> 2015-05-27 15:00 GMT+08:00 canan chen <ccn...@gmail.com>:
>
>> How does dynamic allocation work? I mean, is it related to the
>> parallelism of my RDD, and how does the driver know how many executors it
>> needs?
>>
>> On Wed, May 27, 2015 at 2:49 PM, Saisai Shao <sai.sai.s...@gmail.com> wrote:
>>
>>> It depends on how you use Spark. If you run Spark on Yarn and enable
>>> dynamic allocation, the number of executors is not fixed; it will change
>>> dynamically according to the load.
>>>
>>> Thanks
>>> Jerry
>>>
>>> 2015-05-27 14:44 GMT+08:00 canan chen <ccn...@gmail.com>:
>>>
>>>> It seems the executor number is fixed in standalone mode; I'm not sure
>>>> about the other modes.
>>>>
>>>
>>>
>>
>
