Node labels for the AM are not yet supported in Spark; currently only
executors are supported.
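
For reference, a minimal sketch of the executor-side setting that does
exist (this assumes a YARN cluster where a node label such as "on_demand"
has already been defined and assigned to the non-spot nodes; the label
name, application class, and jar are placeholders):

```shell
# Pin executors to nodes carrying the "on_demand" label.
# Note this only constrains executors, not the application master.
spark-submit \
  --master yarn \
  --conf spark.yarn.executor.nodeLabelExpression=on_demand \
  --class org.example.MyApp \
  myapp.jar
```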

On Tue, Nov 17, 2015 at 7:57 AM, Ted Yu <yuzhih...@gmail.com> wrote:

> Wangda, YARN committer, told me that support for selecting which nodes the
> application master is running on is integrated to the upcoming hadoop 2.8.0
> release.
>
> Stay tuned.
>
> On Mon, Nov 16, 2015 at 1:36 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> There is no such configuration parameter for selecting which nodes the
>> application master is running on.
>>
>> Cheers
>>
>> On Mon, Nov 16, 2015 at 12:52 PM, Alex Rovner <alex.rov...@magnetic.com>
>> wrote:
>>
>>> I was wondering if there is an analogous configuration parameter to
>>> "spark.yarn.executor.nodeLabelExpression"
>>> that restricts which nodes the application master runs on.
>>>
>>> One of our clusters runs on AWS with a portion of the nodes being spot
>>> nodes. We would like to force the application master not to run on spot
>>> nodes. For whatever reason, the application master is not able to recover
>>> when the node it was running on suddenly disappears, which is the case
>>> with spot nodes.
>>>
>>> Any guidance on this topic is appreciated.
>>>
>>> *Alex Rovner*
>>> *Director, Data Engineering *
>>> *o:* 646.759.0052
>>>
>>> * <http://www.magnetic.com/>*
>>>
>>
>>
>
