I was wondering if there is an analogous configuration parameter to
"spark.yarn.executor.nodeLabelExpression" which restricts which nodes the
application master runs on.
One of our clusters runs on AWS with a portion of the nodes being spot
nodes. We would like to force the application master not to run on spot
nodes. For whatever reason, the application master is not able to recover
when the node it was running on suddenly disappears, which is the case with
spot nodes.
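For reference, a minimal sketch of the executor-side setting mentioned
above, assuming yarn-client mode and a hypothetical YARN node label
"ON_DEMAND" attached to the on-demand (non-spot) nodes; the app name and
label name are made up for illustration:

import org.apache.spark.{SparkConf, SparkContext}

object NodeLabelDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("node-label-demo")
      .setMaster("yarn-client") // Spark 1.x master string for client mode
      // Restricts executor placement to nodes carrying the given YARN
      // node label -- this is the parameter the question refers to.
      .set("spark.yarn.executor.nodeLabelExpression", "ON_DEMAND")
    val sc = new SparkContext(conf)
    // ... job code ...
    sc.stop()
  }
}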
There is no such configuration parameter for selecting which nodes the
application master runs on.

Cheers

On Mon, Nov 16, 2015 at 12:52 PM, Alex Rovner wrote:
> I was wondering if there is an analogous configuration parameter to
> "spark.yarn.executor.nodeLabelExpression" which restricts which nodes the
> application master runs on.
Wangda, a YARN committer, told me that support for selecting which nodes the
application master runs on is integrated into the upcoming Hadoop 2.8.0
release.
Stay tuned.
On Mon, Nov 16, 2015 at 1:36 PM, Ted Yu wrote:
> There is no such configuration parameter for selecting which nodes the
> application master runs on.
Node label for the AM is not yet supported by Spark; currently only the
executor side is supported.
On Tue, Nov 17, 2015 at 7:57 AM, Ted Yu wrote:
> Wangda, a YARN committer, told me that support for selecting which nodes
> the application master runs on is integrated into the upcoming Hadoop
> 2.8.0 release.
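For completeness, a forward-looking sketch of what the AM-side analogue
could look like once that support lands. The key
"spark.yarn.am.nodeLabelExpression" mirrors the executor-side key but is an
assumption at this point in the thread, not a setting Spark honors yet;
"ON_DEMAND" is the same hypothetical label as in the earlier sketch:

import org.apache.spark.{SparkConf, SparkContext}

object AmLabelDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("am-label-demo")
      .setMaster("yarn-client")
      // Assumed AM-side analogue: would keep the application master off
      // spot nodes once Spark supports it (unknown keys are ignored today).
      .set("spark.yarn.am.nodeLabelExpression", "ON_DEMAND")
      // Executor-side setting that already exists.
      .set("spark.yarn.executor.nodeLabelExpression", "ON_DEMAND")
    val sc = new SparkContext(conf)
    // ... job code ...
    sc.stop()
  }
}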