Hello all:

I have the following scenario.
- I have a cluster of 50 machines with Hadoop and Spark installed on them.
- I want to launch a single Spark application through spark-submit, but I
want it to run on only a subset of these machines (e.g. 10 of them),
disregarding data locality.

Is this possible? Is there any option in the standalone scheduler, YARN,
or Mesos that allows such a thing?
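
For concreteness, on YARN I imagine node labels might be the way to do
this. The sketch below is only a guess at what the invocation could look
like: "subset" is a hypothetical label I would assign to the 10 target
machines, and com.example.MyApp / myapp.jar are placeholders for the
real application.

    # Assumes the 10 target machines carry the (hypothetical) YARN node
    # label "subset"; the two nodeLabelExpression settings should then
    # pin both the application master and the executors to those machines.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.yarn.am.nodeLabelExpression=subset \
      --conf spark.yarn.executor.nodeLabelExpression=subset \
      --class com.example.MyApp \
      myapp.jar

On Mesos, I suppose spark.mesos.constraints could serve a similar
purpose if the target machines expose a distinguishing attribute, but I
have not tried it.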
