Hi Alvaro,
You can create separate clusters with the standalone cluster manager and
then manage each subset of machines by submitting the application to that
subset's master, as in the sketch below.
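
For example (the hostname, class name, and jar below are placeholders):
start a dedicated master, point only the 10 chosen workers at it, and
submit against it. The application can then only be scheduled on those
workers:

    # start a dedicated master for the subset
    ./sbin/start-master.sh --host subset-master

    # on each of the 10 chosen machines, register a worker with that master only
    ./sbin/start-slave.sh spark://subset-master:7077

    # submit against that master; the app can only run on its workers
    ./bin/spark-submit \
      --master spark://subset-master:7077 \
      --class com.example.MyApp \
      my-app.jar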
Or you can use Mesos attributes to mark a subset of workers and match on
them via spark.mesos.constraints; a sketch follows.
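
Roughly (the attribute name and master URL are made up here, and you should
check the agent flags for your Mesos version): tag the chosen agents with an
attribute at startup, then require that attribute at submit time.
spark.mesos.constraints takes attr:value pairs separated by ';', and the
driver only accepts offers from agents whose attributes match:

    # on each of the 10 chosen machines, start the Mesos agent with an attribute
    mesos-agent --master=mesos-master:5050 \
      --attributes="group:spark-subset" \
      --work_dir=/var/lib/mesos

    # only offers from agents carrying group:spark-subset are accepted
    ./bin/spark-submit \
      --master mesos://mesos-master:5050 \
      --conf spark.mesos.constraints="group:spark-subset" \
      --class com.example.MyApp \
      my-app.jar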


On Tue, Feb 7, 2017 at 1:21 PM Alvaro Brandon <alvarobran...@gmail.com>
wrote:

> Hello all:
>
> I have the following scenario.
> - I have a cluster of 50 machines with Hadoop and Spark installed on them.
> - I want to launch one Spark application through spark-submit. However, I
> want this application to run on only a subset of these machines (e.g. 10
> machines), disregarding data locality.
>
> Is this possible? Is there any option in the standalone scheduler, YARN,
> or Mesos that allows such a thing?
