You should look at using Mesos. It abstracts the individual hosts into a
single pool of resources, which makes the differing physical specifications
much easier to manage.
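For instance (just a sketch; the master URL, memory/core values, and job
script are placeholders), you would submit against the Mesos master and let
it place work on whichever hosts have free memory and cores:

spark-submit \
  --master mesos://mesos-master:5050 \
  --conf spark.executor.memory=8g \
  --conf spark.cores.max=60 \
  my_job.py

The larger hosts then simply end up getting offered (and running) more of
the work, without any per-host Spark configuration.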

I haven't tried configuring Spark Standalone mode with different specs
on different machines, but based on spark-env.sh.template:

# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")

it looks like you should be able to mix. (It's not clear to me whether
SPARK_WORKER_MEMORY is uniform across the cluster or applies only to the
machine where the config file resides.)
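If it is per machine, then as a sketch (the values below are made up for
your 36GB and 128GB hosts), each host's own spark-env.sh could declare what
that worker has to offer:

# spark-env.sh on a 36GB RAM / 10 CPU host
export SPARK_WORKER_CORES=10
export SPARK_WORKER_MEMORY=32g

# spark-env.sh on a 128GB RAM / 10 CPU host
export SPARK_WORKER_CORES=10
export SPARK_WORKER_MEMORY=120g

One caveat: within a single application the executors still all get the
same spark.executor.memory, so to actually use the extra RAM on the big
hosts you may also need to run multiple worker instances there
(SPARK_WORKER_INSTANCES in the same template), so that more than one
executor of an application can land on those machines.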

On Mon Jan 26 2015 at 8:07:51 AM Antony Mayi <antonym...@yahoo.com.invalid>
wrote:

> Hi,
>
> is it possible to mix hosts with (significantly) different specs within a
> cluster (without wasting the extra resources)? For example, having 10 nodes
> with 36GB RAM/10 CPUs and now adding 3 hosts with 128GB/10 CPUs - is there
> a way for Spark executors to utilize the extra memory (as my understanding
> is that all Spark executors must have the same memory)?
>
> thanks,
> Antony.
>
