You shouldn’t have any issues with differing nodes on the latest Ambari and 
Hortonworks releases. Mixed hardware works fine with Spark on YARN. 
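The key is that each NodeManager advertises its own capacity, so the big and 
small nodes can be configured independently. A minimal sketch of the per-host 
settings in yarn-site.xml (values illustrative; leave headroom for the OS and 
daemons):

  <!-- yarn-site.xml on a 36GB node -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>30720</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>10</value>
  </property>

  <!-- yarn-site.xml on a 128GB node -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>118784</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>10</value>
  </property>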

Simon

> On Jan 26, 2015, at 4:34 PM, Michael Segel <msegel_had...@hotmail.com> wrote:
> 
> If you’re running YARN, then you should be able to mix and match, with YARN 
> managing the resources available on each node. 
> 
> Having said that… it depends on which version of Hadoop/YARN. 
> 
> If you’re running Hortonworks and Ambari, then setting up multiple profiles 
> may not be straightforward. (I haven’t seen the latest version of Ambari.) 
> 
> So in theory, one profile would be for your smaller 36GB RAM machines, and one 
> profile for your 128GB machines. 
> Then as you request resources for your Spark job, YARN should schedule the 
> jobs based on the cluster’s available resources. 
> (At least in theory.  I haven’t tried this so YMMV) 
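> A rough sketch of the kind of request I mean (numbers and jar name 
> illustrative, untested): 
> 
>   spark-submit --master yarn-client \
>     --num-executors 20 \
>     --executor-memory 10g \
>     --executor-cores 2 \
>     your-app.jar
> 
> Every executor container is the same size, but YARN can pack several of them 
> onto a 128GB node while a 36GB node only takes a couple, so the extra memory 
> needn't be wasted. 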
> 
> HTH
> 
> -Mike
> 
> On Jan 26, 2015, at 4:25 PM, Antony Mayi <antonym...@yahoo.com.INVALID> wrote:
> 
>> Should have said I am running in yarn-client mode. All I can see is a way to 
>> specify a generic executor memory that is then used in all containers.
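>> i.e. the only knob I can find is something like this (value illustrative):
>> 
>>   # spark-defaults.conf
>>   spark.executor.memory   20g
>> 
>> which applies to every executor container alike.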
>> 
>> 
>> On Monday, 26 January 2015, 16:48, Charles Feduke <charles.fed...@gmail.com> wrote:
>> 
>> 
>> You should look at using Mesos. This should abstract away the individual 
>> hosts into a pool of resources and make the different physical 
>> specifications manageable.
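>> For example (master host and port illustrative):
>> 
>>   spark-submit --master mesos://mesos-master.example.com:5050 your-app.jar
>> 
>> Mesos offers each agent's actual CPU/memory to the framework, so 
>> heterogeneous hosts are handled naturally.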
>> 
>> I haven't tried configuring Spark standalone mode with different specs on 
>> different machines, but based on spark-env.sh.template:
>> 
>> # - SPARK_WORKER_CORES, to set the number of cores to use on this machine
>> # - SPARK_WORKER_MEMORY, to set how much total memory workers have to give
>> #   executors (e.g. 1000m, 2g)
>> # - SPARK_WORKER_OPTS, to set config properties only for the worker
>> #   (e.g. "-Dx=y")
>> It looks like you should be able to mix. (It's not clear to me whether 
>> SPARK_WORKER_MEMORY is uniform across the cluster or applies only to the 
>> machine where the config file resides.)
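>> A per-host sketch, assuming each worker reads the spark-env.sh on its own 
>> machine (values illustrative):
>> 
>>   # spark-env.sh on a 36GB host
>>   SPARK_WORKER_CORES=10
>>   SPARK_WORKER_MEMORY=30g
>> 
>>   # spark-env.sh on a 128GB host
>>   SPARK_WORKER_CORES=10
>>   SPARK_WORKER_MEMORY=120g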
>> 
>> On Mon Jan 26 2015 at 8:07:51 AM Antony Mayi <antonym...@yahoo.com.invalid> wrote:
>> Hi,
>> 
>> Is it possible to mix hosts with (significantly) different specs within a 
>> cluster (without wasting the extra resources)? For example, having 10 nodes 
>> with 36GB RAM/10 CPUs, I am now trying to add 3 hosts with 128GB/10 CPUs - is 
>> there a way for Spark executors to utilize the extra memory (as my 
>> understanding is that all Spark executors must have the same memory)?
>> 
>> thanks,
>> Antony.
>> 
>> 
> 
