Hi,

I have a few questions about my Spark Master and Slave setup:

I have 5 Hadoop nodes (n1 through n5), and at the moment I run Spark on these nodes:

        n1:    Hadoop Active Name Node,    Hadoop Slave,    Spark Active Master
        n2:    Hadoop Standby Name Node,   Hadoop Slave,    Spark Slave
        n3:                                Hadoop Slave,    Spark Slave
        n4:                                Hadoop Slave,    Spark Slave
        n5:                                Hadoop Slave,    Spark Slave

Questions:
Q1: If I set n1 as both Spark Master and Spark Slave, I cannot start the Spark
cluster. Does this mean that, unlike Hadoop, I cannot use the same machine as
both MASTER and SLAVE in Spark?

        n1:    Hadoop Active Name Node,    Hadoop Slave,    Spark Active Master,    Spark Slave    (failed to start Spark)
        n2:    Hadoop Standby Name Node,   Hadoop Slave,    Spark Slave
        n3:                                Hadoop Slave,    Spark Slave
        n4:                                Hadoop Slave,    Spark Slave
        n5:                                Hadoop Slave,    Spark Slave
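
For context, this is roughly how I would try to start both daemons on n1
(the install path is an assumption; a sketch based on a standard standalone
deployment, where the worker is pointed at the local master's URL):

```shell
# On n1: start the standalone master (listens on spark://n1:7077 by default).
/opt/spark/sbin/start-master.sh

# Also on n1: start a worker that registers with the local master.
# (Older Spark releases name this script start-slave.sh instead.)
/opt/spark/sbin/start-worker.sh spark://n1:7077
```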

Q2: I am planning Spark HA. What if I use n2 as both "Spark Standby Master and
Spark Slave"? Is Spark allowed to run a Standby Master and a Slave on the same
machine?

        n1:    Hadoop Active Name Node,    Hadoop Slave,    Spark Active Master
        n2:    Hadoop Standby Name Node,   Hadoop Slave,    Spark Standby Master,    Spark Slave
        n3:                                Hadoop Slave,    Spark Slave
        n4:                                Hadoop Slave,    Spark Slave
        n5:                                Hadoop Slave,    Spark Slave
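
For the standby master, my understanding is that standalone-mode HA is driven
by ZooKeeper-based recovery settings in conf/spark-env.sh on both master
nodes. A sketch of what I have in mind (the ZooKeeper quorum hosts and the
znode directory below are assumptions for my cluster):

```shell
# conf/spark-env.sh on n1 and n2 (both masters):
# enable ZooKeeper recovery so a standby master can take over on failure.
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=n1:2181,n2:2181,n3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```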

Q3: Does the Spark Master node do actual computation work like a worker, or is
it purely a coordination/monitoring node?

Regards
Arthur
