I think he wanted to ask about workers. In that case, please launch workers
on Nodes 3, 4, and 5 (and/or Nodes 8, 9, 10, etc.).
You need to go to each worker and start the worker daemon with the master's URL:Port
(typically 7077) as a parameter, so the workers can talk to the master.
You should then be able to see 1 master and N workers.
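As a rough sketch of the steps above (hostnames and SPARK_HOME are assumptions; adjust for your cluster, and note that older Spark releases name the worker script start-slave.sh instead of start-worker.sh):

```shell
# Hypothetical master host; substitute the machine running the Spark master.
MASTER_HOST="node7"
MASTER_URL="spark://${MASTER_HOST}:7077"   # 7077 is Spark's default master port

# On the master node:
#   $SPARK_HOME/sbin/start-master.sh
# On each worker node (Nodes 3, 4, 5):
#   $SPARK_HOME/sbin/start-worker.sh "$MASTER_URL"

echo "$MASTER_URL"
```

After the workers register, the master's web UI (port 8080 by default) lists one master and N alive workers.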
I'm assuming that by spark-client you mean the Spark driver program. In that
case you can pick any machine (say Node 7), create your driver program on
it, and use spark-submit to submit it to the cluster; or, if you create the
SparkContext within your driver program (specifying all the properties),
then
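A minimal sketch of the spark-submit route, assuming a standalone master on node7:7077; the class name and jar path are placeholders for your own application:

```shell
# Master URL and application artifacts (all hypothetical values).
MASTER_URL="spark://node7:7077"
APP_CLASS="com.example.MyApp"       # your driver's main class
APP_JAR="/path/to/my-app.jar"       # your packaged application

# On a real cluster you would run:
#   spark-submit --master "$MASTER_URL" --class "$APP_CLASS" "$APP_JAR"

echo "spark-submit --master $MASTER_URL --class $APP_CLASS $APP_JAR"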
I have the following Hadoop/Spark cluster node configuration:
Nodes 1 and 2 are the ResourceManager and NameNode respectively.
Nodes 3, 4, and 5 each run a NodeManager and a DataNode.
Node 7 is the Spark master, configured to run in yarn-client or yarn-cluster mode.
I have tested it and it works fine.
Is there any