Hi all,

I have a Spark cluster set up on YARN with 4 nodes (1 master and 3 slaves). When
I run an application, YARN picks, seemingly at random, one of the slaves to host
the Application Master. This means that my final computation is carried out on
only two slaves, which decreases the performance of the cluster.
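
For reference, I submit the application roughly like this (just a sketch; the
class name, jar, and resource settings below are placeholders, not my exact
values):

    spark-submit \
      --master yarn-cluster \
      --num-executors 3 \
      --executor-memory 2g \
      --class com.example.MyApp \
      myapp.jar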

1. Is this the correct way to configure it? What is the architecture of Spark
on YARN?
2. Is there a way to run the Spark master, the YARN application master, and the
resource manager on a single node, so that I can use the other three nodes for
the computation? A rough sketch of what I have in mind is below.
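
For question 2, what I have in mind is something along these lines in
yarn-site.xml, pinning the ResourceManager to the master node (again only a
sketch; "master-node" is a placeholder hostname):

    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>master-node</value>
    </property>

I am not sure whether the Application Master's placement can be controlled in
a similar way, or whether this is even the right approach.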

Thanks
Harika




