Hi Dhiraj,

Thanks for the clarification.
Yes, I have checked that both YARN daemons (NodeManager and ResourceManager)
are running on their respective nodes, and I can access the HDFS
directory structure from each node.

I am using Hadoop 2.7.2, and I downloaded the pre-built Spark package for
Hadoop 2.6 and later (the latest available version).

I have also confirmed that HADOOP_CONF_DIR points to the correct Hadoop
configuration directory (/etc/hadoop/<config files>).
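Concretely, I have it exported in conf/spark-env.sh along these lines (the
path below is just illustrative of my layout):

    # Point Spark at the Hadoop client configuration directory
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop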

Could you suggest whether any settings need to be added to the
spark-defaults.conf file?
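For reference, these are the kinds of entries I have been experimenting
with in spark-defaults.conf; the property names are from the Spark
configuration documentation, but the values are guesses on my part rather
than anything I know to be right for my setup:

    # spark-defaults.conf -- values below are placeholders, not recommendations
    spark.master              yarn-client
    spark.driver.memory       1g
    spark.executor.memory     1g
    spark.executor.instances  2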
I am also trying to understand the arguments that should be passed to
spark-submit in yarn-client mode, such as --executor-memory and
--driver-memory. Could you suggest possible values for those arguments
based on my VM specs as mentioned above?
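To make the question concrete, this is the shape of the command I am
submitting (here with the bundled SparkPi example); the memory and
executor values are arbitrary placeholders until I understand how to size
them for my VMs:

    spark-submit \
      --master yarn-client \
      --driver-memory 1g \
      --executor-memory 1g \
      --num-executors 2 \
      --class org.apache.spark.examples.SparkPi \
      lib/spark-examples-*.jar 10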




