Re: yarn does not accept job in cluster mode
Can you try running the spark-shell in yarn-cluster mode?

    ./bin/spark-shell --master yarn-client

Read more over here: http://spark.apache.org/docs/1.0.0/running-on-yarn.html

Thanks
Best Regards

On Sun, Sep 28, 2014 at 7:08 AM, jamborta <jambo...@gmail.com> wrote:
> hi all,
>
> I have a job that works ok in yarn-client mode, but when I try it in
> yarn-cluster mode it returns the following:
>
>     WARN YarnClusterScheduler: Initial job has not accepted any resources;
>     check your cluster UI to ensure that workers are registered and have
>     sufficient memory
>
> [...]
Re: yarn does not accept job in cluster mode
Thanks for the reply. As I mentioned above, everything works in yarn-client mode; the problem starts when I try to run it in yarn-cluster mode. (It seems that spark-shell does not work in yarn-cluster mode, so I cannot debug that way.)

On Mon, Sep 29, 2014 at 7:30 AM, Akhil Das <ak...@sigmoidanalytics.com> wrote:
> Can you try running the spark-shell in yarn-cluster mode?
>
>     ./bin/spark-shell --master yarn-client
>
> Read more over here: http://spark.apache.org/docs/1.0.0/running-on-yarn.html
>
> [...]
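[Editor's note: the "Initial job has not accepted any resources" warning frequently appears when a single requested executor container is larger than what any one YARN NodeManager can offer, even if the cluster as a whole has plenty of capacity. The sketch below illustrates that check; the request numbers mirror the settings in the original post, while the per-node limits are hypothetical assumptions, not values from the thread.]

```python
# Sanity-check sketch (not Spark code): does one executor container fit on a node?
# Request values mirror the original post; node capacities are assumed examples
# of yarn.nodemanager.resource.* limits.

requested = {"executor_cores": 8, "executor_memory_mb": 1024, "num_executors": 2}

# Hypothetical per-node YARN limits:
node = {"cores": 4, "memory_mb": 8192}

def fits(requested, node):
    """True if a single executor container can be scheduled on one node."""
    return (requested["executor_cores"] <= node["cores"]
            and requested["executor_memory_mb"] <= node["memory_mb"])

print(fits(requested, node))  # 8 cores requested > 4 available -> False
```

If `fits` is False for every node in the cluster, YARN can never grant the container and the scheduler keeps logging that warning; lowering `spark.executor.cores` (or raising the NodeManager limits) is the usual fix.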
yarn does not accept job in cluster mode
hi all,

I have a job that works ok in yarn-client mode, but when I try it in yarn-cluster mode it returns the following:

    WARN YarnClusterScheduler: Initial job has not accepted any resources;
    check your cluster UI to ensure that workers are registered and have
    sufficient memory

The cluster has plenty of memory and resources. I am running this from Python using this context:

    conf = (SparkConf()
            .setMaster("yarn-cluster")
            .setAppName("spark_tornado_server")
            .set("spark.executor.memory", "1024m")
            .set("spark.cores.max", "16")
            .set("spark.driver.memory", "1024m")
            .set("spark.executor.instances", "2")
            .set("spark.executor.cores", "8")
            .set("spark.eventLog.enabled", "false"))

HADOOP_HOME and HADOOP_CONF_DIR are also set in spark-env. Not sure if I am missing some config.

thanks,

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/yarn-does-not-accept-job-in-cluster-mode-tp15281.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
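[Editor's note: in the Spark 1.x era, setting the master to yarn-cluster programmatically inside the driver was, as far as I can tell, not honored for Python applications; the master and configuration were normally supplied to spark-submit at launch time. A minimal sketch of the equivalent submission command, built from the settings in the post; `app.py` is a hypothetical script name, not something from the thread.]

```python
# Sketch: the SparkConf settings from the post expressed as spark-submit flags.
conf = {
    "spark.executor.memory": "1024m",
    "spark.cores.max": "16",
    "spark.driver.memory": "1024m",
    "spark.executor.instances": "2",
    "spark.executor.cores": "8",
    "spark.eventLog.enabled": "false",
}

cmd = ["./bin/spark-submit", "--master", "yarn-cluster"]
for key, value in sorted(conf.items()):
    cmd += ["--conf", f"{key}={value}"]
cmd.append("app.py")  # hypothetical application script

print(" ".join(cmd))
```

Building the command this way keeps the application code free of deployment-specific settings, so the same script can run in yarn-client or yarn-cluster mode depending only on how it is submitted.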