Hi all,
The Spark job runs on YARN. I have tried not setting SPARK_LOCAL_IP at all, setting export SPARK_LOCAL_IP=localhost, and setting it to the specific node's IP, each in the Spark install directory on that node.

Submitting the Spark job on the master node of the cluster works well; however, submitting it remotely through a gateway machine fails.
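
For reference, the submit command from the gateway looks roughly like this (the class and jar names are placeholders, not my real application; I assume yarn-client mode, which matches the "Waiting for Spark driver to be reachable" message below):

# placeholder application, submitted in yarn-client mode from the gateway
$SPARK_HOME/bin/spark-submit \
    --master yarn-client \
    --class com.example.MyApp \
    my-app.jar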
The gateway machine is already configured and submits Hadoop jobs without any problem. It is set up as follows:
export SCALA_HOME=/usr/lib/scala
export JAVA_HOME=/usr/java/jdk1.7.0_45
export R_HOME=/usr/lib/r
export HADOOP_HOME=/usr/lib/hadoop
export YARN_CONF_DIR=/usr/lib/hadoop/etc/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export SPARK_MASTER_IP=master01
#export SPARK_LOCAL_IP=master01  #if SPARK_LOCAL_IP is not set, SparkContext will not start
export SPARK_LOCAL_IP=localhost  #if localhost is set, SparkContext starts, but fails later
export SPARK_LOCAL_DIRS=/data/spark_local_dir
...
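
I am also wondering whether, instead of SPARK_LOCAL_IP, I should point the driver at the gateway's routable address explicitly. A minimal sketch of what I have in mind, where gateway01 is a placeholder for the gateway's real hostname:

# hypothetical: gateway01 stands for the gateway's routable hostname
export SPARK_LOCAL_IP=gateway01
# or per job, via the standard spark.driver.host property:
$SPARK_HOME/bin/spark-submit --conf spark.driver.host=gateway01 ...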

The error messages:
15/09/25 19:07:12 INFO util.Utils: Successfully started service 'sparkYarnAM' on port 48133.
15/09/25 19:07:12 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
15/09/25 19:07:12 ERROR yarn.ApplicationMaster: Failed to connect to driver at 127.0.0.1:35706, retrying ...
15/09/25 19:07:12 ERROR yarn.ApplicationMaster: Failed to connect to driver at 127.0.0.1:35706, retrying ...
15/09/25 19:07:12 ERROR yarn.ApplicationMaster: Failed to connect to driver at 127.0.0.1:35706, retrying ...
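
My guess is that with SPARK_LOCAL_IP=localhost the driver advertises 127.0.0.1, so the ApplicationMaster on the cluster tries to connect back to its own loopback address and fails. A quick check I can run on the gateway (plain Linux commands, nothing Spark-specific) to see what the hostname resolves to:

hostname -f                  # fully qualified hostname of the gateway
getent hosts $(hostname)     # does it map to 127.0.0.1 in /etc/hosts?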

I would sincerely appreciate your kind help!

Zhiliang
