[jira] [Comment Edited] (SPARK-9485) Failed to connect to yarn / spark-submit --master yarn-client

2015-07-30 Thread Philip Adetiloye (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14648233#comment-14648233
 ] 

Philip Adetiloye edited comment on SPARK-9485 at 7/30/15 8:16 PM:
--

[~srowen] Thanks for the quick reply. It's actually consistent (every time), and
here are the details of my configuration.

conf/spark-env.sh basically has these settings:

#!/usr/bin/env bash
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
SPARK_YARN_QUEUE=dev
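As a side note, spark-env.sh is sourced by Spark's launch scripts, so values generally need to be exported to be visible to the processes it spawns. A sketch of the same file with exports added (the YARN_CONF_DIR line is my assumption: it should point at the directory holding yarn-site.xml, not the install root):

```shell
#!/usr/bin/env bash
# conf/spark-env.sh (sketch; same paths as above, with exports added)
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
# Assumption: YARN_CONF_DIR should match HADOOP_CONF_DIR,
# not the /usr/local/hadoop install root.
export YARN_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_YARN_QUEUE=dev
```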

and my conf/slaves contains:
10.0.0.204
10.0.0.205

~/.profile contains these settings:

`
export JAVA_HOME=$(readlink -f /usr/share/jdk1.8.0_45/bin/java | sed 's:bin/java::')
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_YARN_HOME=$HADOOP_INSTALL
export HADOOP_HOME=$HADOOP_INSTALL
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export YARN_CONF_DIR=$HADOOP_INSTALL

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"

export PATH=$PATH:/usr/local/spark/sbin
export PATH=$PATH:/usr/local/spark/bin
export LD_LIBRARY_PATH=/usr/local/hadoop/lib/native/:/usr/local/hadoop/lib/native/

export SCALA_HOME=/usr/local/scala-2.10.4
export PATH=$SCALA_HOME/bin:$PATH

`
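One quick way to vet this kind of setup is to verify that each directory a `*_CONF_DIR` variable points at actually contains the YARN config files. A minimal sketch (the `check_conf_dir` helper and the second path are illustrative, not part of Hadoop or Spark):

```shell
#!/usr/bin/env bash
# Sanity check: a valid YARN conf dir should contain yarn-site.xml.
# check_conf_dir is an illustrative helper, not a Hadoop/Spark command.
check_conf_dir() {
  dir="$1"
  if [ -f "$dir/yarn-site.xml" ]; then
    echo "ok: $dir"
  else
    echo "missing yarn-site.xml in $dir"
  fi
}

check_conf_dir "/usr/local/hadoop"            # install root: likely wrong for YARN_CONF_DIR
check_conf_dir "/usr/local/hadoop/etc/hadoop" # typical conf dir location
```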
Hope this helps.

Thanks,
- Phil


 Failed to connect to yarn / spark-submit --master yarn-client
 -

 Key: SPARK-9485
 URL: https://issues.apache.org/jira/browse/SPARK-9485
 Project: Spark
  Issue Type: Bug
  Components: Spark Shell, Spark Submit, YARN
Affects Versions: 1.4.1
 Environment: DEV
Reporter: Philip Adetiloye
Priority: Minor

 Spark-submit throws an exception when connecting to YARN, but it works in standalone mode.
 I'm using spark-1.4.1-bin-hadoop2.6 and also tried compiling from source, but got the same exception below.
 spark-submit --master yarn-client
 Here is a stack trace of the exception:
 15/07/29 17:32:15 INFO scheduler.DAGScheduler: Stopping DAGScheduler
 15/07/29 17:32:15 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
 Exception in thread "Yarn application state monitor" org.apache.spark.SparkException: Error asking standalone scheduler to shut down executors
 at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stopExecutors(CoarseGrainedSchedulerBackend.scala:261)
 at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stop(CoarseGrainedSchedulerBackend.scala:266)
 at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:158)
 at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:416)
 at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1411)
 at org.apache.spark.SparkContext.stop(SparkContext.scala:1644)
 at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$$anon$1.run(YarnClientSchedulerBackend.scala:139)
 Caused by: java.lang.InterruptedException
 at 
 
