Re: JAVA_HOME problem with upgrade to 1.3.0

2015-03-23 Thread Williams, Ken


 From: Williams, Ken ken.willi...@windlogics.com
 Date: Thursday, March 19, 2015 at 10:59 AM
 To: Spark list user@spark.apache.org
 Subject: JAVA_HOME problem with upgrade to 1.3.0

 […]
 Finally, I go and check the YARN app master’s web interface (since the job is 
 shown, I know it at least made it that far), and the
 only logs it shows are these:

 Log Type: stderr
 Log Length: 61
 /bin/bash: {{JAVA_HOME}}/bin/java: No such file or directory

 Log Type: stdout
 Log Length: 0

I’m still interested in a solution to this issue if anyone has comments.  I 
also posted to SO if that’s more convenient:


http://stackoverflow.com/questions/29170280/java-home-error-with-upgrade-to-spark-1-3-0

Thanks,

  -Ken






JAVA_HOME problem with upgrade to 1.3.0

2015-03-19 Thread Williams, Ken
I’m trying to upgrade a Spark project, written in Scala, from Spark 1.2.1 to 
1.3.0, so I changed my `build.sbt` like so:

   -libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.1" % "provided"
   +libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.0" % "provided"

then make an `assembly` jar, and submit it:

   HADOOP_CONF_DIR=/etc/hadoop/conf \
   spark-submit \
     --driver-class-path=/etc/hbase/conf \
     --conf spark.hadoop.validateOutputSpecs=false \
     --conf spark.yarn.jar=hdfs:/apps/local/spark-assembly-1.3.0-hadoop2.4.0.jar \
     --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
     --deploy-mode=cluster \
     --master=yarn \
     --class=TestObject \
     --num-executors=54 \
     target/scala-2.11/myapp-assembly-1.2.jar

The job fails to submit, with the following exception in the terminal:

15/03/19 10:30:07 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1420225286501_4699 failed 2 times due to AM Container for appattempt_1420225286501_4699_02 exited with exitCode: 127 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)

Finally, I go and check the YARN app master’s web interface (since the job is 
shown, I know it at least made it that far), and the only logs it shows are 
these:

Log Type: stderr
Log Length: 61
/bin/bash: {{JAVA_HOME}}/bin/java: No such file or directory

Log Type: stdout
Log Length: 0

I’m not sure how to interpret that – is '{{JAVA_HOME}}' a literal (including 
the brackets) that’s somehow making it into a script?  Is this coming from the 
worker nodes or the driver?  Anything I can do to experiment & troubleshoot?
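
One thing I can try (assuming log aggregation is enabled on our cluster) is 
pulling the full container logs for the failed attempt, to see which container 
is producing that line:

    yarn logs -applicationId application_1420225286501_4699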

  -Ken







Re: JAVA_HOME problem with upgrade to 1.3.0

2015-03-19 Thread Ted Yu
JAVA_HOME, an environment variable, should be defined on the node where
appattempt_1420225286501_4699_02 ran.
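
For example, it would typically be exported in hadoop-env.sh or yarn-env.sh on 
that node (the path below is only a placeholder):

    export JAVA_HOME=/path/to/jdk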

Cheers


Re: JAVA_HOME problem with upgrade to 1.3.0

2015-03-19 Thread Williams, Ken

 From: Ted Yu yuzhih...@gmail.com
 Date: Thursday, March 19, 2015 at 11:05 AM

 JAVA_HOME, an environment variable, should be defined on the node where 
 appattempt_1420225286501_4699_02 ran.

Has this behavior changed in 1.3.0 compared to 1.2.1, though?  Using 1.2.1 and 
making no other changes, the job completes fine.

I do have JAVA_HOME set in the hadoop config files on all the nodes of the 
cluster:

% grep JAVA_HOME /etc/hadoop/conf/*.sh
/etc/hadoop/conf/hadoop-env.sh:export JAVA_HOME=/usr/jdk64/jdk1.6.0_31
/etc/hadoop/conf/yarn-env.sh:export JAVA_HOME=/usr/jdk64/jdk1.6.0_31
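
As an experiment, I may also try forcing the value through on the submit 
command itself (assuming the spark.yarn.appMasterEnv.* and spark.executorEnv.* 
settings apply to this case):

    spark-submit \
      --conf spark.yarn.appMasterEnv.JAVA_HOME=/usr/jdk64/jdk1.6.0_31 \
      --conf spark.executorEnv.JAVA_HOME=/usr/jdk64/jdk1.6.0_31 \
      ... (other options as before)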

 -Ken



