Hi,

I have the prebuilt version of Spark 1.5 for Hadoop 2.6
(http://www.apache.org/dyn/closer.lua/spark/spark-1.5.1/spark-1.5.1-bin-hadoop2.6.tgz)
working with CDH 5.4.0 in local mode on a cluster with Kerberos. It works
well, including connecting to the Hive metastore. However, I am facing an
issue running Spark jobs in yarn-client/yarn-cluster mode: the executors
fail to start because Java cannot find ExecutorLauncher.

Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher
client token: N/A
diagnostics: Application application_1443531450011_13437 failed 2 times
due to AM Container for appattempt_1443531450011_13437_000002 exited with
exitCode: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:293)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Any ideas as to what might be going wrong? Also, how can I turn on more
detailed logging to see what command line is being run by YARN to launch
the containers?

Regards,
Deenar
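PS: to clarify the logging question, this sketch shows the kind of inspection I have in mind, i.e. looking at the launch script YARN generates for each container. The yarn.nodemanager.delete.debug-delay-sec property is a real YARN setting; the local-dirs path below is only my assumption about a typical default and may differ on other clusters.

```shell
# In yarn-site.xml on the NodeManagers, keep finished containers'
# working directories around for 10 minutes so the generated launch
# script survives long enough to be read:
#
#   <property>
#     <name>yarn.nodemanager.delete.debug-delay-sec</name>
#     <value>600</value>
#   </property>
#
# Then, on a NodeManager host, print the first launch script found under
# the local-dirs root (path is an assumption; check yarn.nodemanager.local-dirs):
find /yarn/nm/usercache -name launch_container.sh 2>/dev/null | head -1 | xargs cat
```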
