I cannot get much info from these logs, but it certainly looks like a YARN
setup issue. Did you try local mode to check whether it works?

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master local[4] \
spark-examples-1.6.1-hadoop2.6.0.jar 10


Note - the jar here is a local one. If this runs fine, the problem is most
likely in your YARN setup rather than in Spark itself.
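
Also, regarding the "yarn logs" failure quoted below: the Hadoop log fetcher
does accept -applicationId, so the "no such option: -a" message suggests the
yarn command being picked up may not be the Hadoop one. A sketch of what
should work, assuming HADOOP_HOME points at your hadoop-2.6.4 install and log
aggregation (yarn.log-aggregation-enable in yarn-site.xml) is enabled:

$HADOOP_HOME/bin/yarn logs -applicationId application_1466568126079_0006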

On Wed, Jun 22, 2016 at 4:50 PM, 另一片天 <958943...@qq.com> wrote:

>  Application application_1466568126079_0006 failed 2 times due to AM
> Container for appattempt_1466568126079_0006_000002 exited with exitCode: 1
> For more detailed output, check application tracking page:
> http://master:8088/proxy/application_1466568126079_0006/Then, click on
> links to logs of each attempt.
> Diagnostics: Exception from container-launch.
> Container id: container_1466568126079_0006_02_000001
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
> at org.apache.hadoop.util.Shell.run(Shell.java:455)
> at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
> at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1
> Failing this attempt. Failing the application.
>
> but the command fails with an error:
>
> shihj@master:~/workspace/hadoop-2.6.4$ yarn logs -applicationId
> application_1466568126079_0006
> Usage: yarn [options]
>
> yarn: error: no such option: -a
>
>
>
> ------------------ Original Message ------------------
> *From:* "Yash Sharma" <yash...@gmail.com>
> *Sent:* Wednesday, June 22, 2016, 2:46 PM
> *To:* "另一片天" <958943...@qq.com>
> *Cc:* "Saisai Shao" <sai.sai.s...@gmail.com>; "user" <user@spark.apache.org>
>
> *Subject:* Re: Could not find or load main class
> org.apache.spark.deploy.yarn.ExecutorLauncher
>
> Are you able to run anything else on the cluster? I suspect it's YARN that
> is not able to run the class. If you could share the logs in a pastebin, we
> could confirm that.
>
> On Wed, Jun 22, 2016 at 4:43 PM, 另一片天 <958943...@qq.com> wrote:
>
>> I want to avoid uploading the resource files (especially the jar package)
>> because they are very big and the application waits too long. Is there a
>> better way? That is why I configured that parameter, but it did not have
>> the effect I wanted.
>>
>>
>> ------------------ Original Message ------------------
>> *From:* "Yash Sharma" <yash...@gmail.com>
>> *Sent:* Wednesday, June 22, 2016, 2:34 PM
>> *To:* "另一片天" <958943...@qq.com>
>> *Cc:* "Saisai Shao" <sai.sai.s...@gmail.com>; "user" <user@spark.apache.org>
>>
>> *Subject:* Re: Could not find or load main class
>> org.apache.spark.deploy.yarn.ExecutorLauncher
>>
>> Try with: --master yarn-cluster
>>
>> On Wed, Jun 22, 2016 at 4:30 PM, 另一片天 <958943...@qq.com> wrote:
>>
>>> ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master
>>> yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m
>>> --executor-cores 2
>>> hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar
>>> 10
>>> Warning: Skip remote jar
>>> hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar.
>>> java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>> at java.lang.Class.forName0(Native Method)
>>> at java.lang.Class.forName(Class.java:348)
>>> at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
>>> at
>>> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
>>> at
>>> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>>> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>
>>>
>>>
>>> ------------------ Original Message ------------------
>>> *From:* "Yash Sharma" <yash...@gmail.com>
>>> *Sent:* Wednesday, June 22, 2016, 2:28 PM
>>> *To:* "另一片天" <958943...@qq.com>
>>> *Cc:* "Saisai Shao" <sai.sai.s...@gmail.com>; "user" <user@spark.apache.org>
>>> *Subject:* Re: Could not find or load main class
>>> org.apache.spark.deploy.yarn.ExecutorLauncher
>>>
>>> Or better, try the master as yarn-cluster:
>>>
>>> ./bin/spark-submit \
>>> --class org.apache.spark.examples.SparkPi \
>>> --master yarn-cluster \
>>> --driver-memory 512m \
>>> --num-executors 2 \
>>> --executor-memory 512m \
>>> --executor-cores 2 \
>>> hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar
>>>
>>> On Wed, Jun 22, 2016 at 4:27 PM, 另一片天 <958943...@qq.com> wrote:
>>>
>>>> Is it able to run in local mode?
>>>>
>>>> What do you mean? Standalone mode?
>>>>
>>>>
>>>> ------------------ Original Message ------------------
>>>> *From:* "Yash Sharma" <yash...@gmail.com>
>>>> *Sent:* Wednesday, June 22, 2016, 2:18 PM
>>>> *To:* "Saisai Shao" <sai.sai.s...@gmail.com>
>>>> *Cc:* "另一片天" <958943...@qq.com>; "user" <user@spark.apache.org>
>>>> *Subject:* Re: Could not find or load main class
>>>> org.apache.spark.deploy.yarn.ExecutorLauncher
>>>>
>>>> Try providing the jar with the hdfs:// prefix. It's probably just because
>>>> it's not able to find the jar on all nodes:
>>>>
>>>> hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar
>>>>
>>>> Is it able to run in local mode?
>>>>
>>>> On Wed, Jun 22, 2016 at 4:14 PM, Saisai Shao <sai.sai.s...@gmail.com>
>>>> wrote:
>>>>
>>>>> spark.yarn.jar (default: none) - The location of the Spark jar file, in
>>>>> case overriding the default location is desired. By default, Spark on YARN
>>>>> will use a Spark jar installed locally, but the Spark jar can also be in a
>>>>> world-readable location on HDFS. This allows YARN to cache it on nodes so
>>>>> that it doesn't need to be distributed each time an application runs. To
>>>>> point to a jar on HDFS, for example, set this configuration to
>>>>> hdfs:///some/path.
>>>>>
>>>>> spark.yarn.jar is used for the Spark run-time system jar, i.e. the Spark
>>>>> assembly jar, not the application jar (the examples assembly jar). So in
>>>>> your case you uploaded the examples assembly jar to HDFS; the Spark system
>>>>> jars are not packed inside it, so ExecutorLauncher cannot be found.
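>>>>>
>>>>> As a minimal sketch of the fix, assuming the stock assembly jar from the
>>>>> 1.6.1 distribution (spark-assembly-1.6.1-hadoop2.6.0.jar, name assumed
>>>>> here) has been uploaded to the same HDFS directory, the entry in
>>>>> spark-defaults.conf would look like:
>>>>>
>>>>> spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-assembly-1.6.1-hadoop2.6.0.jar
>>>>>
>>>>> The examples jar then stays on the spark-submit command line as the
>>>>> application jar; only the Spark run-time is cached on HDFS, which is what
>>>>> avoids re-uploading it on every submission.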
>>>>>
>>>>> Thanks
>>>>> Saisai
>>>>>
>>>>> On Wed, Jun 22, 2016 at 2:10 PM, 另一片天 <958943...@qq.com> wrote:
>>>>>
>>>>>> shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$
>>>>>> ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master
>>>>>> yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m
>>>>>> --executor-cores 2
>>>>>> /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
>>>>>> Warning: Local jar
>>>>>> /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar does not exist,
>>>>>> skipping.
>>>>>> java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
>>>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>>>>> at java.lang.Class.forName0(Native Method)
>>>>>> at java.lang.Class.forName(Class.java:348)
>>>>>> at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
>>>>>> at
>>>>>> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
>>>>>> at
>>>>>> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>>>>>> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>>>>>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>>>>>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>>>> It errors out immediately.
>>>>>> ------------------ Original Message ------------------
>>>>>> *From:* "Yash Sharma" <yash...@gmail.com>
>>>>>> *Sent:* Wednesday, June 22, 2016, 2:04 PM
>>>>>> *To:* "另一片天" <958943...@qq.com>
>>>>>> *Cc:* "user" <user@spark.apache.org>
>>>>>> *Subject:* Re: Could not find or load main class
>>>>>> org.apache.spark.deploy.yarn.ExecutorLauncher
>>>>>>
>>>>>> How about supplying the jar directly in spark-submit:
>>>>>>
>>>>>> ./bin/spark-submit \
>>>>>>> --class org.apache.spark.examples.SparkPi \
>>>>>>> --master yarn-client \
>>>>>>> --driver-memory 512m \
>>>>>>> --num-executors 2 \
>>>>>>> --executor-memory 512m \
>>>>>>> --executor-cores 2 \
>>>>>>> /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar
>>>>>>
>>>>>>
>>>>>> On Wed, Jun 22, 2016 at 3:59 PM, 另一片天 <958943...@qq.com> wrote:
>>>>>>
>>>>>>> I configured this parameter in spark-defaults.conf:
>>>>>>> spark.yarn.jar
>>>>>>> hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar
>>>>>>>
>>>>>>> then ran ./bin/spark-submit --class org.apache.spark.examples.SparkPi
>>>>>>> --master yarn-client --driver-memory 512m --num-executors 2
>>>>>>> --executor-memory 512m --executor-cores 2    10:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>    - Error: Could not find or load main class
>>>>>>>    org.apache.spark.deploy.yarn.ExecutorLauncher
>>>>>>>
>>>>>>> But when I don't configure that parameter, there is no error. Why? Is
>>>>>>> that parameter only for avoiding the upload of the resource file (jar
>>>>>>> package)?
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
