I was able to solve this problem by hard-coding JAVA_HOME inside the
org.apache.spark.deploy.yarn.Client class:

    val commands = prefixEnv ++ Seq(
    -   YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + "/bin/java", "-server")
    +   "/usr/java/jdk1.7.0_51/bin/java", "-server")

Somehow {{JAVA_HOME}} was not getting resolved on the node running the
YARN container. This change fixed that failure, but now I am getting a
new error:

Container: container_1430123808466_36297_02_000001
===============================================================================================
LogType: stderr
LogLength: 87
Log Contents:
Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

LogType: stdout
LogLength: 0
Log Contents:

It looks like the classpath variables are now not being resolved on the
YARN node either. I have MapReduce jobs running in the same cluster
without any problem. Any pointers on why this could happen?


Thanks

Sourabh
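
P.S. One more thing that may be worth ruling out, since SPARK-6681 points
in the same direction: a mismatch between the Hadoop version the cluster
runs and the one the Spark build targets. As far as I understand, the
{{VAR}} form is only rewritten by newer NodeManagers, so a -hadoop2.4
prebuilt Spark on an older cluster would fail exactly like this. A quick
check (the path matches the prebuilt layout mentioned earlier in the
thread):

    # Hadoop version the cluster is actually running
    yarn version

    # Hadoop version the Spark assembly was built against (encoded in the
    # jar name for prebuilt packages)
    ls /var/spark/spark-1.3.0-bin-hadoop2.4/lib/spark-assembly-*.jar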


On Fri, Apr 24, 2015 at 3:52 PM, sourabh chaki <chaki.sour...@gmail.com>
wrote:

> Yes Akhil. This is the same issue. I have updated my comment in that
> ticket.
>
> Thanks
> Sourabh
>
> On Fri, Apr 24, 2015 at 12:02 PM, Akhil Das <ak...@sigmoidanalytics.com>
> wrote:
>
>> Isn't this related to
>> https://issues.apache.org/jira/browse/SPARK-6681 ?
>>
>> Thanks
>> Best Regards
>>
>> On Fri, Apr 24, 2015 at 11:40 AM, sourabh chaki <chaki.sour...@gmail.com>
>> wrote:
>>
>>> I am also facing the same problem with Spark 1.3.0, in both yarn-client
>>> and yarn-cluster mode. Launching the YARN container fails, and this is
>>> the error in stderr:
>>>
>>> Container: container_1429709079342_65869_01_000001
>>>
>>> ===============================================================================================
>>> LogType: stderr
>>> LogLength: 61
>>> Log Contents:
>>> /bin/bash: {{JAVA_HOME}}/bin/java: No such file or directory
>>>
>>> LogType: stdout
>>> LogLength: 0
>>> Log Contents:
>>>
>>> I have added JAVA_HOME in hadoop-env.sh as well as spark-env.sh:
>>> grep JAVA_HOME /etc/hadoop/conf.cloudera.yarn/hadoop-env.sh
>>> export JAVA_HOME=/usr/java/default
>>> export PATH=$PATH:$JAVA_HOME/bin/java
>>>
>>> grep JAVA_HOME /var/spark/spark-1.3.0-bin-hadoop2.4/conf/spark-env.sh
>>> export JAVA_HOME="/usr/java/default"
>>>
>>> I could see another thread about the same problem, but I don't see any
>>> solution there:
>>>
>>> http://stackoverflow.com/questions/29170280/java-home-error-with-upgrade-to-spark-1-3-0
>>> Any pointers would be helpful.
>>>
>>> Thanks
>>> Sourabh
>>>
>>>
>>> On Thu, Apr 2, 2015 at 1:23 PM, 董帅阳 <917361...@qq.com> wrote:
>>>
>>>> spark 1.3.0
>>>>
>>>>
>>>> spark@pc-zjqdyyn1:~> tail /etc/profile
>>>> export JAVA_HOME=/usr/jdk64/jdk1.7.0_45
>>>> export PATH=$PATH:$JAVA_HOME/bin
>>>>
>>>> #
>>>> # End of /etc/profile
>>>> #
>>>>
>>>>
>>>> But the error log shows:
>>>>
>>>> Container: container_1427449644855_0092_02_000001 on pc-zjqdyy04_45454
>>>> ========================================================================
>>>> LogType: stderr
>>>> LogLength: 61
>>>> Log Contents:
>>>> /bin/bash: {{JAVA_HOME}}/bin/java: No such file or directory
>>>>
>>>> LogType: stdout
>>>> LogLength: 0
>>>> Log Contents:
>>>>
>>>
>>>
>>
>
