Hi,

Add HADOOP_HOME=/path/to/hadoop/folder to /etc/default/mesos-slave on all
Mesos agents and restart the mesos-slave service.
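
Something like this, as a minimal sketch (assuming a systemd-managed
agent; /path/to/hadoop/folder is a placeholder for your actual Hadoop
install):

    # On every Mesos agent, point the agent at the Hadoop install
    echo 'HADOOP_HOME=/path/to/hadoop/folder' | sudo tee -a /etc/default/mesos-slave
    sudo systemctl restart mesos-slave

    # The fetcher builds its HDFS client by shelling out to 'hadoop version'
    # (that is the check failing in your log), so confirm it resolves:
    /path/to/hadoop/folder/bin/hadoop version

On agents without systemd, sudo service mesos-slave restart does the same.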

Regards,
Meethu Mathew


On Thu, Nov 10, 2016 at 4:57 PM, Yu Wei <yu20...@hotmail.com> wrote:

> Hi Guys,
>
> I failed to launch Spark jobs on Mesos. Actually, I submitted the job to
> the cluster successfully, but the job failed to run.
>
> I1110 18:25:11.095507   301 fetcher.cpp:498] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/1f8e621b-3cbf-4b86-a1c1-9e2cf77265ee-S7\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/192.168.111.74:9090\/bigdata\/package\/spark-examples_2.11-2.0.1.jar"}}],"sandbox_directory":"\/var\/lib\/mesos\/agent\/slaves\/1f8e621b-3cbf-4b86-a1c1-9e2cf77265ee-S7\/frameworks\/1f8e621b-3cbf-4b86-a1c1-9e2cf77265ee-0002\/executors\/driver-20161110182510-0001\/runs\/b561328e-9110-4583-b740-98f9653e7fc2","user":"root"}
> I1110 18:25:11.099799   301 fetcher.cpp:409] Fetching URI 'hdfs://192.168.111.74:9090/bigdata/package/spark-examples_2.11-2.0.1.jar'
> I1110 18:25:11.099820   301 fetcher.cpp:250] Fetching directly into the
> sandbox directory
> I1110 18:25:11.099862   301 fetcher.cpp:187] Fetching URI 'hdfs://192.168.111.74:9090/bigdata/package/spark-examples_2.11-2.0.1.jar'
> E1110 18:25:11.101842   301 shell.hpp:106] Command 'hadoop version 2>&1'
> failed; this is the output:
> sh: hadoop: command not found
> Failed to fetch 'hdfs://192.168.111.74:9090/bigdata/package/spark-examples_2.11-2.0.1.jar':
> Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the
> command was either not found or exited with a non-zero exit status: 127
> Failed to synchronize with agent (it's probably exited)
>
> Actually, I installed Hadoop on each agent node.
>
>
> Any advice?
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
