Hello.
You should set HADOOP_CONF_DIR to /usr/local/lib/hadoop/etc/hadoop/ in
conf/zeppelin-env.sh so Zeppelin picks up your cluster's yarn-site.xml.
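For example (a minimal sketch, assuming the Hadoop install path you mentioned; adjust to your layout):

```shell
# conf/zeppelin-env.sh
# Point Zeppelin at the directory containing yarn-site.xml, core-site.xml, etc.
export HADOOP_CONF_DIR=/usr/local/lib/hadoop/etc/hadoop/
```

Restart Zeppelin after editing the file so the interpreter processes inherit the variable.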
Thanks.
On Wed, Nov 2, 2016 at 5:07 AM, Benoit Hanotte <benoit.h...@gmail.com> wrote:

> Hello,
>
> I'd like to use zeppelin on my local computer and use it to run spark
> executors on a distant yarn cluster since I can't easily install zeppelin
> on the cluster gateway.
>
> I installed the correct Hadoop version (2.6) and compiled Zeppelin (from
> the master branch) as follows:
>
> mvn clean package -DskipTests -Phadoop-2.6
> -Dhadoop.version=2.6.0-cdh5.5.0 -Pyarn -Pspark-2.0 -Pscala-2.11
>
> I also set HADOOP_HOME_DIR to /usr/local/lib/hadoop where my Hadoop is
> installed (I also tried with /usr/local/lib/hadoop/etc/hadoop/ where the
> conf files such as yarn-site.xml are). I set yarn.resourcemanager.hostname
> to the resource manager of the cluster (I copied the value from the config
> file on the cluster), but when I run a Spark command it still tries to
> connect to 0.0.0.0:8032, as one can see in the logs:
>
> INFO [2016-11-01 20:48:26,581] ({pool-2-thread-2}
> Client.java[handleConnectionFailure]:862) - Retrying connect to server:
> 0.0.0.0/0.0.0.0:8032. Already tried 9
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
> sleepTime=1000 MILLISECONDS)
>
> Am I missing something? Are there any additional parameters to
> set?
>
> Thanks!
>
> Benoit
>
