Try setting HADOOP_CONF_DIR to point at the corresponding YARN conf directory in each interpreter setting.
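
As a rough sketch (the interpreter names and paths below are made up for illustration), you could clone the Spark interpreter once per cluster on the Interpreter page and give each copy its own HADOOP_CONF_DIR; Zeppelin treats upper-case property names as environment variables for the interpreter process (check the docs for your version):

    Interpreter "spark_clusterA":
      master           yarn-client
      HADOOP_CONF_DIR  /etc/hadoop/conf.clusterA   (holds cluster A's core-site.xml, hdfs-site.xml, yarn-site.xml)

    Interpreter "spark_clusterB":
      master           yarn-client
      HADOOP_CONF_DIR  /etc/hadoop/conf.clusterB   (same files, but for cluster B)

Then bind the matching interpreter to the note and start paragraphs with e.g. %spark_clusterA, so the job is submitted to that cluster's ResourceManager.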

Serega Sheypak <serega.shey...@gmail.com> wrote on Fri, Jun 30, 2017 at 10:11 PM:

> Hi, I have several different Hadoop clusters, each with its own YARN.
> Is it possible to configure a single Zeppelin instance to work with
> different clusters?
> I want to run Spark on cluster A if the data is there. Right now my Zeppelin
> runs on a single cluster and pulls data from remote clusters, which is
> inefficient. Zeppelin can easily access any HDFS cluster, but what about
> YARN?
>
> What are the correct approaches to solve the problem?
>