Hi David,

Since you are working with v0.6.0 of Zeppelin, which is still in beta, it might 
take some time for all the required jars to be included in the master setup. I 
don't know which location in HDFS you could point to. Sorry about that; I'm a 
noob like you. :)

Why don't you try v0.5.5 of Zeppelin?

Thanks,
Snehit
________________________________
From: David Klim [davidkl...@hotmail.com]
Sent: 04 January 2016 20:43:05
To: users@zeppelin.incubator.apache.org
Subject: Providing jars in HDFS

Hello,

I have been running Zeppelin in yarn-client mode, and so far I have been 
copying the required jars to the folder specified by spark.home 
(/opt/zeppelin/interpreter/spark/) on each cluster node. Is it possible to 
specify an HDFS location and load the jars from there instead? How can I 
configure that?
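
One approach that might be worth trying (a sketch, not a confirmed Zeppelin 
feature): Zeppelin reads conf/zeppelin-env.sh at startup, and Spark's --jars 
option accepts hdfs:// URIs, so the jars could be pulled from HDFS at submit 
time instead of being copied to every node. The HDFS paths and jar names below 
are hypothetical placeholders:

```shell
# conf/zeppelin-env.sh -- sketch, assuming your Zeppelin build honors
# SPARK_SUBMIT_OPTIONS and an external SPARK_HOME.
export SPARK_HOME=/opt/spark

# spark-submit accepts hdfs:// URIs in --jars; YARN fetches them from HDFS,
# so they no longer need to live on each node's local disk.
# /libs/mylib.jar and /libs/other.jar are hypothetical paths.
export SPARK_SUBMIT_OPTIONS="--jars hdfs:///libs/mylib.jar,hdfs:///libs/other.jar"
```

After editing the file, restart Zeppelin so the new environment is picked up.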

Thanks!

