Thanks Jeff.

No, I was not suggesting any change.

Just needed to understand some unusual behaviors.

Thanks for your help.
Sanjay

On Mon, May 28, 2018 at 8:39 AM, Jeff Zhang <[email protected]> wrote:

> %spark.dep is only for the spark interpreter. Although we could introduce
> the same thing for the jdbc interpreter, I don't see much benefit compared
> to setting the driver on the interpreter setting page. The jdbc interpreter
> is different from the spark interpreter: the driver jar for jdbc is not ad
> hoc the way it is for spark. If a user wants to use the jdbc interpreter to
> connect to different databases, he has to make a custom setting in the
> interpreter settings for each specific database. And it is recommended to
> create a different interpreter for each database, per this doc:
> https://zeppelin.apache.org/docs/0.8.0-SNAPSHOT/interpreter/jdbc.html#create-a-new-jdbc-interpreter
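>
> As a hedged sketch (the property names follow the Zeppelin JDBC docs; the
> MySQL values are only illustrative), a per-database jdbc interpreter would
> carry settings like:
>
> ```
> default.driver    com.mysql.jdbc.Driver
> default.url       jdbc:mysql://localhost:3306/
> default.user      mysql_user
> ```
>
> with the driver jar added as an artifact (e.g.
> mysql:mysql-connector-java:5.1.38) in the Dependencies section at the
> bottom of the same interpreter setting page.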
>
>
>
> Sanjay Dasgupta <[email protected]>于2018年5月28日周一 上午10:59写道:
>
> > Hi Jeff,
> >
> > Apologies for raising a somewhat unrelated issue:
> >
> > Should the JDBC interpreter also be able to find a driver if the JAR file
> > containing the driver is loaded using the dep interpreter (instead of
> > defining it as an "artifact" at the bottom of the interpreter
> configuration
> > page)?
> >
> > Thanks,
> > Sanjay
> >
> > On Mon, May 28, 2018 at 7:54 AM, Jeff Zhang <[email protected]> wrote:
> >
> > > And this seems to be a bug in the DepInterpreter; I have created ticket
> > > https://issues.apache.org/jira/browse/ZEPPELIN-3506
> > >
> > >
> > >
> > > Jeff Zhang <[email protected]>于2018年5月28日周一 上午8:24写道:
> > >
> > > >
> > > > You can use %spark.conf to add custom jars to the spark interpreter.
> > > > %spark.conf is more powerful in that it can customize not only jars
> > > > but also other spark configurations.
> > > >
> > > > e.g.
> > > >
> > > > %spark.conf
> > > >
> > > > spark.jars /tmp/product.jar
> > > >
> > > >
> > > > See the Generic ConfInterpreter section of this article:
> > > > https://medium.com/@zjffdu/zeppelin-0-8-0-new-features-ea53e8810235
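> > > >
> > > > A minimal sketch of combining jars with other settings in a single
> > > > %spark.conf paragraph (the path and value here are placeholders; note
> > > > that %spark.conf must run before the spark interpreter is launched):
> > > >
> > > > ```
> > > > %spark.conf
> > > > spark.jars /tmp/product.jar
> > > > spark.executor.memory 2g
> > > > ```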
> > > >
> > > > Yohana Khoury <[email protected]>于2018年5月27日周日 下午10:22写道:
> > > >
> > > >> Hi,
> > > >>
> > > >> I am trying to run the following on Zeppelin 0.8.0-RC2, Spark 2.3:
> > > >>
> > > >> *%dep*
> > > >> *z.load("/tmp/product.jar")*
> > > >>
> > > >> and then
> > > >>
> > > >> *%spark*
> > > >> *import model.v1._*
> > > >>
> > > >> The result is:
> > > >>
> > > >> *<console>:25: error: not found: value model*
> > > >> *       import model.v1._*
> > > >> *              ^*
> > > >>
> > > >>
> > > >> It appears that Zeppelin does not load the jar file into the spark
> > > >> interpreter.
> > > >> Are you dropping this option from Zeppelin 0.8.0? Is it going to be
> > > >> fixed?
> > > >>
> > > >> It worked with Zeppelin 0.7.2.
> > > >>
> > > >>
> > > >> Thanks!
> > > >>
> > > >
> > >
> >
>
