What is the best way to configure the Spark interpreter?

Should I use zeppelin-env.sh with one very long "export
SPARK_SUBMIT_OPTIONS" line,

or configure interpreter.json before launching the Zeppelin daemon?

It seems interpreter.json is not read at Zeppelin launch; I need to manually
go to the settings web UI, edit the Spark interpreter, and restart it...
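
For context, the zeppelin-env.sh variant looks roughly like this. This is a sketch only; the Cassandra connector coordinates and version below are an example, not taken from this thread:

```shell
# conf/zeppelin-env.sh -- sketch; connector coordinates/version are placeholders
export SPARK_SUBMIT_OPTIONS="--packages com.datastax.spark:spark-cassandra-connector_2.10:1.5.0"
```

Everything in SPARK_SUBMIT_OPTIONS is passed through to spark-submit when the Spark interpreter process starts, so any flag spark-submit accepts (e.g. --jars, --packages, --conf) can go there.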

2016-02-23 15:15 GMT+01:00 vincent gromakowski <
vincent.gromakow...@gmail.com>:

> Hi,
> I am trying to automatically add jars to the Spark interpreter with several
> methods, but I cannot achieve it.
> I am currently generating an interpreter.json file from Ansible templates
> before launching Zeppelin in Marathon. So far I have tried:
> 1. spark.jars
> 2. spark.driver.extraClassPath
> 3. groupArtifactVersion (dependency loading)
>
> In all cases I get a ClassNotFoundException for the Spark Cassandra
> connector. The only way to make it work is to go to the interpreter settings,
> edit the Spark settings, then save and restart the interpreter, but that is not
> automatic at all, as we need to do it each time Zeppelin is started.
>
> Is the interpreter.json file automatically loaded when Zeppelin starts?
>
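
For reference, the kind of interpreter.json fragment being generated would look roughly like this. The jar path and version are placeholders, not values from the thread; in Spark's documented semantics, spark.jars is a comma-separated list of jars shipped to the driver and executors, while spark.driver.extraClassPath only prepends entries to the driver's classpath:

```json
{
  "properties": {
    "spark.jars": "/opt/jars/spark-cassandra-connector-assembly-1.5.0.jar",
    "spark.driver.extraClassPath": "/opt/jars/spark-cassandra-connector-assembly-1.5.0.jar"
  }
}
```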
