Re: java.lang.NullPointerException

2017-04-22 Thread Jianfeng (Jeff) Zhang
The message clearly indicates that you are setting both spark.driver.extraJavaOptions and SPARK_JAVA_OPTS. Please check spark-defaults.conf and the interpreter settings. Best Regards, Jeff Zhang From: kant kodali <kanth...@gmail.com> Reply-To: "users@zeppelin.apache.org
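Jeff's advice points at two configuration locations that can end up carrying the same JVM options. A minimal sketch of what to check (paths follow the default Spark/Zeppelin layout; the option values are illustrative, not taken from the thread):

```
# conf/spark-defaults.conf -- keep driver JVM options here:
spark.driver.extraJavaOptions  -XX:+UseG1GC

# conf/zeppelin-env.sh (or spark-env.sh) -- remove the deprecated
# variable so the two settings do not conflict:
# export SPARK_JAVA_OPTS="-XX:+UseG1GC"
```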

Re: java.lang.NullPointerException

2017-04-22 Thread kant kodali
FYI: I am using Spark Standalone mode On Sat, Apr 22, 2017 at 10:57 PM, kant kodali wrote: > Hi All, > > I get the below stack trace when I am using Zeppelin. If I don't use > Zeppelin all my client Jobs are running fine. I am using spark 2.1.0 > > I am not sure why Zeppelin is unable to create

java.lang.NullPointerException

2017-04-22 Thread kant kodali
Hi All, I get the below stack trace when I am using Zeppelin. If I don't use Zeppelin, all my client jobs run fine. I am using Spark 2.1.0. I am not sure why Zeppelin is unable to create a SparkContext, but then it says it created a SparkSession, so it doesn't seem to make a lot of sense. any

Re: Data Source Authorization - JDBC Credential

2017-04-22 Thread moon soo Lee
Hmm, that's strange. I can see 0.7.1 works as expected when I remove default.user/default.password and then set the credential. Also, I can set default.user and default.password without using the credential menu and it works as expected as well. Have you tried restarting the interpreter after creating/updating the entity in

Re: Custom spark for zeppelin and interpreter-list

2017-04-22 Thread moon soo Lee
Hi, 'conf/interpreter-list' is just a catalogue file that 'bin/install-interpreter.sh' uses; the information is not used anywhere else. 'bin/install-interpreter.sh' uses 'conf/interpreter-list' to 1) print the list of interpreters that the Zeppelin community provides and 2) convert a short name to group:
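The two uses moon soo Lee describes correspond to the script's command-line flags. A sketch (flag names as found in the 0.7.x scripts; verify against your build with --help):

```
# 1) print the catalogue of community-provided interpreters
./bin/install-interpreter.sh --list

# 2) install by short name; the script resolves each name to its
#    group:artifact:version coordinates via conf/interpreter-list
./bin/install-interpreter.sh --name md,shell
```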

Custom spark for zeppelin and interpreter-list

2017-04-22 Thread Serega Sheypak
Hi, I have a few concerns I can't resolve right now. I definitely could go through the source code and find the solution, but I would like to understand the idea behind it. I'm building Zeppelin from sources using 0.8.0-SNAPSHOT. I build it with a custom Cloudera CDH Spark 2.0-something. I can't understan

Re: Preconfigure Spark interpreter

2017-04-22 Thread Paul Brenner
Whenever I’ve wanted to do this (preconfigure an interpreter using interpreter.json instead of adding each config one by one into the web UI), my process was: First, create an interpreter in the web UI and enter all my configs into that interpreter via the web UI. In the web UI again, create the next
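Paul's workflow ends with editing conf/interpreter.json directly. A minimal sketch of scripting that edit for the spark interpreter (field names follow the interpreter.json layout the web UI generates; verify them against a file written by your own Zeppelin version, and only edit while Zeppelin is stopped):

```python
def set_spark_dependencies(interpreter_json, artifacts, repositories=()):
    """Add artifact dependencies (and optional repositories) to the
    'spark' interpreter entry of a parsed interpreter.json dict."""
    settings = interpreter_json["interpreterSettings"]
    # Settings are keyed by an internal id; find the entry named "spark".
    spark = next(s for s in settings.values() if s["name"] == "spark")
    deps = spark.setdefault("dependencies", [])
    for gav in artifacts:
        deps.append({"groupArtifactVersion": gav, "local": False})
    repos = interpreter_json.setdefault("interpreterRepositories", [])
    for url in repositories:
        repos.append({"id": url, "url": url, "snapshot": False})
    return interpreter_json
```

Usage would be a plain round-trip: json.load conf/interpreter.json, call the function with your internal artifact coordinates and repository URLs, json.dump it back, then start Zeppelin.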

Re: Preconfigure Spark interpreter

2017-04-22 Thread Serega Sheypak
Aha, thanks. I'm building Zeppelin from source, so I can put my custom settings in directly? BTW, why doesn't the interpreter-list file contain the spark interpreter? 2017-04-22 13:33 GMT+02:00 Fabian Böhnlein : > Do it via the UI once and you'll see how interpreter.json of the Zeppelin > installation w

Re: struggling with LDAP

2017-04-22 Thread moon soo Lee
Hi Paul, Knapp, Please feel free to update the LDAP documentation if you would like to. That would save many people! The documentation (published on the Zeppelin website) is also part of the open source project, and you can update it by making a pull request. I think the related file is https://github.com/apache/zeppelin/blob/m

Re: Data Source Authorization - JDBC Credential

2017-04-22 Thread Paul Brenner
Using 0.7.1, and yes, I tried removing default.user/default.password after setting my credentials in the credentials section. It did not work. I found that actually setting the correct value for default.user did not work either. Same error. It seems like the Zeppelin jdbc interpreter is not passi
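For reference, the properties under discussion live in the jdbc interpreter settings. A sketch of the two configurations being compared (driver and URL values are placeholders, not from the thread):

```
# jdbc interpreter properties (Interpreter menu):
default.driver    org.postgresql.Driver
default.url       jdbc:postgresql://localhost:5432/mydb
# left blank when the Credentials menu is expected to supply them:
default.user
default.password
```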

Re: Preconfigure Spark interpreter

2017-04-22 Thread Fabian Böhnlein
Do it via the UI once and you'll see how interpreter.json of the Zeppelin installation will be changed. On Sat, Apr 22, 2017, 11:35 Serega Sheypak wrote: > Hi, I need to pre-configure the spark interpreter with my own artifacts and > internal repositories. How can I do it? >

Preconfigure Spark interpreter

2017-04-22 Thread Serega Sheypak
Hi, I need to pre-configure spark interpreter with my own artifacts and internal repositories. How can I do it?