Re: Optimizing Spark interpreter startup

2023-05-03 Thread Jeff Zhang
Hi Vladimir, have you compared it with spark shell? I think it is similar to spark shell. On Wed, May 3, 2023 at 10:12 PM Vladimir Prus wrote: > Hi, > > I was profiling the startup time of Spark Interpreter in our environment, > and it looks like > a total of 5 seconds is spent

Optimizing Spark interpreter startup

2023-05-03 Thread Vladimir Prus
Hi, I was profiling the startup time of Spark Interpreter in our environment, and it looks like a total of 5 seconds is spent at this line in SparkScala212Interpreter.scala: sparkILoop.initializeSynchronous() That line eventually calls the nsc.Global constructor, which spends 5 seconds
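
For anyone repeating this measurement, a tiny generic timing wrapper is enough (a sketch, not Zeppelin code; the wrapped call is the one named above):

    def timed[T](label: String)(body: => T): T = {
      val start = System.nanoTime()
      try body
      finally println(s"$label took ${(System.nanoTime() - start) / 1000000} ms")
    }

    // e.g. timed("initializeSynchronous") { sparkILoop.initializeSynchronous() }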

Running spark interpreter in 0.10.0 docker image fails to delete files

2021-11-02 Thread brian
d and used; however, I run into an issue with the Spark Interpreter. In the 0.10.0 docker image, the DockerInterpreter tries to delete all interpreters that are not relevant for Spark, and there I get a permission-denied message. The Dockerfile uses USER 1000 and the Zeppelin files in the image are owned by

Re: Scala 2.12 version mismatch for Spark Interpreter

2021-10-28 Thread Mich Talebzadeh
Apologies, that should say the docker image should be on 3.1.1.

Re: Scala 2.12 version mismatch for Spark Interpreter

2021-10-28 Thread Mich Talebzadeh
You should go for Spark 3.1.1 for k8s. That is the tried and tested one for Kubernetes in the Spark 3 series, meaning the docker image should be on 3.1.1, and your client, which I think is used to run spark-submit on k8s, should also be on 3.1.1. HTH

Re: Scala 2.12 version mismatch for Spark Interpreter

2021-10-28 Thread Jeff Zhang
Hi Fabrizio, Spark 3.2.0 was supported recently in this PR: https://github.com/apache/zeppelin/pull/4257 The problem you mentioned is solved. Fabrizio Fab wrote on Thu, Oct 28, 2021, 7:43 PM: > I am aware that Spark 3.2.0 is not officially released, but I am trying to > put it to work. > > The first thing tha

Scala 2.12 version mismatch for Spark Interpreter

2021-10-28 Thread Fabrizio Fab
I am aware that Spark 3.2.0 is not officially released, but I am trying to put it to work. The first thing that I noticed is the following: the SparkInterpreter is compiled for Scala 2.12.7, while Spark 3.2 is compiled for Scala 2.12.15. Unfortunately there are some breaking changes between the two ver

Re: CVE-2019-10095: Apache Zeppelin: bash command injection in spark interpreter

2021-09-28 Thread Michiel Haisma
E List from the CNA. nvd.nist.gov Many thanks, Michiel On 2021/09/02 15:56:50, Jeff Zhang wrote: > Description:> > > bash command injection vulnerability in Apache Zeppelin allows an attacker to > inject system commands into Spark interpreter settings. This issue affects >

CVE-2019-10095: Apache Zeppelin: bash command injection in spark interpreter

2021-09-02 Thread Jeff Zhang
Description: A bash command injection vulnerability in Apache Zeppelin allows an attacker to inject system commands into Spark interpreter settings. This issue affects Apache Zeppelin version 0.9.0 and prior versions. Credit: Apache Zeppelin would like to thank HERE Security

Re: Local spark interpreter with extra java options

2021-07-25 Thread Lior Chaga
> is >>>>> logged. >>>>> >>>>> This is how my cmd looks like (censored a bit): >>>>> >>>>> /usr/local/spark/bin/spark-submit >>>>> --class org.apache.zeppelin.interpreter.remote.RemoteInterpr

Re: Local spark interpreter with extra java options

2021-07-25 Thread Jeff Zhang
>>> process command that is created, is not really identical to the one that is >>>> logged. >>>> >>>> This is how my cmd looks like (censored a bit): >>>> >>>> /usr/local/spark/bin/spark-submit >>>> --class org.apache.zeppelin.inter

Re: Local spark interpreter with extra java options

2021-07-25 Thread Lior Chaga
-class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer >>> --driver-class-path >>> :/zeppelin/local-repo/spark/*:/zeppelin/interpreter/spark/*:::/zeppelin/inter >>> preter/zeppelin-interpreter-shaded-0.10.0-SNAPSHOT.jar:/zeppelin/interpreter/s

Re: Local spark interpreter with extra java options

2021-07-11 Thread Lior Chaga
/interpreter/spark/*:::/zeppelin/inter >> preter/zeppelin-interpreter-shaded-0.10.0-SNAPSHOT.jar:/zeppelin/interpreter/spark/spark-interpreter-0.10.0-SNAPSHOT.jar:/etc/hadoop/conf >> >> *--driver-java-options " -DSERVICENAME=zeppelin_docker >> -Dfile.

Re: Local spark interpreter with extra java options

2021-07-11 Thread Jeff Zhang
driver-class-path > :/zeppelin/local-repo/spark/*:/zeppelin/interpreter/spark/*:::/zeppelin/inter > preter/zeppelin-interpreter-shaded-0.10.0-SNAPSHOT.jar:/zeppelin/interpreter/spark/spark-interpreter-0.10.0-SNAPSHOT.jar:/etc/hadoop/conf > > *--driver-java-options " -DSERVICENAME=zepp

Re: Local spark interpreter with extra java options

2021-07-11 Thread Lior Chaga
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer --driver-class-path :/zeppelin/local-repo/spark/*:/zeppelin/interpreter/spark/*:::/zeppelin/inter preter/zeppelin-interpreter-shaded-0.10.0-SNAPSHOT.jar:/zeppelin/interpreter/spark/spark-interpreter-0.10.0-SNAPSHOT.jar:/etc/hadoop/conf *--driver-java-options " -DSERVICENAME=zeppelin_d

Re: Local spark interpreter with extra java options

2021-07-08 Thread Jeff Zhang
ER_RUN_COMMAND in interpreter.sh > > I will fix it in my build, but would like a confirmation that this is > indeed the issue (and I'm not missing anything), so I'd open a pull > request. > > On Thu, Jul 8, 2021 at 3:05 PM Lior Chaga wrote: > >> I'm trying

Re: Local spark interpreter with extra java options

2021-07-08 Thread Lior Chaga
is is indeed the issue (and I'm not missing anything), so I'd open a pull request. On Thu, Jul 8, 2021 at 3:05 PM Lior Chaga wrote: > I'm trying to run zeppelin using local spark interpreter. > Basically everything works, but if I try to set > `spar

Local spark interpreter with extra java options

2021-07-08 Thread Lior Chaga
I'm trying to run zeppelin using local spark interpreter. Basically everything works, but if I try to set `spark.driver.extraJavaOptions` or `spark.executor.extraJavaOptions` containing several arguments, I get an exception. For instance, for providing `-DmyParam=1 -DmyOtherParam=2`, I
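
For context, both settings take one string holding all the JVM flags; a hedged sketch of the intended end state, expressed through SparkConf (values taken from the message above), which the thread later traces to quoting in interpreter.sh rather than to the values themselves:

    import org.apache.spark.SparkConf

    // One property value carrying two -D flags; the space inside the value is
    // what the launcher script must preserve when building the spark-submit command.
    val conf = new SparkConf()
      .set("spark.driver.extraJavaOptions", "-DmyParam=1 -DmyOtherParam=2")
      .set("spark.executor.extraJavaOptions", "-DmyParam=1 -DmyOtherParam=2")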

Re: Running spark interpreter with zeppelin on k8s

2021-06-23 Thread Lior Chaga
Chaga 于2021年6月23日周三 下午4:35写道: > >> I'm trying to deploy zeppelin 0.10 on k8s, using following manual build: >> >> mvn clean package -DskipTests -Pspark-scala-2.12 -Pinclude-hadoop >> -Pspark-3.0 -Phadoop2 -Pbuild-distr -pl >> zeppelin-interpreter,zeppelin-zengi

Re: Running spark interpreter with zeppelin on k8s

2021-06-23 Thread Jeff Zhang
-pl > zeppelin-interpreter,zeppelin-zengine,spark/interpreter,spark/spark-dependencies,zeppelin-web,zeppelin-server,zeppelin-distribution,jdbc,zeppelin-plugins/notebookrepo/filesystem,zeppelin-plugins/launcher/k8s-standard > -am > > > Spark itself is configured to use mesos as resourc

Running spark interpreter with zeppelin on k8s

2021-06-23 Thread Lior Chaga
I'm trying to deploy zeppelin 0.10 on k8s, using the following manual build: mvn clean package -DskipTests -Pspark-scala-2.12 -Pinclude-hadoop -Pspark-3.0 -Phadoop2 -Pbuild-distr -pl zeppelin-interpreter,zeppelin-zengine,spark/interpreter,spark/spark-dependencies,zeppelin-web,zeppelin-s

Re: Custom init for Spark interpreter

2021-05-21 Thread Jeff Zhang
wrote: > >> Right, we have hooks for each paragraph execution, but no interpreter >> process level hook. Could you create a ticket for that? And welcome to >> contribute. >> >> Vladimir Prus wrote on Fri, May 21, 2021, 4:16 PM: >> >>> >>> Hi, >>

Re: Custom init for Spark interpreter

2021-05-21 Thread Vladimir Prus
ocess level hook. Could you create a ticket for that ? And welcome to > contribute. > > Vladimir Prus 于2021年5月21日周五 下午4:16写道: > >> >> Hi, >> >> is there a way, when using Spark interpreter, to always run additional >> Scala code after startup? E

Re: Custom init for Spark interpreter

2021-05-21 Thread Jeff Zhang
Right, we have hooks for each paragraph execution, but no interpreter process level hook. Could you create a ticket for that? And welcome to contribute. Vladimir Prus wrote on Fri, May 21, 2021, 4:16 PM: > > Hi, > > is there a way, when using Spark interpreter, to always run additional > S
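
The paragraph-level hooks referred to here are the experimental execution hooks on ZeppelinContext; a hedged sketch of how the import from the question could be attached to every paragraph run (event name per the 0.8 docs; assumes the hook API is available in your build):

    %spark
    // Run the import before each paragraph of this note (hook API is experimental):
    z.registerHook("pre_exec", "import com.joom.whatever._")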

Custom init for Spark interpreter

2021-05-21 Thread Vladimir Prus
Hi, is there a way, when using Spark interpreter, to always run additional Scala code after startup? E.g. I want to automatically execute import com.joom.whatever._ so that users don't have to do it all the time. I see that BaseSparkScalaInterpreter.spark2CreateContext imports a few

Re: Zeppelin 0.9 / Kubernetes / Spark interpreter

2021-05-03 Thread Jeff Zhang
It is fixed here: https://github.com/apache/zeppelin/pull/4105 Sylvain Gibier wrote on Sat, May 1, 2021, 2:37 PM: > Hi, > > Cf. ZEPPELIN-5337. > > Switching to isolated mode is not really an option - as it means one spark > interpreter per note, per user, which consumes a lot of resour

Re: Zeppelin 0.9 / Kubernetes / Spark interpreter

2021-04-30 Thread Sylvain Gibier
Hi, Cf. ZEPPELIN-5337. Switching to isolated mode is not really an option, as it means one spark interpreter per note, per user, which consumes a lot of resources, since there is no mechanism to clean up the k8s pods created afterwards. The scoped mode allows us to share the spark interpreter along with

Re: Zeppelin 0.9 / Kubernetes / Spark interpreter

2021-04-30 Thread moon soo Lee
wrote: > Any idea? > > Actually anyone using zeppelin 0.9+ on k8s, with spark interpreter scope > per note ? > > > On 2021/04/24 10:46:06, Sylvain Gibier wrote: > > Hi, > > > > we have an issue with our current deployment of zeppelin on k8s, and more > >

Re: Zeppelin 0.9 / Kubernetes / Spark interpreter

2021-04-27 Thread Sylvain Gibier
Any idea? Is anyone actually using zeppelin 0.9+ on k8s with the spark interpreter scoped per note? On 2021/04/24 10:46:06, Sylvain Gibier wrote: > Hi, > > we have an issue with our current deployment of zeppelin on k8s, and more > precisely with spark interpreter. > > For ref

Zeppelin 0.9 / Kubernetes / Spark interpreter

2021-04-24 Thread Sylvain Gibier
Hi, we have an issue with our current deployment of zeppelin on k8s, and more precisely with the spark interpreter. For reference, the spark context is: scala 2.12.10 / spark 2.4.7. We have a weird behaviour running the spark interpreter in per-note, scoped mode. To reproduce, currently we restart the

Re: Spark interpreter Repl injection

2021-03-09 Thread Carlos Diogo
Thanks, I created the issue. Regards, Carlos On Tue 9. Mar 2021 at 19:02, moon soo Lee wrote: > Pyspark interpreter have 'intp' variable exposed in its repl environment > (for internal use). And we can resolve reference to Spark interpreter from > the 'intp'

Re: Spark interpreter Repl injection

2021-03-09 Thread moon soo Lee
The pyspark interpreter has an 'intp' variable exposed in its repl environment (for internal use), and we can resolve a reference to the Spark interpreter from the 'intp' variable. However, the scala repl environment in the Spark Interpreter doesn't expose any variables that are useful for

Re: Spark interpreter Repl injection

2021-03-09 Thread Carlos Diogo
Looks good, Moon. Is there a specific reason why you needed the pyspark interpreter to access the spark interpreter? Could not the spark interpreter programmatically access itself (and the same for the pyspark interpreter)? Would the issue be to expose the z.interpret() method? Best regards Carlos

Re: Spark interpreter Repl injection

2021-03-09 Thread moon soo Lee
not work properly most of the times >>> Carlos >>> >>> On Sat 6. Mar 2021 at 11:36, Jeff Zhang wrote: >>> >>>> Why not copying scala code in zeppelin and run the notebook directly ? >>>> >>>> Carlos Diogo 于2021年3月6日周六 下午3:51写道

Re: Spark interpreter Repl injection

2021-03-08 Thread Carlos Diogo
. Mar 2021 at 11:36, Jeff Zhang wrote: >> >>> Why not copy the scala code into zeppelin and run the notebook directly? >>> >>> Carlos Diogo wrote on Sat, Mar 6, 2021, 3:51 PM: >>> >>>> Dear all >>>> I have been trying to find a way to inj

Re: Spark interpreter Repl injection

2021-03-08 Thread moon soo Lee
>>> I have been trying to find a way to inject scala code (from a String) >>> into the spark interpreter >>> In pyspark it is easy with the exec function >>> It should not be very difficult to access from the Note scala repl >>> interpreter but I could no

Re: Spark interpreter Repl injection

2021-03-06 Thread Carlos Diogo
n trying to find a way to inject scala code (from a String) >> into the spark interpreter >> In pyspark it is easy with the exec function >> It should not be very difficult to access from the Note scala repl >> interpreter but I could not find a way. I was even able to create

Re: Spark interpreter Repl injection

2021-03-06 Thread Jeff Zhang
Why not copy the scala code into zeppelin and run the notebook directly? Carlos Diogo wrote on Sat, Mar 6, 2021, 3:51 PM: > Dear all > I have been trying to find a way to inject scala code (from a String) > into the spark interpreter > In pyspark it is easy with the exec function > It sho

Spark interpreter Repl injection

2021-03-05 Thread Carlos Diogo
Dear all, I have been trying to find a way to inject scala code (from a String) into the spark interpreter. In pyspark it is easy with the exec function. It should not be very difficult to access from the note's scala repl interpreter, but I could not find a way. I was even able to create a new repl
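
For comparison with pyspark's exec, plain Scala can evaluate code from a String via the compiler's ToolBox (a generic sketch needing scala-compiler on the classpath; it runs outside Zeppelin's repl, so it does not see the notebook's session state):

    import scala.reflect.runtime.currentMirror
    import scala.tools.reflect.ToolBox

    // Parse and evaluate a code snippet held in a String.
    val toolbox = currentMirror.mkToolBox()
    toolbox.eval(toolbox.parse("""println("injected from a String")"""))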

spark.jars.packages not working in spark interpreter tutorial

2020-07-02 Thread David Boyd
All: Trying to run the Spark Interpreter tutorial note. The spark.conf paragraph which specifies spark.jars.packages runs cleanly, but the next paragraph, which tries to use the avro jar, fails with a class-not-found for org.apache.spark.sql.avro.AvroFileFormat.DefaultSource. Spark is set to
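
For reference, the tutorial's configuration paragraph has this shape (a sketch; the avro coordinates are illustrative and must match the Spark and Scala versions in use):

    %spark.conf
    spark.jars.packages  org.apache.spark:spark-avro_2.11:2.4.4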

Re: Error starting spark interpreter with 0.9.0

2020-06-30 Thread Jeff Zhang
Which spark version do you use? And could you check the spark interpreter log file? It is in ZEPPELIN_HOME/logs/zeppelin-interpreter-spark-*.log David Boyd wrote on Tue, Jun 30, 2020, 11:11 PM: > All: > > Just trying to get 0.9.0 to work and running into all sorts of issues. > Previous

Error starting spark interpreter with 0.9.0

2020-06-30 Thread David Boyd
PSHOT-shaded.jar /opt/zeppelin/zeppelin-current/interpreter/zeppelin-interpreter-shaded-0.9.0-SNAPSHOT.jar:/opt/zeppelin/zeppelin-current/interpreter/spark/spark-interpreter-0.9.0-SNAPSHOT.jar:/opt/hadoop/hadoop-current/etc/hadoop" --driver-java-options " -Dfile.encoding=UTF-8 -Dlog4j.conf

Re: Question: Adding Dependencies in with the Spark Interpreter with Kubernetes

2020-05-14 Thread Sebastian Albrecht
On Wed, May 13, 2020 at 21:59, Hetul Patel wrote: > > Are dependency downloads supported with zeppelin and spark over > kubernetes? Or am I required to add the dependency jars directly to my > spark docker image and add them to the classpath? > > Hi Hetu, I don't use docker, but to connect to

Question: Adding Dependencies in with the Spark Interpreter with Kubernetes

2020-05-13 Thread Hetul Patel
Hi all, I've been trying the 0.9.0-preview1 build on minikube with the spark interpreter. It's working, but I'm unable to work with any dependencies that I've added to the spark interpreter. (Note: I had to add `SPARK_SUBMIT_OPTIONS=--conf spark.jars.ivy=/tmp/.ivy` and `SPA

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-11-08 Thread Mark Bidewell
setting UI. > > >>>> > > >>>> Jeff Zhang 于2019年10月11日周五 上午9:54写道: > > >>>> > > >>>>> Like I said above, try to set them via spark.jars and > > >>>>> spark.jars.packages. > > >>>>> > > >>>

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-11-08 Thread Anton Kulaga
/ZEPPELIN-4374 that user can > >>>> still set dependencies in interpreter setting UI. > >>>> > >>>> Jeff Zhang 于2019年10月11日周五 上午9:54写道: > >>>> > >>>>> Like I said above, try to set them via spark.jars and > >>>

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-10-11 Thread Jeff Zhang
4写道: >>>> >>>>> Like I said above, try to set them via spark.jars and >>>>> spark.jars.packages. >>>>> >>>>> Don't set them here >>>>> >>>>> [image: image.png] >>>>> >>

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-10-11 Thread Mark Bidewell
; Don't set them here >>>> >>>> [image: image.png] >>>> >>>> >>>> Mark Bidewell wrote on Fri, Oct 11, 2019, 9:35 AM: >>>> >>>>> I was specifying them in the interpreter settings in the UI. >>>>> >

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-10-11 Thread Jeff Zhang
ars and >>> spark.jars.packages. >>> >>> Don't set them here >>> >>> [image: image.png] >>> >>> >>> Mark Bidewell 于2019年10月11日周五 上午9:35写道: >>> >>>> I was specifying them in the interpreter settings in the UI.

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-10-11 Thread Mark Bidewell
月11日周五 上午9:35写道: >> >>> I was specifying them in the interpreter settings in the UI. >>> >>> On Thu, Oct 10, 2019 at 9:30 PM Jeff Zhang wrote: >>> >>>> How do you specify your spark interpreter dependencies ? You need to >>>> specify it vi

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-10-10 Thread Jeff Zhang
in the interpreter settings in the UI. >> >> On Thu, Oct 10, 2019 at 9:30 PM Jeff Zhang wrote: >> >>> How do you specify your spark interpreter dependencies ? You need to >>> specify it via property spark.jars or spark.jars.packages for non-local >>>

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-10-10 Thread Jeff Zhang
> >> How do you specify your spark interpreter dependencies? You need to >> specify it via property spark.jars or spark.jars.packages for non-local >> mode. >> >> Mark Bidewell wrote on Fri, Oct 11, 2019, 3:45 AM: >> >>> I am running some initial tests of Zeppelin

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-10-10 Thread Mark Bidewell
I was specifying them in the interpreter settings in the UI. On Thu, Oct 10, 2019 at 9:30 PM Jeff Zhang wrote: > How do you specify your spark interpreter dependencies? You need to > specify it via property spark.jars or spark.jars.packages for non-local > mode. > > Mark Bidewe

Re: Zeppelin 0.8.2 New Spark Interpreter

2019-10-10 Thread Jeff Zhang
How do you specify your spark interpreter dependencies? You need to specify them via the property spark.jars or spark.jars.packages for non-local mode. Mark Bidewell wrote on Fri, Oct 11, 2019, 3:45 AM: > I am running some initial tests of Zeppelin 0.8.2 and I am seeing some > weird issues with depend
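
In interpreter-setting terms, that advice amounts to properties of this shape (values are placeholders, not from the thread):

    spark.jars           /path/to/extra-lib.jar
    spark.jars.packages  com.example:extra-lib:1.0.0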

Zeppelin 0.8.2 New Spark Interpreter

2019-10-10 Thread Mark Bidewell
I am running some initial tests of Zeppelin 0.8.2 and I am seeing some weird issues with dependencies. When I use the old interpreter, everything works as expected. When I use the new interpreter, classes in my interpreter dependencies cannot be resolved when connecting to a master that is not lo

Re: spark interpreter "master" parameter always resets to yarn-client after restart zeppelin

2019-08-19 Thread Jeff Zhang
ark integration and the > “master” parameter in the spark interpreter configuration always resets its > value from “yarn” to “yarn-client” after zeppelin service reboot. > > > > How can I stop that? > > > > Thank you

Re: python virtual environment on spark interpreter

2019-08-19 Thread Jeff Zhang
> I have a zeppelin installation connected to a Spark cluster. I set up > Zeppelin to submit jobs in yarn-cluster mode and also impersonation is > enabled. Now I would like to be able to use a python virtual environment > instead of the system one. > > Is there a way I could specify the python p

spark interpreter "master" parameter always resets to yarn-client after restart zeppelin

2019-08-19 Thread Manuel Sopena Ballesteros
Dear Zeppelin user community, I have a zeppelin installation with spark integration, and the "master" parameter in the spark interpreter configuration always resets its value from "yarn" to "yarn-client" after zeppelin service reboot. How can I stop t

python virtual environment on spark interpreter

2019-08-19 Thread Manuel Sopena Ballesteros
specify the python parameter in the spark interpreter settings so it can point to a specific folder under the user's home folder (e.g. /home/{user_home}/python_virt_env/python) instead of a system one? If not, how should I achieve what I want? Thank you, Manuel
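
One pattern often suggested for this (a hedged sketch, not a confirmed answer from the thread: ship an archived virtualenv to YARN and point the Python binary at it; archive name and paths are hypothetical):

    spark.yarn.dist.archives  hdfs:///user/me/python_virt_env.tar.gz#venv
    spark.pyspark.python      ./venv/bin/python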

Spark Interpreter failing to start: NumberFormat exception

2019-04-18 Thread Krentz
All - I am having an issue with a build I forked from master that is compiled as 0.9. We have another build running 0.8 that works just fine. The Spark interpreter is failing to start, and giving a NumberFormatException. It looks like when Zeppelin runs interpreter.sh, the

Re: Multi-line scripts in spark interpreter

2018-07-12 Thread Sanjay Dasgupta
> This behavior is coming from the new spark interpreter. Jeff opened > ZEPPELIN-3587 > to fix it. In the mean time you can use the old spark interpreter (set > zeppelin.spark.useNew > to false) to get around this. Hopefully you aren't dependent on the new > spark interprete

Re: Multi-line scripts in spark interpreter

2018-07-12 Thread Paul Brenner
This behavior is coming from the new spark interpreter. Jeff opened ZEPPELIN-3587 to fix it. In the meantime you can use the old spark interpreter (set zeppelin.spark.useNew to false) to get around this. Hopefully you aren't dependent on the new spark interpreter.
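
In interpreter-setting terms, the workaround named here is a single property (shown as a property line; in 0.8.0 the new interpreter is the default):

    zeppelin.spark.useNew  false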

Multi-line scripts in spark interpreter

2018-07-12 Thread Christopher Piggott
Hi, this used to work in 0.7.3:

    val a = new Something()
      .someMethod()
      .someMethod2()

but it doesn't in 0.8.0 ... it says the .someMethod(), etc. are an illegal start of expression. Some of these setups I have are expressed in a fluent style and would be unmanageable on a single long line. Is

Re: illegal start of definition with new spark interpreter

2018-07-05 Thread Jeff Zhang
This is due to different behavior of the new spark interpreter; I have created ZEPPELIN-3587 and will fix it asap. Paul Brenner wrote on Fri, Jul 6, 2018, 1:11 AM: > Hi all, > > When I try switching over to the new spark interpreter it seems there is a > fundamental difference in how code is interp

illegal start of definition with new spark interpreter

2018-07-05 Thread Paul Brenner
Hi all, When I try switching over to the new spark interpreter it seems there is a fundamental difference in how code is interpreted? Maybe that shouldn't be a surprise, but I'm wondering if other people have experienced it and if there is any workaround or hope for a change in

Re: Where Spark home is picked up in the new Spark interpreter

2018-06-06 Thread Jeff Zhang
It is picked up from the interpreter setting. You can define SPARK_HOME in spark's interpreter setting page. Anthony Corbacho wrote on Thu, Jun 7, 2018, 11:50 AM: > Hi, > > I am a bit confused where spark home is picked up in the new Spark > interpreter in the 0.8 branch? > > Regards, > Anthony >

Where Spark home is picked up in the new Spark interpreter

2018-06-06 Thread Anthony Corbacho
Hi, I am a bit confused about where Spark home is picked up in the new Spark interpreter in the 0.8 branch? Regards, Anthony

Re: Re: Spark Interpreter Tutorial in Apache Zeppelin

2018-05-31 Thread Paul Brenner
This is great! It inspired us to take another crack at using three of the newish features (new spark interpreter, yarn-cluster mode, and impersonation). If we run into problems and then find solutions we will share in case they are worth including in the documentation.

Re: Spark Interpreter Tutorial in Apache Zeppelin

2018-05-31 Thread moon soo Lee
Thanks Jeff for sharing. It's very helpful. On Wed, May 30, 2018 at 8:47 PM Jeff Zhang wrote: > Hi Folks, > > > I often see users asking how to use the spark interpreter on the mailing list, > especially how to configure the spark interpreter. So I wrote this article about > how to u

Spark Interpreter Tutorial in Apache Zeppelin

2018-05-30 Thread Jeff Zhang
Hi Folks, I often see users asking how to use the spark interpreter on the mailing list, especially how to configure the spark interpreter. So I wrote this article about how to use the spark interpreter in Apache Zeppelin (it is based on Zeppelin 0.8.0). It is not complete yet; I will continue to add more

Re: Spark Interpreter error: 'not found: type'

2018-03-19 Thread Jeff Zhang
issues with import. >> >> >> Can you give me an idea of how you are loading this jar datavec-api for >> zeppelin or spark-submit to access? >> >> >> Best >> >> Karan >> -- >> *From:* Marcus >> *Sent:* Saturday,

Re: Spark Interpreter error: 'not found: type'

2018-03-19 Thread Marcus
ubmit to access? > > > Best > > Karan > -- > *From:* Marcus > *Sent:* Saturday, March 10, 2018 10:43:25 AM > *To:* users@zeppelin.apache.org > *Subject:* Spark Interpreter error: 'not found: type' > > Hi, > > I am new

Re: Spark Interpreter error: 'not found: type'

2018-03-13 Thread Karan Sewani
ess? Best Karan From: Marcus Sent: Saturday, March 10, 2018 10:43:25 AM To: users@zeppelin.apache.org Subject: Spark Interpreter error: 'not found: type' Hi, I am new to Zeppelin and encountered a strange behavior. When copying my running scala-cod

Spark Interpreter error: 'not found: type'

2018-03-09 Thread Marcus
Hi, I am new to Zeppelin and encountered a strange behavior. When copying my running scala code to a notebook, I got errors from the spark interpreter saying it could not find some types. Strangely, the code worked when I used the FQCN instead of the simple name. But since I want the cre

Re: Cannot define UDAF in %spark interpreter

2018-02-27 Thread Vannson, Raphael
|some| secret | thing | here | 40| From: Paul Brenner Date: Tuesday, February 27, 2018 at 3:31 PM To: Raphael Vannson, "users@zeppelin.apache.org" Subject: Cannot define UDAF in %spark interpreter

Cannot define UDAF in %spark interpreter

2018-02-27 Thread Paul Brenner
sing the same code in spark-shell in > :paste mode works fine. > > Environment: > - Amazon EMR > - Apache Zeppelin Version 0.7.3 > - Spark version 2.2.1 > - Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_161) > > 1) Is there a way to configure the zeppelin %spark

Cannot define UDAF in %spark interpreter

2018-02-27 Thread Vannson, Raphael
the zeppelin %spark interpreter to do the equivalent of spark-shell's :paste mode? 2) If not, is there a workaround to be able to define UDAFs in Zeppelin's %spark interpreter? Thanks! Raphael ***PARAGRAPH INPUT:*** %spark import org.apache.spark.sql.function
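
For readers hitting the same limitation, the kind of definition under discussion is a Spark 2.x UserDefinedAggregateFunction; a minimal self-contained skeleton (a plain count, purely illustrative, not the thread's actual UDAF):

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
    import org.apache.spark.sql.types._

    object CountUdaf extends UserDefinedAggregateFunction {
      def inputSchema: StructType = StructType(StructField("x", LongType) :: Nil)
      def bufferSchema: StructType = StructType(StructField("count", LongType) :: Nil)
      def dataType: DataType = LongType
      def deterministic: Boolean = true
      def initialize(buffer: MutableAggregationBuffer): Unit = buffer(0) = 0L
      // Count one per input row.
      def update(buffer: MutableAggregationBuffer, input: Row): Unit =
        buffer(0) = buffer.getLong(0) + 1L
      def merge(b1: MutableAggregationBuffer, b2: Row): Unit =
        b1(0) = b1.getLong(0) + b2.getLong(0)
      def evaluate(buffer: Row): Any = buffer.getLong(0)
    }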

Jar dependencies are not reloaded when Spark interpreter is restarted?

2018-02-22 Thread Partridge, Lucas (GE Aviation)
file system path of the jar; it’s not even prefixed with file:///. From: Jhon Anderson Cardenas Diaz [mailto:jhonderson2...@gmail.com] Sent: 22 February 2018 12:18 To: users@zeppelin.apache.org Subject: EXT: Re: Jar dependencies are not reloaded when Spark interpreter is restarted? When you say

Re: Jar dependencies are not reloaded when Spark interpreter is restarted?

2018-02-22 Thread Jhon Anderson Cardenas Diaz
;Partridge, Lucas (GE Aviation)" < lucas.partri...@ge.com> wrote: > I’m using Zeppelin 0.7.3 against a local standalone Spark ‘cluster’. I’ve > added a Scala jar dependency to my Spark interpreter using Zeppelin’s UI. I > thought if I changed my Scala code and updated the jar (u

Jar dependencies are not reloaded when Spark interpreter is restarted?

2018-02-22 Thread Partridge, Lucas (GE Aviation)
I'm using Zeppelin 0.7.3 against a local standalone Spark 'cluster'. I've added a Scala jar dependency to my Spark interpreter using Zeppelin's UI. I thought if I changed my Scala code and updated the jar (using sbt outside of Zeppelin) then all I'd have to do is r

Re: Custom Spark Interpreter?

2018-01-25 Thread Jeff Zhang
ld get spark ui url >> dynamically. >> >> >> >> ankit jain 于2018年1月25日周四 下午3:03写道: >> >>> That method is just reading it from a config defined in interpreter >>> settings called "uiWebUrl" which makes it configurable but still static.

Re: Custom Spark Interpreter?

2018-01-25 Thread Nick Moeckel
I am beginning work on extending the SparkInterpreter class right now; I would be interested to hear more details about why this idea is not straightforward. Thanks, Nick

Re: Custom Spark Interpreter?

2018-01-25 Thread ankit jain
method is just reading it from a config defined in interpreter >> settings called "uiWebUrl" which makes it configurable but still static. >> >> On Wed, Jan 24, 2018 at 10:58 PM, Jeff Zhang wrote: >> >>> >>> IIRC, spark interpreter can get web ui url

Re: Custom Spark Interpreter?

2018-01-24 Thread Jeff Zhang
> On Wed, Jan 24, 2018 at 10:58 PM, Jeff Zhang wrote: > >> >> IIRC, spark interpreter can get web ui url at runtime instead of static >> url. >> >> >> https://github.com/apache/zeppelin/blob/master/spark/src/main/java/org/apache/zeppelin/spark/Spa

Re: Custom Spark Interpreter?

2018-01-24 Thread ankit jain
That method is just reading it from a config defined in interpreter settings called "uiWebUrl" which makes it configurable but still static. On Wed, Jan 24, 2018 at 10:58 PM, Jeff Zhang wrote: > > IIRC, spark interpreter can get web ui url at runtime instead of static

Re: Custom Spark Interpreter?

2018-01-24 Thread Jeff Zhang
IIRC, spark interpreter can get web ui url at runtime instead of static url. https://github.com/apache/zeppelin/blob/master/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java#L940 ankit jain 于2018年1月25日周四 下午2:55写道: > Issue with Spark UI when running on AWS EMR is it requi

Re: Custom Spark Interpreter?

2018-01-24 Thread ankit jain
in my head. I guess we could also modify Zeppelin restart script to kill those rogue processes and make sure 4040 is always available? Thanks Ankit On Wed, Jan 24, 2018 at 6:10 PM, Jeff Zhang wrote: > > If Spark interpreter didn't give you the correct spark UI, this should be > a bu

Re: Custom Spark Interpreter?

2018-01-24 Thread Jeff Zhang
If the Spark interpreter didn't give you the correct spark UI, this should be a bug; you can file a ticket to fix it. Although you can make a custom interpreter by extending the current spark interpreter, it is not trivial work. ankit jain wrote on Thu, Jan 25, 2018, 8:07 AM: > Hi fellow Zeppelin user

Custom Spark Interpreter?

2018-01-24 Thread ankit jain
Hi fellow Zeppelin users, Has anyone tried to write a custom Spark Interpreter, perhaps extending from the one that currently ships with zeppelin - spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java? We are coming across cases where we need the interpreter to do "more
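
At its simplest, the extension being asked about would look like this (a sketch; the Properties constructor is assumed from the upstream Interpreter base class and is not verified against a specific Zeppelin version):

    import java.util.Properties
    import org.apache.zeppelin.spark.SparkInterpreter

    // Hypothetical subclass: override methods here to make the
    // interpreter do "more" than the stock implementation.
    class CustomSparkInterpreter(properties: Properties)
        extends SparkInterpreter(properties)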

Re: How does user user jar conflict resolved in spark interpreter?

2017-11-15 Thread Jeff Zhang
about the dependencies users are using while running > notebooks on the spark interpreter. > > Imagine I have configured the spark interpreter. > > Two users write their spark notebooks. > The first user does > > z.load("com:best-it-company:0.1") > > > the second one u

How does user user jar conflict resolved in spark interpreter?

2017-11-15 Thread Serega Sheypak
Hi zeppelin users! I have a question about the dependencies users are using while running notebooks on the spark interpreter. Imagine I have configured the spark interpreter. Two users write their spark notebooks. The first user does z.load("com:best-it-company:0.1") the second one user a

Re: Configure spark interpreter setting from environment variables

2017-09-27 Thread benoitdr
That is working. Thanks a lot

Re: Configure spark interpreter setting from environment variables

2017-09-27 Thread Jeff Zhang
Unfortunately it is packaged in the spark interpreter jar, but you can get it from the source code. Benoit Drooghaag wrote on Wed, Sep 27, 2017, 5:11 PM: > Thanks for your quick feedback. > There is no "interpreter-setting.json" in zeppelin-0.7.3-bin-all.tgz. > Can you tell me more? I

Re: Configure spark interpreter setting from environment variables

2017-09-27 Thread Benoit Drooghaag
September 2017 at 10:56, Jeff Zhang wrote: > > Setting an interpreter setting is a one-time effort; it should not be inconvenient > for users. But if you are a zeppelin vendor and want to customize zeppelin, > you can edit interpreter-setting.json of spark interpreter and copy it into > $Z

Re: Configure spark interpreter setting from environment variables

2017-09-27 Thread Jeff Zhang
Setting an interpreter setting is a one-time effort; it should not be inconvenient for users. But if you are a zeppelin vendor and want to customize zeppelin, you can edit the interpreter-setting.json of the spark interpreter and copy it into $ZEPPELIN_HOME/interpreter/spark. benoit.droogh...@gmail.com wrote on 2017/9/27

Configure spark interpreter setting from environment variables

2017-09-27 Thread benoit.droogh...@gmail.com
Hi all, Is there a way to configure arbitrary spark interpreter settings via environment variables? For example, I'd like to set the "spark.ui.reverseProxy" setting to "true". For the moment, I can only do it manually via the Zeppelin UI, which is working as expected.
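
One way to pass such a setting in from the environment (a hedged sketch using the SPARK_SUBMIT_OPTIONS variable that Zeppelin forwards to spark-submit, set in zeppelin-env.sh; not the answer the thread settled on):

    export SPARK_SUBMIT_OPTIONS="--conf spark.ui.reverseProxy=true"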

Re: Configuring Zeppelin spark interpreter to work with different hadoop clusters

2017-06-30 Thread Jeff Zhang
fect that interpreter. > > And all capitalized property names would be taken as env variables. > > Serega Sheypak wrote on Sat, Jul 1, 2017, 3:20 AM: >> hi, thanks for your reply. How should I set this variable? >> I'm looking at Spark interpreter config UI. It doesn'
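
Combined with the advice elsewhere in this thread to create one spark interpreter per cluster, that behaviour suggests per-interpreter properties of this shape (interpreter name and path are hypothetical):

    # In the settings of a "spark_cluster1" interpreter:
    HADOOP_CONF_DIR  /etc/hadoop/conf-cluster1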

Re: Configuring Zeppelin spark interpreter to work with different hadoop clusters

2017-06-30 Thread Jeff Zhang
ble? > I'm looking at Spark interpreter config UI. It doesn't allow me to set env > variable. > > https://zeppelin.apache.org/docs/latest/interpreter/spark.html#1-export-spark_home > tells that HADOOP_CONF_DIR should be set once per whole Zeppelin instance. > > What do I mi

Re: Configuring Zeppelin spark interpreter to work with different hadoop clusters

2017-06-30 Thread Serega Sheypak
Hi, thanks for your reply. How should I set this variable? I'm looking at the Spark interpreter config UI. It doesn't allow me to set an env variable. https://zeppelin.apache.org/docs/latest/interpreter/spark.html#1-export-spark_home says that HADOOP_CONF_DIR should be set once per whol

Re: Configuring Zeppelin spark interpreter to work with different hadoop clusters

2017-06-30 Thread Jeff Zhang
Right, create three spark interpreters for your 3 yarn clusters. Serega Sheypak wrote on Fri, Jun 30, 2017, 10:33 PM: > Hi, thanks for your reply! > What do you mean by that? > I can have only one env variable HADOOP_CONF_DIR... > And how can a user pick which env to run? > > Or you mean I have to create three