Re: Should zeppelin.pyspark.python be used on the worker nodes?

2017-03-20 Thread William Markito Oliveira
…preter.java#L152 > On Mon, Mar 20, 2017 at 12:21 PM William Markito Oliveira <william.mark...@gmail.com> wrote: >> Thanks for the quick response, Ruslan. >> But given that it's an environment variable, I can't quickly change that value and p…
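For context: an interpreter property can be edited in Zeppelin's Interpreter menu and applied by restarting the interpreter, whereas a value exported in conf/zeppelin-env.sh only takes effect when the interpreter process is relaunched. A minimal sketch of scripting that restart through Zeppelin's REST API follows; the host, port, and <settingId> are placeholders, not values from this thread:

    # List interpreter settings and note the Spark setting's id
    curl http://localhost:8080/api/interpreter/setting
    # Restart that interpreter so changed properties are picked up
    curl -X PUT http://localhost:8080/api/interpreter/setting/restart/<settingId>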

Re: Should zeppelin.pyspark.python be used on the worker nodes?

2017-03-20 Thread William Markito Oliveira
Ah! Thanks, Ruslan! I'm still using 0.7.0. Let me update to 0.8.0 and I'll come back and update this thread with the results. On Mon, Mar 20, 2017 at 3:10 PM, William Markito Oliveira <william.mark...@gmail.com> wrote: > Hi moon, thanks for the tip. Here, to summarize, my current…

Re: Should zeppelin.pyspark.python be used on the worker nodes?

2017-03-20 Thread William Markito Oliveira
…https://issues.apache.org/jira/browse/ZEPPELIN-1265 > Eventually, I think we can remove zeppelin.pyspark.python and use only PYSPARK_PYTHON instead to avoid confusion. > -- Ruslan Dautkhanov > On Mon, Mar 20, 2017 at 12:59 PM, William Markito Oliveira <mark...@apache.org>…
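The suggested alternative, as a minimal sketch (the conda path is illustrative): export the standard Spark variable in conf/zeppelin-env.sh, which Spark applies to both the driver and the workers.

    # conf/zeppelin-env.sh -- illustrative path
    export PYSPARK_PYTHON=/opt/conda/envs/analytics/bin/python

Since the variable is read when the Spark interpreter process starts, the interpreter has to be restarted after changing it.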

Should zeppelin.pyspark.python be used on the worker nodes?

2017-03-20 Thread William Markito Oliveira
I'm trying to use zeppelin.pyspark.python as the variable to set the Python that Spark worker nodes should use for my job, but it doesn't seem to be working. Am I missing something, or does this variable not do that? My goal is to point that variable at different conda environments.
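For reference, a minimal sketch of the setup being described, with an illustrative conda path; in the Spark interpreter settings this would look like:

    # Spark interpreter property (Interpreter menu) -- illustrative value
    zeppelin.pyspark.python = /opt/conda/envs/analytics/bin/python

As the replies above suggest, in 0.7.0 this property did not appear to reach the worker nodes, and PYSPARK_PYTHON was recommended instead.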

PySpark: Dependencies and scripts through Zeppelin

2017-02-23 Thread William Markito Oliveira
What's the right way to package scripts and distribute dependencies (conda environments) with Zeppelin? I'm currently using the zeppelin.pyspark.python variable from the Spark interpreter, set to my conda environment's Python, but when I submit the jobs and they start executing I still get "No…
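One common approach, sketched here under assumptions rather than taken from this thread, is to ship the conda environment itself to the executors as an archive and point PYSPARK_PYTHON at the unpacked copy. The environment name, archive name, the '#env' alias, and the use of the separate conda-pack tool are all illustrative:

    # Pack the environment into a relocatable archive (requires the conda-pack tool)
    conda pack -n analytics -o analytics_env.tar.gz
    # Distribute it to the executors (YARN example); '#env' is the directory it unpacks to
    #   spark.yarn.dist.archives = /path/to/analytics_env.tar.gz#env
    # Point the workers at the unpacked interpreter
    export PYSPARK_PYTHON=./env/bin/python

Executors then resolve imports from the shipped environment rather than from whatever Python happens to exist on the cluster nodes.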