When a property key in the interpreter configuration screen matches a certain
condition [1], it'll be treated as an environment variable.

So you can remove PYSPARK_PYTHON from conf/zeppelin-env.sh and set it in the
interpreter configuration instead.
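
For reference, the condition at [1] boils down to the key looking like an
environment variable name. Here is a paraphrased sketch of that check (an
approximation, not the exact source):

  class EnvKeyCheck {
    // A property key made up only of uppercase letters, digits, and
    // underscores is treated as an environment variable.
    static boolean isEnvString(String key) {
      if (key == null || key.isEmpty()) {
        return false;
      }
      return key.matches("^[A-Z_0-9]*$");
    }
  }

  // isEnvString("PYSPARK_PYTHON")          -> true  (exported as env var)
  // isEnvString("zeppelin.pyspark.python") -> false (plain property)

So adding a property named PYSPARK_PYTHON on the interpreter screen (with,
say, a conda environment's bin/python as the value) exports it to the
interpreter process.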

Thanks,
moon

[1]
https://github.com/apache/zeppelin/blob/master/zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/remote/RemoteInterpreter.java#L152


On Mon, Mar 20, 2017 at 12:21 PM William Markito Oliveira <
william.mark...@gmail.com> wrote:

> Thanks for the quick response, Ruslan.
>
> But given that it's an environment variable, I can't quickly change that
> value and point to a different Python environment without restarting the
> Zeppelin process, can I? I mean, is there a way to set the value for
> PYSPARK_PYTHON from the Interpreter configuration screen?
>
> Thanks,
>
>
> On Mon, Mar 20, 2017 at 2:15 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
> wrote:
>
> You can set the PYSPARK_PYTHON environment variable for that.
>
> Not sure about zeppelin.pyspark.python. I think it does not work.
> See comments in https://issues.apache.org/jira/browse/ZEPPELIN-1265
>
> Eventually, I think we can remove zeppelin.pyspark.python and use only
> PYSPARK_PYTHON instead to avoid confusion.
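>
> For example, in conf/zeppelin-env.sh (the path here is just a placeholder):
>
>   export PYSPARK_PYTHON=/opt/conda/envs/myenv/bin/python
>
> The catch is that an environment variable set this way is read at startup,
> so changing it requires restarting the Zeppelin process.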
>
>
> --
> Ruslan Dautkhanov
>
> On Mon, Mar 20, 2017 at 12:59 PM, William Markito Oliveira <
> mark...@apache.org> wrote:
>
> I'm trying to use zeppelin.pyspark.python as the variable to set the
> Python that Spark worker nodes should use for my job, but it doesn't seem
> to be working.
>
> Am I missing something, or does this variable not do that?
>
> My goal is to change that variable to point to different conda
> environments. These environments are available on all worker nodes since
> they're in a shared location, so ideally all nodes would then have access
> to the same libraries and dependencies.
>
> Thanks,
>
> ~/William
>
>
>
>
>
> --
> ~/William
>
