Re: switching between python2 and 3 for %pyspark

2018-10-26 Thread Jeff Zhang
IIUC, you changed the spark interpreter source code to add another
py3spark interpreter, is that right?

>>> Nevertheless, zeppelin_ipythonxxx/ipython_server.py
>>> seems to pick up environment variables from zeppelin-env.sh and not from
>>> the interpreter settings.
Zeppelin reads env variables from both zeppelin-env.sh and the interpreter
setting, and env variables in the interpreter setting should override those
defined in zeppelin-env.sh. If not, then it might be a bug.
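A quick way to see which source actually won is to print the environment from inside a paragraph. A minimal sketch (run it in a %pyspark or %py3spark paragraph; only standard Python names are used, nothing Zeppelin-specific is assumed):

```python
# Show which Python the interpreter process is running and what it
# believes PYSPARK_PYTHON is - if the interpreter-setting value does not
# show up here, the value from zeppelin-env.sh likely won.
import os
import sys

print("driver python: ", sys.executable)
print("PYSPARK_PYTHON:", os.environ.get("PYSPARK_PYTHON", "<not set>"))
```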

Ruslan Dautkhanov wrote on Sat, Oct 27, 2018, at 11:30 AM:

> Thanks Jeff - yep, that's exactly what we currently have in our
> interpreter settings, see [1].
> It doesn't work for some reason.
> We're running Zeppelin from a ~May'18 snapshot - has anything changed
> since then?
>
>
> Ruslan
>
>
>
>
> [1]
>
> LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
> PATH
> /usr/java/latest/bin:/opt/cloudera/parcels/Anaconda3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/rdautkha/bin
> PYSPARK_DRIVER_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
> PYSPARK_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
> PYTHONHOME  /opt/cloudera/parcels/Anaconda3
>
> spark.executorEnv.LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
> spark.executorEnv.PYSPARK_PYTHON
> /opt/cloudera/parcels/Anaconda3/bin/python
> spark.pyspark.driver.python  /opt/cloudera/parcels/Anaconda3/bin/python
> spark.pyspark.python  /opt/cloudera/parcels/Anaconda3/bin/python
> spark.yarn.appMasterEnv.PYSPARK_PYTHON
> /opt/cloudera/parcels/Anaconda3/bin/python
>
>
> --
> Ruslan Dautkhanov
>
>
> On Fri, Oct 26, 2018 at 9:10 PM Jeff Zhang  wrote:
>
>> Hi Ruslan,
>>
>> I believe you can just set PYSPARK_PYTHON in the spark interpreter
>> setting to switch between python2 and python3
>>
>>
>>
>> Ruslan Dautkhanov wrote on Sat, Oct 27, 2018, at 2:26 AM:
>>
>>> I'd like to give users the ability to switch between Python2 and
>>> Python3 for their PySpark jobs.
>>> Has anybody been able to set up something like this, so users can
>>> switch between python2 and python3 pyspark interpreters?
>>>
>>> For this experiment, I created a new %py3spark interpreter assigned to
>>> the spark interpreter group.
>>>
>>> I added the following options there for %py3spark: [1]
>>> /opt/cloudera/parcels/Anaconda3 is our Anaconda python3 home, which is
>>> available on all worker nodes and on the zeppelin server too.
>>>
>>> The default %pyspark interpreter is configured very similarly to [1],
>>> except all paths have "/opt/cloudera/parcels/Anaconda" instead of
>>> "/opt/cloudera/parcels/Anaconda3".
>>>
>>> Nevertheless, zeppelin_ipythonxxx/ipython_server.py
>>> seems to pick up environment variables from zeppelin-env.sh and not
>>> from the interpreter settings.
>>>
>>> The Zeppelin documentation says that all-uppercase variables are
>>> treated as environment variables, so I assume they should override
>>> what's in zeppelin-env.sh, no?
>>>
>>> It seems environment variables at the interpreter level are broken -
>>> notice the "pyspark" paragraph has "Anaconda3" and not "Anaconda" in
>>> PATH (highlighted).
>>>
>>> [image: image.png]
>>>
>>>
>>>
>>> [1]
>>>
>>> LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
>>> PATH
>>> /usr/java/latest/bin:/opt/cloudera/parcels/Anaconda3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/rdautkha/bin
>>> PYSPARK_DRIVER_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
>>> PYSPARK_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
>>> PYTHONHOME  /opt/cloudera/parcels/Anaconda3
>>>
>>> spark.executorEnv.LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
>>> spark.executorEnv.PYSPARK_PYTHON
>>> /opt/cloudera/parcels/Anaconda3/bin/python
>>> spark.pyspark.driver.python  /opt/cloudera/parcels/Anaconda3/bin/python
>>> spark.pyspark.python  /opt/cloudera/parcels/Anaconda3/bin/python
>>> spark.yarn.appMasterEnv.PYSPARK_PYTHON
>>> /opt/cloudera/parcels/Anaconda3/bin/python
>>>
>>> --
>>> Ruslan Dautkhanov
>>>
>>


Re: switching between python2 and 3 for %pyspark

2018-10-26 Thread Ruslan Dautkhanov
Thanks Jeff - yep, that's exactly what we currently have in our
interpreter settings, see [1].
It doesn't work for some reason.
We're running Zeppelin from a ~May'18 snapshot - has anything changed since
then?


Ruslan




[1]

LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
PATH
/usr/java/latest/bin:/opt/cloudera/parcels/Anaconda3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/rdautkha/bin
PYSPARK_DRIVER_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
PYSPARK_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
PYTHONHOME  /opt/cloudera/parcels/Anaconda3

spark.executorEnv.LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
spark.executorEnv.PYSPARK_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
spark.pyspark.driver.python  /opt/cloudera/parcels/Anaconda3/bin/python
spark.pyspark.python  /opt/cloudera/parcels/Anaconda3/bin/python
spark.yarn.appMasterEnv.PYSPARK_PYTHON
/opt/cloudera/parcels/Anaconda3/bin/python


-- 
Ruslan Dautkhanov


On Fri, Oct 26, 2018 at 9:10 PM Jeff Zhang  wrote:

> Hi Ruslan,
>
> I believe you can just set PYSPARK_PYTHON in the spark interpreter
> setting to switch between python2 and python3
>
>
>
> Ruslan Dautkhanov wrote on Sat, Oct 27, 2018, at 2:26 AM:
>
>> I'd like to give users the ability to switch between Python2 and
>> Python3 for their PySpark jobs.
>> Has anybody been able to set up something like this, so users can
>> switch between python2 and python3 pyspark interpreters?
>>
>> For this experiment, I created a new %py3spark interpreter assigned to
>> the spark interpreter group.
>>
>> I added the following options there for %py3spark: [1]
>> /opt/cloudera/parcels/Anaconda3 is our Anaconda python3 home, which is
>> available on all worker nodes and on the zeppelin server too.
>>
>> The default %pyspark interpreter is configured very similarly to [1],
>> except all paths have "/opt/cloudera/parcels/Anaconda" instead of
>> "/opt/cloudera/parcels/Anaconda3".
>>
>> Nevertheless, zeppelin_ipythonxxx/ipython_server.py
>> seems to pick up environment variables from zeppelin-env.sh and not
>> from the interpreter settings.
>>
>> The Zeppelin documentation says that all-uppercase variables are
>> treated as environment variables, so I assume they should override
>> what's in zeppelin-env.sh, no?
>>
>> It seems environment variables at the interpreter level are broken -
>> notice the "pyspark" paragraph has "Anaconda3" and not "Anaconda" in
>> PATH (highlighted).
>>
>> [image: image.png]
>>
>>
>>
>> [1]
>>
>> LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
>> PATH
>> /usr/java/latest/bin:/opt/cloudera/parcels/Anaconda3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/rdautkha/bin
>> PYSPARK_DRIVER_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
>> PYSPARK_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
>> PYTHONHOME  /opt/cloudera/parcels/Anaconda3
>>
>> spark.executorEnv.LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
>> spark.executorEnv.PYSPARK_PYTHON
>> /opt/cloudera/parcels/Anaconda3/bin/python
>> spark.pyspark.driver.python  /opt/cloudera/parcels/Anaconda3/bin/python
>> spark.pyspark.python  /opt/cloudera/parcels/Anaconda3/bin/python
>> spark.yarn.appMasterEnv.PYSPARK_PYTHON
>> /opt/cloudera/parcels/Anaconda3/bin/python
>>
>> --
>> Ruslan Dautkhanov
>>
>


Re: switching between python2 and 3 for %pyspark

2018-10-26 Thread Jeff Zhang
Hi Ruslan,

I believe you can just set PYSPARK_PYTHON in the spark interpreter setting
to switch between python2 and python3



Ruslan Dautkhanov wrote on Sat, Oct 27, 2018, at 2:26 AM:

> I'd like to give users the ability to switch between Python2 and
> Python3 for their PySpark jobs.
> Has anybody been able to set up something like this, so users can
> switch between python2 and python3 pyspark interpreters?
>
> For this experiment, I created a new %py3spark interpreter assigned to
> the spark interpreter group.
>
> I added the following options there for %py3spark: [1]
> /opt/cloudera/parcels/Anaconda3 is our Anaconda python3 home, which is
> available on all worker nodes and on the zeppelin server too.
>
> The default %pyspark interpreter is configured very similarly to [1],
> except all paths have "/opt/cloudera/parcels/Anaconda" instead of
> "/opt/cloudera/parcels/Anaconda3".
>
> Nevertheless, zeppelin_ipythonxxx/ipython_server.py
> seems to pick up environment variables from zeppelin-env.sh and not
> from the interpreter settings.
>
> The Zeppelin documentation says that all-uppercase variables are
> treated as environment variables, so I assume they should override
> what's in zeppelin-env.sh, no?
>
> It seems environment variables at the interpreter level are broken -
> notice the "pyspark" paragraph has "Anaconda3" and not "Anaconda" in
> PATH (highlighted).
>
> [image: image.png]
>
>
>
> [1]
>
> LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
> PATH
> /usr/java/latest/bin:/opt/cloudera/parcels/Anaconda3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/rdautkha/bin
> PYSPARK_DRIVER_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
> PYSPARK_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
> PYTHONHOME  /opt/cloudera/parcels/Anaconda3
>
> spark.executorEnv.LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
> spark.executorEnv.PYSPARK_PYTHON
> /opt/cloudera/parcels/Anaconda3/bin/python
> spark.pyspark.driver.python  /opt/cloudera/parcels/Anaconda3/bin/python
> spark.pyspark.python  /opt/cloudera/parcels/Anaconda3/bin/python
> spark.yarn.appMasterEnv.PYSPARK_PYTHON
> /opt/cloudera/parcels/Anaconda3/bin/python
>
> --
> Ruslan Dautkhanov
>


switching between python2 and 3 for %pyspark

2018-10-26 Thread Ruslan Dautkhanov
I'd like to give users the ability to switch between Python2 and Python3
for their PySpark jobs.
Has anybody been able to set up something like this, so users can switch
between python2 and python3 pyspark interpreters?

For this experiment, I created a new %py3spark interpreter assigned to the
spark interpreter group.

I added the following options there for %py3spark: [1]
/opt/cloudera/parcels/Anaconda3 is our Anaconda python3 home, which is
available on all worker nodes and on the zeppelin server too.

The default %pyspark interpreter is configured very similarly to [1],
except all paths have "/opt/cloudera/parcels/Anaconda" instead of
"/opt/cloudera/parcels/Anaconda3".

Nevertheless, zeppelin_ipythonxxx/ipython_server.py
seems to pick up environment variables from zeppelin-env.sh and not from
the interpreter settings.

The Zeppelin documentation says that all-uppercase variables are treated
as environment variables, so I assume they should override what's in
zeppelin-env.sh, no?

It seems environment variables at the interpreter level are broken -
notice the "pyspark" paragraph has "Anaconda3" and not "Anaconda" in PATH
(highlighted).

[image: image.png]



[1]

LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
PATH
/usr/java/latest/bin:/opt/cloudera/parcels/Anaconda3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/rdautkha/bin
PYSPARK_DRIVER_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
PYSPARK_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
PYTHONHOME  /opt/cloudera/parcels/Anaconda3

spark.executorEnv.LD_LIBRARY_PATH  /opt/cloudera/parcels/Anaconda3/lib
spark.executorEnv.PYSPARK_PYTHON  /opt/cloudera/parcels/Anaconda3/bin/python
spark.pyspark.driver.python  /opt/cloudera/parcels/Anaconda3/bin/python
spark.pyspark.python  /opt/cloudera/parcels/Anaconda3/bin/python
spark.yarn.appMasterEnv.PYSPARK_PYTHON
/opt/cloudera/parcels/Anaconda3/bin/python
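
A paragraph-level sanity check can catch a silent mismatch like this early. A sketch (the expected major version 3 is an assumption for the %py3spark case; flip it to 2 when checking the default %pyspark interpreter):

```python
# Fail fast if the paragraph is not running the Python major version the
# interpreter settings were supposed to select.
import sys

EXPECTED_MAJOR = 3  # assumption: checking the %py3spark interpreter

print("running python %d.%d from %s"
      % (sys.version_info[0], sys.version_info[1], sys.executable))
assert sys.version_info[0] == EXPECTED_MAJOR, (
    "unexpected python: %s" % sys.executable)
```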

-- 
Ruslan Dautkhanov


Re: How to make livy2.spark find jar

2018-10-26 Thread Ruslan Dautkhanov
Try adding ZEPPELIN_INTP_CLASSPATH_OVERRIDES, for example,

export
ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/conf:/var/lib/sqoop/ojdbc7.jar
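
In zeppelin-env.sh that could look like the sketch below - the hive conf dir is the example from above and the jar path is the one from the original question, so adjust both to your setup - followed by a restart of the interpreter:

```shell
# Expose extra classpath entries to the interpreter JVM so the JDBC
# driver jar is visible to it. Paths here are examples from this thread.
export ZEPPELIN_INTP_CLASSPATH_OVERRIDES="/etc/hive/conf:/my/path/to/ojdbc8.jar"

# Sanity-check each entry before restarting Zeppelin: warn on missing paths.
echo "$ZEPPELIN_INTP_CLASSPATH_OVERRIDES" | tr ':' '\n' | while read -r p; do
  [ -e "$p" ] || echo "warning: classpath entry not found: $p"
done
```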


-- 
Ruslan Dautkhanov


On Tue, Oct 23, 2018 at 9:40 PM Lian Jiang  wrote:

> Hi,
>
> I am trying to use the oracle jdbc driver to read an oracle database
> table. I have added the below property in custom zeppelin-env:
>
> SPARK_SUBMIT_OPTIONS="--jars /my/path/to/ojdbc8.jar"
>
> But
>
> val df = spark.read.format("jdbc")
>   .option("url", "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.9.44.99)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=myservice.mydns.com)))")
>   .option("user", "myuser")
>   .option("password", "mypassword")
>   .option("driver", "oracle.jdbc.driver.OracleDriver")
>   .option("dbtable", "myuser.mytable")
>   .load()
>
> throws:
>
> java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver at
> scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) at
> org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
> at
> org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:79)
> at
> org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:79)
> at scala.Option.foreach(Option.scala:257) at
> org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.(JDBCOptions.scala:79)
> at
> org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.(JDBCOptions.scala:35)
> at
> org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:34)
> at
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:340)
> at
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227) at
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
>
> How to make livy2.spark interpreter find ojdbc8.jar? Thanks.
>
>


Re: Available and custom roles

2018-10-26 Thread Spico Florin
Hello!
 Thank you for your responses. It is still not clear to me how to allow
different Zeppelin actions with the help of the roles.
In the example provided by liuxun there is no difference between the two
roles; both of them have *.
If I'm not using LDAP, just a basic shiro configuration, what could the
other options be?
Thanks.
 Florin

On Fri, Oct 26, 2018 at 3:49 PM Fawze Abujaber  wrote:

> What other choices can be used instead of * in the roles?
>
> I configured zeppelin to work with AD, and yes, I'm able to differentiate
> between the 2 groups in the ADrolegroupmap.
>
> For example, I have 2 groups: zeppelin_admins and zeppelin_members.
>
> And when keeping the url section as is, admins will have access to the
> mentioned urls and members will not - but how can I prevent other users
> from authenticating at all?
>
> For now all our AD users are able to authenticate and access the UI.
>
> On Fri, Oct 26, 2018 at 3:44 PM liuxun  wrote:
>
>> You can refer to the following configuration:
>>
>> [users]
>> # List of users with their password allowed to access Zeppelin.
>> # To use a different strategy (LDAP / Database / ...) check the shiro doc
>> at http://shiro.apache.org/configuration.html#Configuration-INISections
>> # To enable admin user, uncomment the following line and set an
>> appropriate password.
>> admin = password1, admin
>> user1 = password1, bi
>> user2 = password2, bi
>> user3 = password3, bi
>>
>>
>> [roles]
>> bi = *
>> admin = *
>>
>> [urls]
>> # This section is used for url-based security.
>> # You can secure interpreter, configuration and credential information by
>> urls. Comment or uncomment the below urls that you want to hide.
>> # anon means the access is anonymous.
>> # authc means Form based Auth Security
>> # To enforce security, comment the line below and uncomment the next one
>> /api/version = anon
>> /api/openid/* = anon
>> /api/interpreter/** = authc, roles[admin]
>> /api/configurations/** = authc, roles[admin]
>> /api/credential/** = authc, roles[admin]
>>
>>
>> On Oct 26, 2018, at 7:40 PM, Spico Florin wrote:
>>
>> Hello!
>>
>> I would like to know what the available roles in Zeppelin are (besides
>> admin, which has *).
>> How can I create/define my own roles based on the actions that a user
>> is allowed to perform?
>> In the shiro.ini examples the roles are too generic: role1 and role2
>> both have all actions allowed (*).
>>
>> Can you please describe the fine-grained actions that I can add in a
>> role?
>>
>> I look forward to your answers.
>> Best regards,
>>  Florin
>>
>>
>>
>
> --
> Take Care
> Fawze Abujaber
>


Re: Available and custom roles

2018-10-26 Thread Fawze Abujaber
What other choices can be used instead of * in the roles?

I configured zeppelin to work with AD, and yes, I'm able to differentiate
between the 2 groups in the ADrolegroupmap.

For example, I have 2 groups: zeppelin_admins and zeppelin_members.

And when keeping the url section as is, admins will have access to the
mentioned urls and members will not - but how can I prevent other users
from authenticating at all?

For now all our AD users are able to authenticate and access the UI.

On Fri, Oct 26, 2018 at 3:44 PM liuxun  wrote:

> You can refer to the following configuration:
>
> [users]
> # List of users with their password allowed to access Zeppelin.
> # To use a different strategy (LDAP / Database / ...) check the shiro doc
> at http://shiro.apache.org/configuration.html#Configuration-INISections
> # To enable admin user, uncomment the following line and set an
> appropriate password.
> admin = password1, admin
> user1 = password1, bi
> user2 = password2, bi
> user3 = password3, bi
>
>
> [roles]
> bi = *
> admin = *
>
> [urls]
> # This section is used for url-based security.
> # You can secure interpreter, configuration and credential information by
> urls. Comment or uncomment the below urls that you want to hide.
> # anon means the access is anonymous.
> # authc means Form based Auth Security
> # To enforce security, comment the line below and uncomment the next one
> /api/version = anon
> /api/openid/* = anon
> /api/interpreter/** = authc, roles[admin]
> /api/configurations/** = authc, roles[admin]
> /api/credential/** = authc, roles[admin]
>
>
> On Oct 26, 2018, at 7:40 PM, Spico Florin wrote:
>
> Hello!
>
> I would like to know what the available roles in Zeppelin are (besides
> admin, which has *).
> How can I create/define my own roles based on the actions that a user is
> allowed to perform?
> In the shiro.ini examples the roles are too generic: role1 and role2
> both have all actions allowed (*).
>
> Can you please describe the fine-grained actions that I can add in a
> role?
>
> I look forward to your answers.
> Best regards,
>  Florin
>
>
>

-- 
Take Care
Fawze Abujaber


Re: Available and custom roles

2018-10-26 Thread liuxun
You can refer to the following configuration:

[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at 
http://shiro.apache.org/configuration.html#Configuration-INISections 

# To enable admin user, uncomment the following line and set an appropriate 
password.
admin = password1, admin
user1 = password1, bi
user2 = password2, bi
user3 = password3, bi


[roles]
bi = *
admin = *

[urls]
# This section is used for url-based security.
# You can secure interpreter, configuration and credential information by urls. 
Comment or uncomment the below urls that you want to hide.
# anon means the access is anonymous.
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
/api/openid/* = anon
/api/interpreter/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
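
The [roles] lines in the example above both grant *, so the two roles only differ in the [urls] section. A sketch of a variant where they differ more visibly (plain shiro.ini; user and role names here are made up for illustration, and note that Shiro's built-in `roles` filter requires *all* listed roles, so any-of semantics needs a custom filter):

```ini
[users]
alice = password1, admin
bob = password2, analyst

[roles]
admin = *
analyst = *

[urls]
/api/version = anon
# only the admin role may reach interpreter/configuration/credential APIs
/api/interpreter/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
# everything else merely requires a successful login; tightening this to
# e.g. "authc, roles[analyst]" would reject authenticated users who lack
# that role
/** = authc
```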


> On Oct 26, 2018, at 7:40 PM, Spico Florin wrote:
> 
> Hello!
> 
> I would like to know what the available roles in Zeppelin are (besides
> admin, which has *).
> How can I create/define my own roles based on the actions that a user is
> allowed to perform?
> In the shiro.ini examples the roles are too generic: role1 and role2
> both have all actions allowed (*).
>
> Can you please describe the fine-grained actions that I can add in a
> role?
>
> I look forward to your answers.
> Best regards,
>  Florin 



Question About Zeppelin/Git...

2018-10-26 Thread Partridge, Lucas (GE Aviation)
Hi Marcelo,
Please see https://zeppelin.apache.org/community.html . That links to the issue 
tracker on JIRA: https://issues.apache.org/jira/projects/ZEPPELIN
Thanks, Lucas.

From: Marcelo Marques 
Sent: 26 October 2018 12:37
To: users@zeppelin.apache.org
Subject: EXT: Re: Question About Zeppelin/Git...

Hi Jeff,

Thanks for the update. You mean, in another place? Could you please give me
more details? Thank you for the information!!

Marcelo Marques


On Thu, Oct 25, 2018 at 11:48 PM Jeff Zhang <zjf...@gmail.com> wrote:

This seems to be a bug in GitNotebookRepo; please file a ticket for it.


Marcelo Marques <estudi...@gmail.com> wrote on Fri, Oct 26, 2018, at 1:41 AM:
Hey!!

First, sorry for the newbie subject. Starting with Zeppelin / GIT / ...

I'm starting the configuration to work with Bitbucket and it looks like it's
working - I can commit, so that looks fine. The question is... when I commit
from Zeppelin, the user that sent the commit is always the user that started
the zeppelin daemon (the application user), not the logged-in user. Can you
give me some tips on where to look for this? I mean, when we press commit,
the logged-in user should be the one who sends the commit, right?

Thanks in advance! :)

Marcelo Marques


Available and custom roles

2018-10-26 Thread Spico Florin
Hello!

I would like to know what the available roles in Zeppelin are (besides
admin, which has *).
How can I create/define my own roles based on the actions that a user is
allowed to perform?
In the shiro.ini examples the roles are too generic: role1 and role2 both
have all actions allowed (*).

Can you please describe the fine-grained actions that I can add in a role?

I look forward to your answers.
Best regards,
 Florin


Re: Question About Zeppelin/Git...

2018-10-26 Thread Marcelo Marques
Hi Jeff,

Thanks for the update. You mean, in another place? Could you please give me
more details? Thank you for the information!!

*Marcelo Marques*


On Thu, Oct 25, 2018 at 11:48 PM Jeff Zhang  wrote:

>
> This seems to be a bug in GitNotebookRepo; please file a ticket for it.
>
>
> Marcelo Marques wrote on Fri, Oct 26, 2018, at 1:41 AM:
>
>> Hey!!
>>
>> First, sorry for the newbie subject. Starting with Zeppelin / GIT / ...
>>
>> I'm starting the configuration to work with Bitbucket and it looks like
>> it's working - I can commit, so that looks fine. The question is... when
>> I commit from Zeppelin, the user that sent the commit is always the user
>> that started the zeppelin daemon (the application user), not the
>> logged-in user. Can you give me some tips on where to look for this? I
>> mean, when we press commit, the logged-in user should be the one who
>> sends the commit, right?
>>
>> Thanks in advance! :)
>>
>> *Marcelo Marques*
>>
>