RE: Zeppelin and Java 8

2017-05-03 Thread Polina Marasanova
Thanks for your quick response. We haven't upgraded to 0.7.1 yet, but we plan to in the near future.
Possibly the JAVA_HOME parameter should be changed, as described here:
https://zeppelin.apache.org/docs/0.7.1/install/install.html

Cheers,
Polina

From: Jun Kim [i2r@gmail.com]
Sent: Thursday, 4 May 2017 12:37 PM
To: users@zeppelin.apache.org
Subject: Re: Zeppelin and Java 8

Hi! Zeppelin 0.7.1 is built with Java 8. And it throws an exception if you run 
it with Java 7.

You can see the related issue here:
https://issues.apache.org/jira/browse/ZEPPELIN-2405

Thanks :)

On Thursday, 4 May 2017 at 9:57 AM, Polina Marasanova [polina.marasan...@quantium.com.au] wrote:
Hi everyone,

We really need Java 8 in our Zeppelin. I saw a pull request about a year ago to upgrade the Java version; it was closed as not urgent.
Are there any updates regarding this matter?

How risky would it be if I changed the version to 1.8 in pom.xml myself and rebuilt it?

Cheers,
Polina
--
Taejun Kim

Data Mining Lab.
School of Electrical and Computer Engineering
University of Seoul



RE: Config for spark interpreter

2016-09-07 Thread Polina Marasanova
Hi,

We found a way to solve this issue. It's hacky, but it may give you some ideas for a proper fix.

Steps:
1. Unpack zeppelin-spark-0.6.0.jar inside zeppelin instance
2. Overwrite interpreter-setting.json
3. Repack jar
4. Restart Zeppelin
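The repack steps above can be sketched in a few lines using Python's standard zipfile module (a jar is just a zip archive). This is only an illustration on a stand-in jar in a temp directory: the jar name and JSON content are examples, not taken from a real install, and on a real instance you would point it at the zeppelin-spark jar under the Zeppelin interpreter directory and then restart Zeppelin (step 4).

```python
# Hedged sketch of steps 1-3: rewrite a jar with one entry replaced.
# The jar name and JSON content below are stand-ins, not from a real install.
import json
import tempfile
import zipfile
from pathlib import Path

def replace_jar_entry(jar: Path, entry: str, new_bytes: bytes) -> None:
    """Repack `jar` with `entry` replaced by `new_bytes` (zipfile cannot
    update an entry in place, so the whole archive is rewritten)."""
    tmp = jar.with_suffix(".tmp")
    with zipfile.ZipFile(jar) as src, zipfile.ZipFile(tmp, "w") as dst:
        for item in src.infolist():
            data = new_bytes if item.filename == entry else src.read(item)
            dst.writestr(item, data)
    tmp.replace(jar)  # atomic swap of the repacked jar

# Demo on a stand-in jar in a temp directory
work = Path(tempfile.mkdtemp())
jar = work / "zeppelin-spark-0.6.0.jar"
with zipfile.ZipFile(jar, "w") as z:
    z.writestr("interpreter-setting.json", json.dumps({"edited": False}))

replace_jar_entry(jar, "interpreter-setting.json",
                  json.dumps({"edited": True}).encode())

with zipfile.ZipFile(jar) as z:
    print(json.loads(z.read("interpreter-setting.json")))  # {'edited': True}
```

On a real install you would pass in your edited interpreter-setting.json as the new bytes; the `jar` command-line tool achieves the same result if you prefer.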

Regards,
Polina


From: Polina Marasanova [polina.marasan...@quantium.com.au]
Sent: Thursday, 8 September 2016 1:46 PM
To: users@zeppelin.apache.org
Subject: RE: Config for spark interpreter

Hi Mina,

Thank you for your response.
I double-checked approaches 1 and 3, still no luck.
Probably the point is that I'm not using the default Zeppelin binary zeppelin-0.6.0-bin-netinst.tgz, because it doesn't fit our cluster architecture (we use MapR 5.1). I built it from the source code for version 0.6.0 provided here, https://zeppelin.apache.org/download.html, using the parameters:
mvn clean package -Pspark-1.6 -Pmapr51 -Pyarn -DskipTests -DproxySet=true 
-DproxyHost=192.168.16.1 -DproxyPort=8080

And could you please clarify this point for me: "Here is one suspicious part you might have missed: did you create a new Spark interpreter after restarting Zeppelin?"
I don't create any new interpreters. I just want to use the default one with my custom settings.

Regards,
Polina


From: mina lee [mina...@apache.org]
Sent: Friday, 12 August 2016 2:08 PM
To: users@zeppelin.apache.org
Subject: Re: Config for spark interpreter

Hi Polina,

I tried the first/third approach with zeppelin-0.6.0-bin-netinst.tgz and both seem to work for me.

> 3. restart zeppelin, check interpreter tab.
Here is one suspicious part you might have missed: did you create a new Spark interpreter after restarting Zeppelin?

One thing you need to keep in mind is that properties defined in interpreter/spark/interpreter-setting.json take priority over spark-defaults.conf.
To be more specific, you will need to change the interpreter/spark/interpreter-setting.json file if you want to change one of the properties below:

  *   spark.app.name
  *   spark.executor.memory
  *   spark.cores.max
  *   spark.master

Properties other than the above can be configured in spark-defaults.conf.
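For reference, those properties are declared in the `properties` map of interpreter-setting.json; in the 0.6.x line an entry looked roughly like the sketch below. The field names and default values here are a from-memory approximation of the linked file and may differ between versions, so check the actual file before editing.

```json
[
  {
    "group": "spark",
    "name": "spark",
    "className": "org.apache.zeppelin.spark.SparkInterpreter",
    "properties": {
      "spark.master": {
        "envName": "MASTER",
        "propertyName": "spark.master",
        "defaultValue": "local[*]",
        "description": "Spark master uri."
      }
    }
  }
]
```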

Regards,
Mina

On Tue, Aug 9, 2016 at 8:12 AM, Polina Marasanova [polina.marasan...@quantium.com.au] wrote:
Hi,

I think I've hit a bug in Zeppelin 0.6.0.
What happens: I'm not able to override Spark interpreter properties from config, only via the GUI.
What I've tried:

First approach (worked on a previous version):
1. export SPARK_CONF_DIR=/zeppelin/conf/spark
2. add my "spark-defaults.conf" file to this directory
3. restart zeppelin, check interpreter tab.
doesn't work

Second approach:
1. add "spark-defaults.conf" to ZEPPELIN_CONF_DIR
2. pass this directory to starting script /bin/zeppelin.sh --config 
ZEPPELIN_CONF_DIR
doesn't work

Third approach:
1. Create a file interpreter-setting.json with my configs similar to this one 
https://github.com/apache/zeppelin/blob/master/spark/src/main/resources/interpreter-setting.json
2. add to directory /zeppelin/interpreter/spark
3. restart zeppelin
According to the logs, Zeppelin actually fetches this file, but there are still no changes in the GUI.

If I add all the properties via the GUI it works OK, but that means users have to do it themselves, which is a bit of a pain.

Has anyone seen something similar to my issue?

Thank you.

Regards,
Polina

From: ndj...@gmail.com [ndj...@gmail.com]
Sent: Monday, 8 August 2016 3:41 PM
To: users@zeppelin.apache.org
Subject: Re: Config for spark interpreter
Subject: Re: Config for spark interpreter

Hi Polina,

You can just define SPARK_HOME in conf/zeppelin-env.sh and get rid of any other Spark configuration in that file, otherwise Zeppelin will just override it. Once this is done, you can define the Spark default configuration in Spark's own config file, conf/spark-defaults.conf.
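A minimal sketch of what that leaves in conf/zeppelin-env.sh (the Spark path is an example, not taken from this thread):

```shell
# conf/zeppelin-env.sh -- keep only SPARK_HOME here, per the advice above
export SPARK_HOME=/opt/spark   # example path; point at your Spark install

# Everything Spark-specific (spark.master, executor memory, ...) then lives
# in $SPARK_HOME/conf/spark-defaults.conf instead of this file.
```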

Cheers,
Ndjido.

> On 08 Aug 2016, at 07:31, Polina Marasanova [polina.marasan...@quantium.com.au] wrote:
>
> Hi everyone,
>
> I have a question: in previous versions of Zeppelin, all interpreter settings were stored in a file called "interpreter.json". It was very convenient to provide some default Spark settings there, such as spark.master, default driver memory, etc.
> What is the best way in version 0.6.0 to provide a bunch of defaults specific to a project?
>
> Thanks
> Cheers
> Polina Marasanova



RE: ActiveDirectoryGroupRealm.java allows user outside of searchBase to login

2016-09-07 Thread Polina Marasanova
Related to this issue:

One more thing. In Zeppelin logs there are many messages like this

16/09/08 02:03:46 DEBUG NotebookServer: RECEIVE << PING
16/09/08 02:03:46 DEBUG NotebookServer: RECEIVE PRINCIPAL << 
16/09/08 02:03:46 DEBUG NotebookServer: RECEIVE TICKET << 
16/09/08 02:03:46 DEBUG NotebookServer: RECEIVE ROLES << 
16/09/08 02:03:46 ERROR NotebookServer: Can't handle message
java.lang.Exception: Invalid ticket  != f2810e7a-de64-4e41-b615-f31cd5bf7d68
        at org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:117)
        at org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:56)
        at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
        at org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
        at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
        at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
        at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:161)
        at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:309)
        at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:214)
        at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:220)
        at org.eclipse.jetty.websocket.common.Parser.parse(Parser.java:258)
        at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:632)
        at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:480)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)

Looks like it's related to the auth process.
________
From: Polina Marasanova [polina.marasan...@quantium.com.au]
Sent: Thursday, 8 September 2016 10:41 AM
To: users@zeppelin.apache.org; d...@zeppelin.incubator.apache.org; 
us...@zeppelin.incubator.apache.org
Subject: RE: ActiveDirectoryGroupRealm.java allows user outside of searchBase 
to login

Hi everyone,

I'm experiencing exactly the same problem with Zeppelin 0.6.0:
the Shiro plugin lets everyone in, and it cannot be limited by searchBase.
Here is an example of my config. In fact it lets in everyone from OU=Users.

[main]
### A sample for configuring Active Directory Realm
activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername = userNameA
activeDirectoryRealm.systemPassword = passwordA
activeDirectoryRealm.searchBase = "CN=Notebook Owner,OU=Software Development,OU=Users,DC=companyname,DC=local"
activeDirectoryRealm.principalSuffix = @companyname.local
activeDirectoryRealm.url = ldap://ldap-server.local:389
activeDirectoryRealm.groupRolesMap = "CN=Notebook Owner,OU=Software Development,OU=Users,DC=companyname,DC=local":"admin"
activeDirectoryRealm.authorizationCachingEnabled = false
securityManager.realms = $activeDirectoryRealm


sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager

securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 8640
shiro.loginUrl = /api/login

Could you please take care of this issue? We are seriously blocked by it, but we really want to start using 0.6.0.

Cheers
Thanks
Polina Marasanova

From: Weipu Zhao [zhaoweipu@gmail.com]
Sent: Sunday, 21 August 2016 4:37 AM
To: d...@zeppelin.incubator.apache.org; us...@zeppelin.incubator.apache.org
Subject: ActiveDirectoryGroupRealm.java allows user outside of searchBase to 
login

Hi guys,

When using org.apache.zeppelin.server.ActiveDirectoryGroupRealm as my Shiro realm on v0.6.0, I have trouble understanding the searchBase config. My understanding was that Shiro should only allow users within that searchBase to log in, but that seems not to be the case. When I trace the code of ActiveDirectoryGroupRealm.java, the only place searchBase is used is in the method getRoleNamesForUser (https://github.com/apache/zeppelin/blob/v0.6.0/zeppelin-server/src/main/java/org/apache/zeppelin/server/ActiveDirectoryGroupRealm.java#L162): if the user is not inside the searchBase, an empty roleNames is returned without any exception, so the user is logged in anyway, I guess?

I'm not sure whether this is expected behaviour or not. I also tried v0.6.1 and it seems to have the same behaviour. In general, I just want to restrict login to users in certain groups of Active Directory. Is that possible without writing our own Realm?

Thanks,
Weipu




RE: Authenticate 1 user per notebook

2016-09-07 Thread Polina Marasanova
One more thing. In Zeppelin logs there are many messages like this:

16/09/08 02:03:46 DEBUG NotebookServer: RECEIVE << PING
16/09/08 02:03:46 DEBUG NotebookServer: RECEIVE PRINCIPAL << 
16/09/08 02:03:46 DEBUG NotebookServer: RECEIVE TICKET << 
16/09/08 02:03:46 DEBUG NotebookServer: RECEIVE ROLES << 
16/09/08 02:03:46 ERROR NotebookServer: Can't handle message
java.lang.Exception: Invalid ticket  != f2810e7a-de64-4e41-b615-f31cd5bf7d68
        at org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:117)
        at org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:56)
        at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
        at org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
        at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
        at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
        at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:161)
        at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:309)
        at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:214)
        at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:220)
        at org.eclipse.jetty.websocket.common.Parser.parse(Parser.java:258)
        at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:632)
        at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:480)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)

Looks like it's related to the auth process.
________
From: Polina Marasanova [polina.marasan...@quantium.com.au]
Sent: Wednesday, 17 August 2016 10:48 AM
To: users@zeppelin.apache.org
Subject: Authenticate 1 user per notebook

Hi everyone,

I'm back with my authentication questions. Here is my shiro.ini config file.
The problem is that it lets in all users from the search base "OU=Users,DC=companyname,DC=local".
How can I restrict access to only the one user who owns a notebook? The zeppelin-daemon.sh process is run by this user.

[main]
activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername = userNameA
activeDirectoryRealm.systemPassword = passwordA
activeDirectoryRealm.searchBase = "OU=Users,DC=companyname,DC=local"
activeDirectoryRealm.principalSuffix = @companyname.local
activeDirectoryRealm.url = ldap://ldapserver.companyname.local:389
activeDirectoryRealm.groupRolesMap = "OU=Users,DC=companyname,DC=local":"admin"
activeDirectoryRealm.authorizationCachingEnabled = false
securityManager.realms = $activeDirectoryRealm

sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager

securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 8640
shiro.loginUrl = /api/login

[roles]
admin = *

[urls]
/** = authc


Thanks
Cheers
Polina



RE: Securing ldap password in shiro.ini

2016-08-03 Thread Polina Marasanova
Moon, Vinay,

Thank you for your response!
Unfortunately, password hashing works only in the [users] section of the shiro.ini file, and the LDAP credentials are listed in the [main] section.
I'm happy that you have a ticket about it. I hope it will be fixed soon.

Regards,
Polina

From: Vinay Shukla [vinayshu...@gmail.com]
Sent: Thursday, 4 August 2016 8:53 AM
To: users@zeppelin.apache.org
Subject: Re: Securing ldap password in shiro.ini

Moon, Polina,

Unfortunately the Shiro password hasher won't help.

Polina's use case is encrypting the AD/LDAP password that Zeppelin uses to connect to an AD/LDAP server.

The Shiro password hasher encrypts the passwords that Zeppelin stores when AD/LDAP is not used and user accounts are kept in Shiro itself.

ZEPPELIN-530 tracks this requirement.

thanks,
Vinay

On Wed, Aug 3, 2016 at 10:14 AM, moon soo Lee [m...@apache.org] wrote:
You can check
http://shiro.apache.org/configuration.html#Configuration-EncryptingPasswords
http://shiro.apache.org/command-line-hasher.html

It looks useful for encrypting users' passwords. The document says it works for other types of resources as well. I haven't tried it, but I hope it works for your case, too.

Thanks,
moon

On Mon, Aug 1, 2016 at 10:08 PM, Polina Marasanova [polina.marasan...@quantium.com.au] wrote:
Hi everyone,

I'm using Zeppelin with Active Directory authentication. Our LDAP server 
requires authentication as well.
The problem is that the LDAP password is visible in plain text in shiro.ini, and users can browse it via the %sh interpreter:

activeDirectoryRealm.systemUsername = my_login
activeDirectoryRealm.systemPassword = secret

What would be a good way to secure the LDAP password?

Cheers,

Polina Marasanova




PAM authentication

2016-06-09 Thread Polina Marasanova
Hello,

This is probably a stupid question: is it possible to configure Apache Shiro to use PAM?
I found this project on GitHub, which might be useful:
https://github.com/plaflamme/shiro-libpam4j

Cheers
Polina

RE: Interpreter groups list empty

2016-06-07 Thread Polina Marasanova
Thank you so much! It was incredibly fast

From: Jongyoul Lee [jongy...@gmail.com]
Sent: Tuesday, 7 June 2016 11:32 PM
To: users@zeppelin.apache.org
Subject: Re: Interpreter groups list empty

Hi Polina,

It was a bug, and I fixed it. You shouldn't see that bug with the current master.

Hope this helps,
Jongyoul

On Tue, Jun 7, 2016 at 4:39 PM, Polina Marasanova [polina.marasan...@quantium.com.au] wrote:
Hi all,

I recently built Zeppelin-0.6.0-SNAPSHOT, and it looks and works great, thank you.
Unfortunately, I cannot add my custom interpreter via the interpreters tab. I need Apache Drill (connected via JDBC).

Steps to reproduce:

1. Go to interpreters tab
2. Click "+Create"
3. Insert: name, properties, dependencies
4. Try to choose interpreter group from the list. It's empty
5. Try to save

Result: pop-up message "Please fill in interpreter name and choose a group"
Please find a screenshot attached.

What could be the cause of this issue? Probably some configs are missing in my case.

Regards,
Polina



--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net



RE: Getting 'File name too long' error when running scala code in Zeppelin

2016-05-30 Thread Polina Marasanova
Thank you Moon!

https://issues.apache.org/jira/browse/ZEPPELIN-926
Here is an issue. If you need any other details feel free to ask.

From: moon soo Lee [m...@apache.org]
Sent: Tuesday, 31 May 2016 3:07 PM
To: users@zeppelin.apache.org
Subject: Re: Getting 'File name too long' error when running scala code in 
Zeppelin

Do you mind filing an issue?
I think I can quickly make a patch.

Thanks,
moon

On Sun, May 29, 2016 at 5:53 PM, Polina Marasanova [polina.marasan...@quantium.com.au] wrote:
Thanks for the response

I added 'args' property at the interpreter tab like this:

spark (%spark (default), %pyspark, %sql, %dep)
Properties:

    name:  args
    value: -Xmax-classfile-name=128

But unfortunately the error is still reproducible. Is there any other way to do it?

From: moon soo Lee [m...@apache.org]
Sent: Saturday, 28 May 2016 6:00 AM
To: users@zeppelin.apache.org
Subject: Re: Getting 'File name too long' error when running scala code in Zeppelin

Hi,

You can see the 'args' property in the Spark interpreter settings, in the interpreter menu. Could you try it there and see if it works?

Thanks,
moon

On Fri, May 27, 2016 at 12:18 AM, Polina Marasanova [polina.marasan...@quantium.com.au] wrote:
Hello,

Sometimes when running Scala code in Zeppelin I get a 'File name too long' error. To keep working in the notebook, I need to restart the interpreter afterwards.
It seems to be a Scala bug that can be fixed by this workaround: https://issues.apache.org/jira/browse/SPARK-4820
Could you please advise me on the right way to apply this option?
I already tried adding this to spark-defaults.conf:
 spark.driver.extraJavaOptions   -Xmax-classfile-name=128
And it doesn't help.

Thank you

Cheers
Polina Marasanova

