Re: java.lang.ClassNotFoundException: $anonfun$1

2017-03-08 Thread Rob Anderson
Rewriting the paragraph to use the SparkSession fixed most of the class
issues.  The remaining class issues were resolved by upgrading to
SPARK2-2.0.0.cloudera2-1, which was released on 2017-02-24 (
http://community.cloudera.com/t5/Community-News-Release/ANNOUNCE-Spark-2-0-Release-2/m-p/51464#M161
).
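For reference, the rewritten paragraph looked roughly like this (a sketch: the exact code isn't shown in this thread; the path and transformation are taken from the examples below):

```scala
%spark
// Hypothetical rewritten paragraph: go through the Spark 2 SparkSession
// ("spark") Dataset API instead of the SparkContext RDD API.
// spark.read.textFile returns a Dataset[String] in Spark 2.0.
val taxonomy = spark.read.textFile("/user/user1/data/")
  .map(l => l.split("\t"))
taxonomy.first
```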

Thanks,

Rob





On Tue, Mar 7, 2017 at 4:49 PM, Jianfeng (Jeff) Zhang <
jzh...@hortonworks.com> wrote:

> >>>  It appears that  during execution time on the yarn hosts, the native
> CDH spark1.5 jars are loaded before the new spark2 jars.  I've tried
> using spark.yarn.archive to specify the spark2 jars in hdfs as well as
> using other spark options, none of which seems to make a difference.
>
> Where do you see “ spark1.5 jars are loaded before the new spark2 jars” ?
>
> Best Regard,
> Jeff Zhang
>
>
> From: Rob Anderson 
> Reply-To: "users@zeppelin.apache.org" 
> Date: Wednesday, March 8, 2017 at 2:29 AM
> To: "users@zeppelin.apache.org" 
> Subject: Re: java.lang.ClassNotFoundException: $anonfun$1
>
> Thanks.  I can reach out to Cloudera, although the same commands seem to
> work via spark-shell (see below).  So the issue seems unique to Zeppelin.
>
> Spark context available as 'sc' (master = yarn, app id =
> application_1472496315722_481416).
>
> Spark session available as 'spark'.
>
> Welcome to
>
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.0.0.cloudera1
>       /_/
>
>
>
> Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java
> 1.8.0_60)
>
> Type in expressions to have them evaluated.
>
> Type :help for more information.
>
>
> scala> val taxonomy = sc.textFile("/user/user1/data/")
>
> taxonomy: org.apache.spark.rdd.RDD[String] = /user/user1/data/ MapPartitionsRDD[1] at textFile at <console>:24
>
>
> scala> .map(l => l.split("\t"))
>
> res0: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[2] at map at <console>:27
>
>
> scala> taxonomy.first
>
> res1: String = 43 B&B 459 Sheets & Pillow 45 Sheets 1 Sheets
>
> On Mon, Mar 6, 2017 at 6:48 PM, moon soo Lee  wrote:
>
>> Hi Rob,
>>
>> Thanks for sharing the problem.
>> fyi, https://issues.apache.org/jira/browse/ZEPPELIN-1735 is tracking the
>> problem.
>>
>> If we can get help from cloudera forum, that would be great.
>>
>> Thanks,
>> moon
>>
>> On Tue, Mar 7, 2017 at 10:08 AM Jeff Zhang  wrote:
>>
>>>
>>> It seems a CDH-specific issue; you might be better off asking on the Cloudera forum.
>>>
>>>
>>> Rob Anderson 于2017年3月7日周二 上午9:02写道:
>>>
>>> Hey Everyone,
>>>
>>> We're running Zeppelin 0.7.0.  We've just cut over to spark2, using
>>> scala11 via the CDH parcel (SPARK2-2.0.0.cloudera1-1.cdh5.7.0.p0.113931).
>>>
>>> Running a simple job, throws a "Caused by:
>>> java.lang.ClassNotFoundException: $anonfun$1".  It appears that  during
>>> execution time on the yarn hosts, the native CDH spark1.5 jars are loaded
>>> before the new spark2 jars.  I've tried using spark.yarn.archive to specify
>>> the spark2 jars in hdfs as well as using other spark options, none of which
>>> seems to make a difference.
>>>
>>>
>>> Any suggestions you can offer is appreciated.
>>>
>>> Thanks,
>>>
>>> Rob
>>>
>>> 
>>>
>>>
>>> %spark
>>> val taxonomy = sc.textFile("/user/user1/data/")
>>>  .map(l => l.split("\t"))
>>>
>>> %spark
>>> taxonomy.first
>>>
>>>
>>> org.apache.spark.SparkException: Job aborted due to stage failure: Task
>>> 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage
>>> 1.0 (TID 7, data08.hadoop.prod.ostk.com, executor 2):
>>> java.lang.ClassNotFoundException: $anonfun$1
>>> at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:82)
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>> at java.lang.Class.forName0(Native Method)
>>> at java.lang.Class.forName(Class.java:348)
>>> at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
>>> at java.io

Re: java.lang.ClassNotFoundException: $anonfun$1

2017-03-07 Thread Rob Anderson
Thanks.  I can reach out to Cloudera, although the same commands seem to
work via spark-shell (see below).  So the issue seems unique to Zeppelin.

Spark context available as 'sc' (master = yarn, app id =
application_1472496315722_481416).

Spark session available as 'spark'.

Welcome to

      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0.cloudera1
      /_/



Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java
1.8.0_60)

Type in expressions to have them evaluated.

Type :help for more information.


scala> val taxonomy = sc.textFile("/user/user1/data/")

taxonomy: org.apache.spark.rdd.RDD[String] = /user/user1/data/ MapPartitionsRDD[1] at textFile at <console>:24


scala> .map(l => l.split("\t"))

res0: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[2] at map at <console>:27


scala> taxonomy.first

res1: String = 43 B&B 459 Sheets & Pillow 45 Sheets 1 Sheets

On Mon, Mar 6, 2017 at 6:48 PM, moon soo Lee  wrote:

> Hi Rob,
>
> Thanks for sharing the problem.
> fyi, https://issues.apache.org/jira/browse/ZEPPELIN-1735 is tracking the
> problem.
>
> If we can get help from cloudera forum, that would be great.
>
> Thanks,
> moon
>
> On Tue, Mar 7, 2017 at 10:08 AM Jeff Zhang  wrote:
>
>>
>> It seems a CDH-specific issue; you might be better off asking on the Cloudera forum.
>>
>>
>> Rob Anderson 于2017年3月7日周二 上午9:02写道:
>>
>> Hey Everyone,
>>
>> We're running Zeppelin 0.7.0.  We've just cut over to spark2, using
>> scala11 via the CDH parcel (SPARK2-2.0.0.cloudera1-1.cdh5.7.0.p0.113931).
>>
>> Running a simple job, throws a "Caused by: java.lang.ClassNotFoundException:
>> $anonfun$1".  It appears that  during execution time on the yarn hosts,
>> the native CDH spark1.5 jars are loaded before the new spark2 jars.  I've
>> tried using spark.yarn.archive to specify the spark2 jars in hdfs as well
>> as using other spark options, none of which seems to make a difference.
>>
>>
>> Any suggestions you can offer is appreciated.
>>
>> Thanks,
>>
>> Rob
>>
>> 
>>
>>
>> %spark
>> val taxonomy = sc.textFile("/user/user1/data/")
>>  .map(l => l.split("\t"))
>>
>> %spark
>> taxonomy.first
>>
>>
>> org.apache.spark.SparkException: Job aborted due to stage failure: Task
>> 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage
>> 1.0 (TID 7, data08.hadoop.prod.ostk.com, executor 2): java.lang.ClassNotFoundException: $anonfun$1
>> at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:82)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>> at java.lang.Class.forName0(Native Method)
>> at java.lang.Class.forName(Class.java:348)
>> at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
>> at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
>> at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
>> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
>> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
>> at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
>> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
>> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
>> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
>> at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
>> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
>> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
>> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
>> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
>> at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
>> at org.apache.spark.serializer.Java

java.lang.ClassNotFoundException: $anonfun$1

2017-03-06 Thread Rob Anderson
Hey Everyone,

We're running Zeppelin 0.7.0.  We've just cut over to Spark 2, using Scala
2.11 via the CDH parcel (SPARK2-2.0.0.cloudera1-1.cdh5.7.0.p0.113931).

Running a simple job throws "Caused by: java.lang.ClassNotFoundException:
$anonfun$1".  It appears that at execution time on the YARN hosts, the
native CDH Spark 1.5 jars are loaded before the new Spark 2 jars.  I've
tried using spark.yarn.archive to point at the Spark 2 jars in HDFS, as
well as other Spark options, none of which makes a difference.
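The kind of configuration attempted looked roughly like this (a sketch; the property names are the standard Spark 2 on YARN / Zeppelin ones, but the archive path and parcel location are placeholders):

```
# spark-defaults.conf (or the Zeppelin Spark interpreter settings):
# point executors at an archive of the Spark 2 jars in HDFS
spark.yarn.archive  hdfs:///user/spark/share/lib/spark2-jars.zip

# zeppelin-env.sh: point Zeppelin at the Spark 2 parcel, not the CDH default
export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
```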


Any suggestions you can offer is appreciated.

Thanks,

Rob




%spark
val taxonomy = sc.textFile("/user/user1/data/")
 .map(l => l.split("\t"))

%spark
taxonomy.first


org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage
1.0 (TID 7, data08.hadoop.prod.ostk.com, executor 2):
java.lang.ClassNotFoundException: $anonfun$1
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:82)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: $anonfun$1
at java.lang.ClassLoader.findClass(ClassLoader.java:530)
at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.scala:26)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.scala:34)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.scala:30)
at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:77)
... 30 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1669)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1624)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1613)

Re: Monitoring Zeppelin Health Via Rest API

2017-01-25 Thread Rob Anderson
Ok, thanks for the reply Jongyoul.

On Wed, Jan 25, 2017 at 12:13 AM, Jongyoul Lee  wrote:

> AFAIK, Zeppelin doesn't have it for now. We have to develop that function.
>
> Regards,
> Jongyoul
>
> On Wed, Jan 25, 2017 at 3:18 AM, Rob Anderson 
> wrote:
>
>> Hello,
>>
>> We're running Zeppelin 0.6.2  and authenticating against Active
>> Directory via Shiro.  Everything is working pretty well, however, we do
>> occasionally have issues, which is leading to a bad user experience, as
>> operationally we're unaware of a problem.
>>
>> We'd like to monitor the health of Zeppelin via the rest api, however, I
>> don't see a way to programmatically authenticate, so we can make the
>> calls.  Does anyone have any recommendations?
>>
>> Thanks,
>>
>> Rob
>>
>
>
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>


Monitoring Zeppelin Health Via Rest API

2017-01-24 Thread Rob Anderson
Hello,

We're running Zeppelin 0.6.2 and authenticating against Active Directory
via Shiro.  Everything is working pretty well; however, we occasionally
have issues that lead to a bad user experience, because operationally
we're unaware of a problem.

We'd like to monitor the health of Zeppelin via the REST API; however, I
don't see a way to authenticate programmatically so that we can make the
calls.  Does anyone have any recommendations?
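The kind of programmatic flow I'm after would be something like this (a sketch against Zeppelin's Shiro-backed /api/login endpoint; host, port, and credentials are placeholders):

```
# Log in once and capture the session cookie...
curl -s -c /tmp/zeppelin-cookies \
  -d 'userName=monitor' -d 'password=secret' \
  http://zeppelin-host:8080/api/login

# ...then reuse the cookie for health checks, e.g. listing notebooks.
curl -s -b /tmp/zeppelin-cookies http://zeppelin-host:8080/api/notebook
```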

Thanks,

Rob


does %jdbc(hive) work with kerberos in 0.6.2?

2016-12-22 Thread Rob Anderson
Does %jdbc(hive) work with Kerberos in 0.6.2?  I know it works with the 0.7
snapshot, but I'm having issues getting it to work in 0.6.2.

Please advise.
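For anyone searching later: in the 0.7 snapshot where this works, the JDBC interpreter exposes Kerberos settings along these lines (property names as in the 0.7 JDBC interpreter docs; the principal and keytab path are placeholders):

```
zeppelin.jdbc.auth.type         KERBEROS
zeppelin.jdbc.principal         zeppelin/_HOST@EXAMPLE.COM
zeppelin.jdbc.keytab.location   /etc/security/keytabs/zeppelin.keytab
```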

Thanks,

Rob


Re: Hive interpreter connecting to secure cluster

2016-07-11 Thread Rob Anderson
Excellent, I'll test it this afternoon!  Thanks!

On Sat, Jul 9, 2016 at 8:24 AM, Prabhjyot Singh 
wrote:

> Hi Rob,
>
> I have a pull request ready for review:
>
> https://github.com/apache/zeppelin/pull/1157
>
> Looking forward to your review.
>
> On 8 July 2016 at 01:11, Rob Anderson  wrote:
>
>> Great news, thanks!
>>
>> On Thu, Jul 7, 2016 at 12:27 PM, Vinay Shukla 
>> wrote:
>>
>>> We are working on it, expect to fix it in next couple of weeks.
>>>
>>> Thanks,
>>> Vinay
>>>
>>>
>>> On Thursday, July 7, 2016, Rob Anderson 
>>> wrote:
>>>
>>>> Does anyone know if/when the Hive interpreter will be able to connect
>>>> to a secure, kerberized cluster?
>>>>
>>>> Thanks,
>>>>
>>>> Rob
>>>>
>>>
>>
>
>
> --
> Thankx and Regards,
>
> Prabhjyot Singh
>


Re: Hive interpreter connecting to secure cluster

2016-07-07 Thread Rob Anderson
Great news, thanks!

On Thu, Jul 7, 2016 at 12:27 PM, Vinay Shukla  wrote:

> We are working on it, expect to fix it in next couple of weeks.
>
> Thanks,
> Vinay
>
>
> On Thursday, July 7, 2016, Rob Anderson 
> wrote:
>
>> Does anyone know if/when the Hive interpreter will be able to connect to
>> a secure, kerberized cluster?
>>
>> Thanks,
>>
>> Rob
>>
>


Hive interpreter connecting to secure cluster

2016-07-07 Thread Rob Anderson
Does anyone know if/when the Hive interpreter will be able to connect to a
secure, kerberized cluster?

Thanks,

Rob


Re: Shiro LDAP w/ Search Bind Authentication

2016-07-06 Thread Rob Anderson
You can find some documentation on it here:
https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/security/shiroauthentication.html

I believe you'll need to be running the 0.6 release or the 0.7 snapshot to
use Shiro.

We're authing against AD via ldaps calls without issue.  We're then using
group memberships to define roles and control access to notebooks.

Hope that helps.

Rob


On Wed, Jul 6, 2016 at 2:01 PM, Benjamin Kim  wrote:

> I have been trying to find documentation on how to enable LDAP
> authentication, but I cannot find how to enter the values for these
> configurations. This is necessary because our LDAP server is secured. Here
> are the properties that I need to set:
>
>- ldap_cert
>- use_start_tls
>- bind_dn
>- bind_password
>
>
> Can someone help?
>
> Thanks,
> Ben
>
>


Re: Phoenix plugin with a secure cluster.

2016-07-06 Thread Rob Anderson
Great, thanks Vinay.  We're not currently using jdbc, but certainly would
be open to it.

Slight change of topic (I can open a new thread if preferred). Do you
happen to know if the hive interpreter can connect to a secure cluster?

Thanks again

On Wed, Jul 6, 2016 at 11:50 AM, Vinay Shukla  wrote:

> Don't think today Phoenix interpreter supports Kerberos.
>
> Are you using JDBC interpreter by any chance? (Which also doesn't yet
> support Kerberos, AFAIK).
>
> We will try to resolve this soon.
>
> Thanks,
> Vinay
>
> On Wed, Jul 6, 2016 at 10:37 AM, Rob Anderson 
> wrote:
>
>> Does anyone know if the Phoenix plugin can connect to a secure
>> cluster...?  Poking around, I didn't see any docs that indicated that
>> kerberos is supported, or how one would configure such a use case.
>>
>> We're running cdh5.5.2 (hbase-1.0.0 with phoenix-4.7).
>>
>> Please advise.
>>
>> Thanks,
>>
>> Rob
>>
>
>


Phoenix plugin with a secure cluster.

2016-07-06 Thread Rob Anderson
Does anyone know if the Phoenix plugin can connect to a secure cluster...?
Poking around, I didn't see any docs that indicated that kerberos is
supported, or how one would configure such a use case.

We're running cdh5.5.2 (hbase-1.0.0 with phoenix-4.7).

Please advise.

Thanks,

Rob


Re: Authentication in zeppelin

2016-06-22 Thread Rob Anderson
There was a bug fix / enhancement that went out last week to support
group-to-role mappings from a directory server via LDAP(S) calls.  See
https://github.com/apache/zeppelin/pull/986.  I'm not sure whether it's
compatible with JWT tokens; I would guess not.

I'm using AD on the back end. I've got groups mapped to roles, which are
then used for the notebook R/W permissions.  Works great.

Rob
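A sketch of the shiro.ini realm section this kind of setup uses (the realm class name is from the Zeppelin 0.6+ security docs; the host and DNs are placeholders):

```ini
[main]
# Realm that resolves AD group memberships into Shiro roles
activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
activeDirectoryRealm.url = ldaps://ad.company.com:636
activeDirectoryRealm.searchBase = DC=company,DC=com
activeDirectoryRealm.groupRolesMap = "CN=zeppelinWrite,OU=groups,DC=company,DC=com":"admin"
securityManager.realms = $activeDirectoryRealm
```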

On Wed, Jun 22, 2016 at 2:07 AM, Abhisar Mohapatra <
abhisar.mohapa...@inmobi.com> wrote:

>
> I am using basic Shiro based authentication inbuilt in Zeppelin 0.6.0.
> I have got a certain use case where we have a separate SSO system which
> once successfully authenticated gives me back a JWT token with user info
> and groups. Can this info be used to give notebook level read-write access
> and share access ?
>
>
> Thanks,
> Abhisar
>
>
>
> _
> The information contained in this communication is intended solely for the
> use of the individual or entity to whom it is addressed and others
> authorized to receive it. It may contain confidential or legally privileged
> information. If you are not the intended recipient you are hereby notified
> that any disclosure, copying, distribution or taking any action in reliance
> on the contents of this information is strictly prohibited and may be
> unlawful. If you have received this communication in error, please notify
> us immediately by responding to this email and then delete it from your
> system. The firm is neither liable for the proper and complete transmission
> of the information contained in this communication nor for any delay in its
> receipt.


Re: Zeppelin authentication / permissions while using both Shiro and AD

2016-06-02 Thread Rob Anderson
Done, thanks.

https://issues.apache.org/jira/browse/ZEPPELIN-946

On Wed, Jun 1, 2016 at 1:06 PM, Vinay Shukla  wrote:

> Rob,
>
> It appears to be bug, can you please file a JIRA to track this?
>
> Thanks,
> Vinay
>
> On Fri, May 27, 2016 at 7:52 AM, Rob Anderson 
> wrote:
>
>> Hey Everyone,
>>
>> I'm new to Zeppelin as of this week.  I've managed to build and stand up
>> the 0.6.0-incubating-SNAPSHOT.  I've configured Zeppelin to
>> authenticate via Shiro using Active Directory.  I'm able
>> to authenticate without issue.
>>
>> I'm having a problem setting / honoring notebook specific permissions.
>> Based on the documentation, I should be able to specify a user or group for
>> the read, write or ownership permissions (
>> https://zeppelin.incubator.apache.org/docs/0.6.0-incubating-SNAPSHOT/security/notebook_authorization.html).
>> This works as expected if I specify a username, but groups and roles do not
>> seem to work.
>>
>> Error:
>> Insufficient privileges to write notebook.
>> Allowed users or roles: [admin, zeppelinWrite]
>> But the user randerson belongs to: [randerson]
>>
>> It seems clear that user randerson isn't mapped to any roles or groups
>> (even though he of course is a member of the zeppelinWrite group in AD
>> and as a result also part of the local admin Role).  A TCPDUMP reveals
>> that during login, all of my group memberships are in fact returned during
>> the ldap bind operation.  However, when I attempt to modify a notebook, a
>> call is never made to AD, to pull back my group memberships.  It doesn't
>> seem to look at my local group memberships (/etc/group) either.
>>
>> I'm guessing I'm misunderstanding a concept(s) and / or missing a config
>> option(s) (although I have tried numerous combinations of everything I can
>> find online).  My Shiro.ini is listed below.  Any help you can offer is
>> appreciated.
>>
>> Thanks much,
>>
>> Rob
>> ---
>> shiro.ini
>>
>> [users]
>>
>> [main]
>> adRealm = org.apache.shiro.realm.activedirectory.ActiveDirectoryRealm
>> adRealm.url = ldap://:389
>> adRealm.groupRolesMap = "cn=zeppelinWrite,ou=unix
>> groups,ou=groups,ou=accounts,cn=users,dc=company,dc=com":"admin"
>> adRealm.searchBase = DC=company,DC=com
>> adRealm.systemUsername= 
>> adRealm.systemPassword= 
>> adRealm.principalSuffix=<@company>
>>
>> sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
>> securityManager.sessionManager = $sessionManager
>> securityManager.sessionManager.globalSessionTimeout = 8640
>> shiro.loginUrl = /api/login
>> securityManager.realms = $adRealm
>> [roles]
>> admin = *
>> [urls]
>> /api/version = anon
>> /** = authcBasic
>>
>>
>


Fwd: Zeppelin authentication / permissions while using both Shiro and AD

2016-05-27 Thread Rob Anderson
Hey Everyone,

I'm new to Zeppelin as of this week.  I've managed to build and stand up
the 0.6.0-incubating-SNAPSHOT.  I've configured Zeppelin to authenticate
via Shiro using Active Directory.  I'm able to authenticate without issue.

I'm having a problem setting / honoring notebook specific permissions.
Based on the documentation, I should be able to specify a user or group for
the read, write or ownership permissions (
https://zeppelin.incubator.apache.org/docs/0.6.0-incubating-SNAPSHOT/security/notebook_authorization.html).
This works as expected if I specify a username, but groups and roles do not
seem to work.

Error:
Insufficient privileges to write notebook.
Allowed users or roles: [admin, zeppelinWrite]
But the user randerson belongs to: [randerson]

It seems clear that user randerson isn't mapped to any roles or groups
(even though he of course is a member of the zeppelinWrite group in AD and
as a result also part of the local admin Role).  A TCPDUMP reveals that
during login, all of my group memberships are in fact returned during the
ldap bind operation.  However, when I attempt to modify a notebook, a call
is never made to AD, to pull back my group memberships.  It doesn't seem to
look at my local group memberships (/etc/group) either.

I'm guessing I'm misunderstanding a concept(s) and / or missing a config
option(s) (although I have tried numerous combinations of everything I can
find online).  My Shiro.ini is listed below.  Any help you can offer is
appreciated.

Thanks much,

Rob
---
shiro.ini

[users]

[main]
adRealm = org.apache.shiro.realm.activedirectory.ActiveDirectoryRealm
adRealm.url = ldap://:389
adRealm.groupRolesMap = "cn=zeppelinWrite,ou=unix
groups,ou=groups,ou=accounts,cn=users,dc=company,dc=com":"admin"
adRealm.searchBase = DC=company,DC=com
adRealm.systemUsername= 
adRealm.systemPassword= 
adRealm.principalSuffix=<@company>

sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 8640
shiro.loginUrl = /api/login
securityManager.realms = $adRealm
[roles]
admin = *
[urls]
/api/version = anon
/** = authcBasic


Zeppelin authentication / permissions while using both Shiro and AD

2016-05-26 Thread Rob Anderson
Hey Everyone,

I'm new to Zeppelin as of this week.  I've managed to build and stand up
the 0.6.0-incubating-SNAPSHOT.  I've configured Zeppelin to authenticate
via Shiro using Active Directory.  I'm able to authenticate without issue.

I'm having a problem setting / honoring notebook specific permissions.
Based on the documentation, I should be able to specify a user or group for
the read, write or ownership permissions (
https://zeppelin.incubator.apache.org/docs/0.6.0-incubating-SNAPSHOT/security/notebook_authorization.html).
This works as expected if I specify a username, but groups and roles do not
seem to work.

Error:
Insufficient privileges to write notebook.
Allowed users or roles: [admin, zeppelinWrite]
But the user randerson belongs to: [randerson]

It seems clear that user randerson isn't mapped to any roles or groups
(even though he of course is a member of the zeppelinWrite group in AD and
as a result also part of the local admin Role).  A TCPDUMP reveals that
during login, all of my group memberships are in fact returned during the
ldap bind operation.  However, when I attempt to modify a notebook, a call
is never made to AD, to pull back my group memberships.  It doesn't seem to
look at my local group memberships (/etc/group) either.

I'm guessing I'm misunderstanding a concept(s) and / or missing a config
option(s) (although I have tried numerous combinations of everything I can
find online).  My Shiro.ini is listed below.  Any help you can offer is
appreciated.

Thanks much,

Rob
---
shiro.ini

[users]

[main]
adRealm = org.apache.shiro.realm.activedirectory.ActiveDirectoryRealm
adRealm.url = ldap://:389
adRealm.groupRolesMap = "cn=zeppelinWrite,ou=unix
groups,ou=groups,ou=accounts,cn=users,dc=company,dc=com":"admin"
adRealm.searchBase = DC=company,DC=com
adRealm.systemUsername= 
adRealm.systemPassword= 
adRealm.principalSuffix=<@company>

sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 8640
shiro.loginUrl = /api/login
securityManager.realms = $adRealm
[roles]
admin = *
[urls]
/api/version = anon
/** = authcBasic