[jira] [Updated] (RANGER-1375) HIVERangerAuthorizerTest UT fails intermittently

2017-02-10 Thread Sailaja Polavarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/RANGER-1375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailaja Polavarapu updated RANGER-1375:
---
Attachment: 0001-RANGER-1375-HIVERangerAuthorizerTest-UT-fails-interm.patch

> HIVERangerAuthorizerTest UT fails intermittently
> 
>
> Key: RANGER-1375
> URL: https://issues.apache.org/jira/browse/RANGER-1375
> Project: Ranger
>  Issue Type: Bug
>  Components: Ranger
>Affects Versions: 0.7.0
>Reporter: Yesha Vora
>Assignee: Sailaja Polavarapu
> Fix For: 0.7.0
>
> Attachments: 
> 0001-RANGER-1375-HIVERangerAuthorizerTest-UT-fails-interm.patch
>
>
> Trying to run HIVERangerAuthorizerTest on my Linux machine as the nobody 
> user. This test fails intermittently with the below error stack.
> {code}
> Error Message
> Error while compiling statement: FAILED: HiveAccessControlException 
> Permission denied: user [nobody] does not have [CREATE] privilege on 
> [rangerauthz]
> Stacktrace
> org.apache.hive.service.cli.HiveSQLException: Error while compiling 
> statement: FAILED: HiveAccessControlException Permission denied: user 
> [nobody] does not have [CREATE] privilege on [rangerauthz]
>   at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:262)
>   at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:248)
>   at 
> org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:297)
>   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:238)
>   at 
> org.apache.ranger.services.hive.HIVERangerAuthorizerTest.setup(HIVERangerAuthorizerTest.java:102)
> Caused by: org.apache.hive.service.cli.HiveSQLException: Error while 
> compiling statement: FAILED: HiveAccessControlException Permission denied: 
> user [nobody] does not have [CREATE] privilege on [rangerauthz]
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:148)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:226)
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:276)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:468)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:456)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
>   at com.sun.proxy.$Proxy22.executeStatementAsync(Unknown Source)
>   at 
> org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:298)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:506)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1317)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1302)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> Caused by: 
> org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException:
>  Permission denied: user [nobody] does not have [CREATE] privilege on 
> [rangerauthz]
>   at 
> 

Re: Last date for submitting abstracts - DW summit

2017-02-10 Thread Balaji Ganesan
Thanks, Ramesh. The topic is very relevant to the community; appreciate you
submitting the abstract.

-Balaji

On Fri, Feb 10, 2017 at 11:22 AM, Ramesh Mani  wrote:

> Balaji,
>
> I have now submitted an abstract titled "Extending Ranger Authorization
> Model to other Applications".
>
> Thanks,
> Ramesh
>
> On 2/10/17, 10:03 AM, "Balaji Ganesan"  wrote:
>
> >Rangers,
> >
> >Today is the deadline for submitting abstracts to Data Works/Hadoop Summit
> >sessions in San Jose.
> >
> >https://dataworkssummit.com/san-jose-2017/abstracts/submit-abstract/
> >
> >It is a good opportunity to talk about your work and experience using
> >Ranger,  and share it with the community.
> >
> >Thanks,
> >Balaji
>
>


[jira] [Created] (RANGER-1375) HIVERangerAuthorizerTest UT fails intermittently

2017-02-10 Thread Yesha Vora (JIRA)
Yesha Vora created RANGER-1375:
--

 Summary: HIVERangerAuthorizerTest UT fails intermittently
 Key: RANGER-1375
 URL: https://issues.apache.org/jira/browse/RANGER-1375
 Project: Ranger
  Issue Type: Bug
  Components: Ranger
Reporter: Yesha Vora


Trying to run HIVERangerAuthorizerTest on my Linux machine as the nobody user. 
This test fails intermittently with the below error stack.
{code}
Error Message

Error while compiling statement: FAILED: HiveAccessControlException Permission 
denied: user [nobody] does not have [CREATE] privilege on [rangerauthz]
Stacktrace

org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: 
FAILED: HiveAccessControlException Permission denied: user [nobody] does not 
have [CREATE] privilege on [rangerauthz]
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:262)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:248)
at 
org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:297)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:238)
at 
org.apache.ranger.services.hive.HIVERangerAuthorizerTest.setup(HIVERangerAuthorizerTest.java:102)
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling 
statement: FAILED: HiveAccessControlException Permission denied: user [nobody] 
does not have [CREATE] privilege on [rangerauthz]
at 
org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)
at 
org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:148)
at 
org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:226)
at 
org.apache.hive.service.cli.operation.Operation.run(Operation.java:276)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:468)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at 
org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy22.executeStatementAsync(Unknown Source)
at 
org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:298)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:506)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1317)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1302)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: 
org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException:
 Permission denied: user [nobody] does not have [CREATE] privilege on 
[rangerauthz]
at 
org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.checkPrivileges(RangerHiveAuthorizer.java:412)
at org.apache.hadoop.hive.ql.Driver.doAuthorizationV2(Driver.java:856)
at org.apache.hadoop.hive.ql.Driver.doAuthorization(Driver.java:644)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:511)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:321)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1221)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1215)
at 

[jira] [Commented] (RANGER-1320) Ranger Hive Plugin Exception message correction

2017-02-10 Thread Ramesh Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/RANGER-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15862018#comment-15862018
 ] 

Ramesh Mani commented on RANGER-1320:
-

commit link http://git-wip-us.apache.org/repos/asf/ranger/commit/239cc55e

> Ranger Hive Plugin Exception message correction
> ---
>
> Key: RANGER-1320
> URL: https://issues.apache.org/jira/browse/RANGER-1320
> Project: Ranger
>  Issue Type: Bug
>  Components: Ranger
>Reporter: Raffi Abberbock
>Assignee: Ramesh Mani
>
> When accessing Hive with Ranger authorization (using the Ambari Hive View) an 
> error message is returned when the user is denied access. If the user does 
> not have access to the table, the error message will say the user does not 
> have privileges on the table, and then it lists all the columns in the table. 
> As a security precaution, if the user does not have access to the table we 
> would not want them to know about the columns too. We may not even want to 
> acknowledge the table exists at all. 
> Is there a way to customize the error message returned by Ranger?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (RANGER-1357) tag objects are not removed when attempting a full sync with Atlas tags

2017-02-10 Thread Velmurugan Periasamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/RANGER-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Velmurugan Periasamy resolved RANGER-1357.
--
Resolution: Fixed

Patch from https://reviews.apache.org/r/56414/ is committed now. 

master - 
https://git-wip-us.apache.org/repos/asf?p=ranger.git;a=commit;h=e4122919c46bd9fec38a76b27c7e0f5fb98a5f5b
ranger-0.7 - 
https://git-wip-us.apache.org/repos/asf?p=ranger.git;a=commit;h=37a253d4f175d1fa1b8667d3125a7592e7918529

> tag objects are not removed when attempting a full sync with Atlas tags
> ---
>
> Key: RANGER-1357
> URL: https://issues.apache.org/jira/browse/RANGER-1357
> Project: Ranger
>  Issue Type: Bug
>  Components: tagsync
>Affects Versions: 0.7.0
>Reporter: Abhay Kulkarni
>Assignee: Abhay Kulkarni
> Fix For: 0.7.0
>
>
> An Atlas entity that was previously associated with one or more tags, with 
> those associations mirrored in the ranger-admin database, is modified to 
> have no tags while tagsync is down or unable to process tag events. A full 
> sync is then performed using the Atlas REST API to synchronize ranger-admin 
> tag objects with Atlas objects. The tags no longer present in Atlas are 
> expected to be removed from the ranger-admin database after the full sync; 
> however, those tags and their associations with entities remain in the 
> ranger-admin database.





Re: Review Request 56335: RANGER-1310: Ranger Audit framework enhancement to provide an option to allow audit records to be spooled to local disk first before sending it to destinations

2017-02-10 Thread Ramesh Mani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56335/
---

(Updated Feb. 10, 2017, 10:50 p.m.)


Review request for ranger, Don Bosco Durai, Abhay Kulkarni, Madhan Neethiraj, 
and Velmurugan Periasamy.


Changes
---

Bug number and Branch added


Summary (updated)
-

 RANGER-1310: Ranger Audit framework enhancement to provide an option to allow 
audit records to be spooled to local disk first before sending it to 
destinations


Bugs: RANGER-1310
https://issues.apache.org/jira/browse/RANGER-1310


Repository: ranger


Description (updated)
---

RANGER-1310: Ranger Audit framework enhancement to provide an option to allow 
audit records to be spooled to local disk first before sending it to 
destinations


Diffs
-

  
agents-audit/src/main/java/org/apache/ranger/audit/destination/HDFSAuditDestination.java
 7c37cfa 
  
agents-audit/src/main/java/org/apache/ranger/audit/provider/AuditFileCacheProvider.java
 PRE-CREATION 
  
agents-audit/src/main/java/org/apache/ranger/audit/provider/AuditProviderFactory.java
 e3c3508 
  
agents-audit/src/main/java/org/apache/ranger/audit/queue/AuditFileCacheProviderSpool.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/56335/diff/


Testing
---

Tested all the plugins in a local VM.
To enable the file cache provider for each of the components, do the
following:

For HDFS Plugin
===
mkdir -p  /var/log/hadoop/hdfs/audit/spool
cd /var/log/hadoop/hdfs/audit/
chown hdfs:hadoop spool
Add the following properties to the "custom ranger-hdfs-audit" section in 
Ambari for HDFS.
xasecure.audit.provider.filecache.is.enabled=true
xasecure.audit.provider.filecache.filespool.file.rollover.sec=300

xasecure.audit.provider.filecache.filespool.dir=/var/log/hadoop/hdfs/audit/spool

NOTE:
xasecure.audit.provider.filecache.is.enabled=true
    This property enables the file cache provider, which stores the audit
    records locally first before sending them to the destinations, to avoid
    loss of data.
xasecure.audit.provider.filecache.filespool.file.rollover.sec=300
    This property closes each local file every 300 seconds (5 minutes) and
    sends it to the destinations. For testing we set it to 30 seconds.
xasecure.audit.provider.filecache.filespool.dir=/var/log/hadoop/hdfs/audit/spool
    This property is the directory where the local audit cache is kept.

For Hive Plugin
=

   mkdir -p /var/log/hive/audit/spool
cd /var/log/hive/audit/
chown hdfs:hadoop spool
Add the following properties to the "custom ranger-hive-audit" section in 
Ambari for Hive.
xasecure.audit.provider.filecache.is.enabled=true
xasecure.audit.provider.filecache.filespool.file.rollover.sec=300
xasecure.audit.provider.filecache.filespool.dir=/var/log/hive/audit/spool

Please follow the same steps for all the components that need this audit file 
cache provider.
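The setup steps above can be consolidated into a short script. This is a sketch only: the spool path defaults to a local directory so it can run without root; a real deployment would use the /var/log/... paths from the steps above and the component's actual service user.

```shell
# Sketch of the spool setup described above. SPOOL_DIR defaults to a local
# directory so the script runs without root; in a real deployment use e.g.
# /var/log/hadoop/hdfs/audit/spool, owned by the plugin's service user.
SPOOL_DIR="${SPOOL_DIR:-./spool}"
mkdir -p "$SPOOL_DIR"
chown hdfs:hadoop "$SPOOL_DIR" 2>/dev/null || true  # skipped when not root

# Properties to add under the component's custom ranger audit config in Ambari
cat > ranger-audit-filecache.properties <<EOF
xasecure.audit.provider.filecache.is.enabled=true
xasecure.audit.provider.filecache.filespool.file.rollover.sec=300
xasecure.audit.provider.filecache.filespool.dir=$SPOOL_DIR
EOF
```

Repeat with the component-specific path and owner for each plugin that needs the file cache provider.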


---
Issues:
- Audit to the HDFS destination gets a 0-byte file or missing records from
  the HDFS plugin when HDFS gets restarted while audit from the HDFS plugin
  is being logged into the destination.

- Audit to the HDFS destination gets partial records from the
  HIVE/HBASE/KNOX/STORM plugins when HDFS is restarted while active spooling
  into HDFS is happening.

Scenarios to test

1) Audit to HDFS / Solr destination with FileCache enabled -
HDFS/HIVESERVER2/HBASE/KNOX/STORM/KAFKA.
    - The issues mentioned above should not happen.
    - Audit will be pushed every 5 minutes (we are setting it to 300
      seconds in the parameter).

2) Audit to HDFS / Solr destination with FileCache enabled, with one of the
destinations down and brought back up later.
    - Audit from the local cache should be present in the destination
      when the destination is back up.
    - With HDFS as the destination, audit might show up during the next
      rollover of the HDFS file, or when the corresponding component is
      restarted (say for the HiveServer2 plugin, when HiveServer2 is
      restarted the audit appears in HDFS, as this closes the existing
      open HDFS file).
    - The issues mentioned above should not be present.

3) The same has to be done for each of the plugins (HBASE, STORM, KAFKA, KMS).


Thanks,

Ramesh Mani



Re: Last date for submitting abstracts - DW summit

2017-02-10 Thread Ramesh Mani
Balaji,

I have now submitted an abstract titled "Extending Ranger Authorization
Model to other Applications".

Thanks,
Ramesh

On 2/10/17, 10:03 AM, "Balaji Ganesan"  wrote:

>Rangers,
>
>Today is the deadline for submitting abstracts to Data Works/Hadoop Summit
>sessions in San Jose.
>
>https://dataworkssummit.com/san-jose-2017/abstracts/submit-abstract/
>
>It is a good opportunity to talk about your work and experience using
>Ranger,  and share it with the community.
>
>Thanks,
>Balaji



Re: Scalability - large numbers of users/groups in LDAP

2017-02-10 Thread Sailaja Polavarapu
Just want to add a few more points inline...

>> - what additional attributes are pulled
Currently we pull the following attributes as part of the LDAP search:
For Users: username (like uid, samaccountname, etc…) and user group member 
attribute (memberof, ismemberof, etc…)
For Groups: group member attribute (member, memberuid, etc…) and group name 
attribute (cn, samaccountname, etc…)

All these are configurable properties in usersync.
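For reference, a sketch of the usersync properties that control these attributes (the property names and example values below are from my recollection of the Ranger usersync configuration; verify them against your version's install.properties before relying on them):

```properties
# Attribute holding the user name (e.g. uid or sAMAccountName)
ranger.usersync.ldap.user.nameattribute=sAMAccountName
# User's group-membership attribute (e.g. memberof, ismemberof)
ranger.usersync.ldap.user.groupnameattribute=memberof
# Group's member attribute (e.g. member, memberuid)
ranger.usersync.group.memberattributename=member
# Attribute holding the group name (e.g. cn, sAMAccountName)
ranger.usersync.group.nameattribute=cn
```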

Thanks,
Sailaja.





On 2/10/17, 9:26 AM, "Nigel Jones"  wrote:

>On 10/02/2017 17:07, Don Bosco Durai wrote:
>
> > 1.Ranger should have an option just to sync Group (without 
>users). We should be already supporting it or there was an intention to 
>support.  If we are not doing it for any reason, I am a strong +1 for 
>doing it.
>I'll experiment with this - only working off the docs so far, trying it 
>out is next :-)
[Sailaja]: Currently we support syncing groups that don’t contain any users. 
But if the group contains users (as part of the member attribute), we still 
sync those users. Of course, you can tweak the user search configuration to 
not sync users by providing an invalid/non-matching user search filter. This 
is kind of a dirty workaround. The same is the case with syncing just users 
and not groups.
I agree that it would be better if we could support syncing just users or 
just groups, for flexibility.
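The workaround described above might look like the following (the filter value is a made-up, deliberately non-matching example; the property name is from the Ranger usersync configuration, so verify it against your version):

```properties
# Deliberately non-matching filter so no users are synced (dirty workaround)
ranger.usersync.ldap.user.searchfilter=(uid=__no_such_user__)
```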

>
> > 2.Direct LDAP would have been ideal, but we were worried about 
>the load we might put on LDAP for real-time queries. Just FYI, Ranger 
>uses LDAP/AD for authentication and easy selection of users/groups 
>during policy create. For authentication, it is already real-time (even 
>though I would have preferred to get the roles also in real-time).
>A fair concern, though at least it's only at connect time. The 
>enterprise I spoke to didn't seem to think it was a concern. I'll start 
>with option #1 though
[Sailaja]: The other main reason that we sync users/groups from LDAP upfront 
is to make them available for configuring policies in Ranger.
>
> > If you have a very high number of users/groups, then the short-term 
>recommendation to is to apply LDAP filters and limit syncing users only 
>to those using Hadoop.
>This will be extending outside hadoop - I'm trying to determine how to 
>constrain the ldap query to the users using the relevant systems. I can 
>potentially obtain a list of groups from elsewhere via a new usersync 
>process, and then go back into ldap to query membership which would look 
>the same to ranger, just modify that sync.
>
>Thanks for the info !
>
>Nigel.
>
>


Last date for submitting abstracts - DW summit

2017-02-10 Thread Balaji Ganesan
Rangers,

Today is the deadline for submitting abstracts to Data Works/Hadoop Summit
sessions in San Jose.

https://dataworkssummit.com/san-jose-2017/abstracts/submit-abstract/

It is a good opportunity to talk about your work and experience using
Ranger,  and share it with the community.

Thanks,
Balaji


Re: Scalability - large numbers of users/groups in LDAP

2017-02-10 Thread Don Bosco Durai
It seems you are suggesting two scenarios.

1. Ranger should have an option to sync just groups (without users). We 
should already be supporting it, or there was an intention to support it. If 
we are not doing it for any reason, I am a strong +1 for doing it.
2. Direct LDAP would have been ideal, but we were worried about the load we 
might put on LDAP for real-time queries. Just FYI, Ranger uses LDAP/AD for 
authentication and easy selection of users/groups during policy create. For 
authentication, it is already real-time (even though I would have preferred to 
get the roles also in real-time). 

If you have a very high number of users/groups, then the short-term 
recommendation is to apply LDAP filters and limit syncing users only to 
those using Hadoop.
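Such scoping might look like the following (the group DN is a made-up example, and the property name is from the Ranger usersync configuration; check both against your deployment):

```properties
# Sync only members of a designated Hadoop-users group (example DN)
ranger.usersync.ldap.user.searchfilter=(memberOf=cn=hadoop-users,ou=groups,dc=example,dc=com)
```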

Thanks

Bosco


On 2/10/17, 6:20 AM, "Nigel Jones"  wrote:

On 10/02/2017 09:58, Velmurugan Periasamy wrote:
 > Hi Nigel:
 >
 > Thanks for starting an interesting thread.

 > I believe this is already addressed by 
https://issues.apache.org/jira/browse/RANGER-869. Please take a look.

I took a look - indeed I had noticed this option to go via groups and 
lookup "member" which does mitigate the issue somewhat, depending on the 
number of groups

In the environment I'm thinking of I can probably find an "interesting" 
list of groups. So I could modify usersync to not just use the 
group->member lookup, but also to ONLY do that for certain groups (I'll 
probably need "groupsync" for that... !)

Whether this works depends on how the LDAP server is set up... I need to 
take a look.. if so this is probably good enough for now.

But I'm still wondering if we really need to sync users at all since at 
some point any kind of connector/engine may well be doing an ldap lookup 
anyway - certainly that's true in an engine -- Apache Derby based - that 
I'm looking at (and developing a plugin for). This may become more 
important for large numbers of groups and users especially if we 
consider applying ranger plugins to technologies used by a broad set of 
users.

Out of interest I just noticed in the nifi mailing lists that there was 
a recent thread on "LDAP Group Authorization". There is some discussion 
of native nifi+ranger, but in either case the question about why not get 
the info direct from ldap at connect time is being made. intriguing ...

Thanks for the link ... mulling over some more :-)

nigel.







Re: Scalability - large numbers of users/groups in LDAP

2017-02-10 Thread Nigel Jones

On 10/02/2017 09:58, Velmurugan Periasamy wrote:
> Hi Nigel:
>
> Thanks for starting an interesting thread.

> I believe this is already addressed by 
https://issues.apache.org/jira/browse/RANGER-869. Please take a look.


I took a look - indeed I had noticed this option to go via groups and 
lookup "member" which does mitigate the issue somewhat, depending on the 
number of groups


In the environment I'm thinking of I can probably find an "interesting" 
list of groups. So I could modify usersync to not just use the 
group->member lookup, but also to ONLY do that for certain groups (I'll 
probably need "groupsync" for that... !)


Whether this works depends on how the LDAP server is set up... I need to 
take a look.. if so this is probably good enough for now.


But I'm still wondering if we really need to sync users at all since at 
some point any kind of connector/engine may well be doing an ldap lookup 
anyway - certainly that's true in an engine -- Apache Derby based - that 
I'm looking at (and developing a plugin for). This may become more 
important for large numbers of groups and users especially if we 
consider applying ranger plugins to technologies used by a broad set of 
users.


Out of interest I just noticed in the nifi mailing lists that there was 
a recent thread on "LDAP Group Authorization". There is some discussion 
of native nifi+ranger, but in either case the question about why not get 
the info direct from ldap at connect time is being made. intriguing ...


Thanks for the link ... mulling over some more :-)

nigel.




[jira] [Updated] (RANGER-1371) No need to write field initializers for default values, and types where the diamond operator could suffice

2017-02-10 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/RANGER-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated RANGER-1371:

Fix Version/s: 1.0.0

> No need to write field initializers for default values, and types where the 
> diamond operator could suffice
> --
>
> Key: RANGER-1371
> URL: https://issues.apache.org/jira/browse/RANGER-1371
> Project: Ranger
>  Issue Type: Improvement
>  Components: Ranger
>Reporter: Zsombor Gegesy
>Assignee: Zsombor Gegesy
>  Labels: code-cleanup
> Fix For: 1.0.0
>
> Attachments: 
> 0001-RANGER-1371-Remove-unneded-field-initializers-and-un.patch
>
>
> In lots of places the fields are initialized to their default values, which 
> is unnecessarily verbose:
> {code}
> long value = 0L;
> String str = null;
> {code}
> which is equivalent to:
> {code}
> long value;
> String str;
> {code}
> Similarly, the type specification can be omitted, as the compiler can 
> infer it automatically, so instead of 
> {code}
> List conditions = new 
> ArrayList();
> {code}
> it suffices to write:
> {code}
> List conditions = new ArrayList<>();
> {code}





[jira] [Commented] (RANGER-1365) Modify Ranger Hbase Plugin ColumnIterator to use Cell instead of KeyValue (to avoid ClassCastException in certain cases)

2017-02-10 Thread Velmurugan Periasamy (JIRA)

[ 
https://issues.apache.org/jira/browse/RANGER-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861302#comment-15861302
 ] 

Velmurugan Periasamy commented on RANGER-1365:
--

[~rmani]/[~sneethiraj]/[~kulkabhay] - I have cherrypicked this patch into 
ranger-0.7 branch.

https://git-wip-us.apache.org/repos/asf?p=ranger.git;a=commit;h=46c446f23eba7b9bb9dd6721c117a96c2ffc64b7



> Modify Ranger Hbase Plugin ColumnIterator to use Cell instead of KeyValue (to 
> avoid ClassCastException in certain cases)
> 
>
> Key: RANGER-1365
> URL: https://issues.apache.org/jira/browse/RANGER-1365
> Project: Ranger
>  Issue Type: Bug
>  Components: plugins, Ranger
>Affects Versions: 0.7.0
>Reporter: Sergio Peleato
>Assignee: Abhay Kulkarni
>Priority: Critical
> Fix For: 0.7.0
>
>
> [RangerAuthorizationCoprocessor|https://github.com/apache/ranger/blob/7a1c72262684a862f8df4ba908c5a2a918cb1f53/hbase-agent/src/main/java/org/apache/ranger/authorization/hbase/RangerAuthorizationCoprocessor.java#L1029]
>  and 
> [ColumnIterator|https://github.com/apache/ranger/blob/eb21ea6afb9f2ca0e26a769bfc6333ba3cce0e61/hbase-agent/src/main/java/org/apache/ranger/authorization/hbase/ColumnIterator.java#L76]
>  need to be modified to safely cast objects into Cell rather than KeyValue as 
> currently done.
> In certain cases, the above issue causes HBase regionserver to throw the 
> below exception.
> {noformat}
> 2017-02-07 18:07:28,786 ERROR 
> [RpcServer.FifoWFPBQ.default.handler=18,queue=0,port=16020] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor threw 
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$ClonedSeekerState
>  cannot be cast to org.apache.hadoop.hbase.KeyValue
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$ClonedSeekerState
>  cannot be cast to org.apache.hadoop.hbase.KeyValue
>   at 
> org.apache.ranger.authorization.hbase.ColumnIterator.next(ColumnIterator.java:76)
>   at 
> org.apache.ranger.authorization.hbase.ColumnIterator.next(ColumnIterator.java:32)
>   at 
> org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.getColumnFamilies(RangerAuthorizationCoprocessor.java:247)
>   at 
> org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.evaluateAccess(RangerAuthorizationCoprocessor.java:337)
>   at 
> org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.requirePermission(RangerAuthorizationCoprocessor.java:535)
>   at 
> org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.prePut(RangerAuthorizationCoprocessor.java:1029)
>   at 
> org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.prePut(RangerAuthorizationCoprocessor.java:1091)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1692)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.prePut(RegionCoprocessorHost.java:881)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doPreMutationHook(HRegion.java:3006)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2981)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2927)
>   at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.rebuildIndices(UngroupedAggregateRegionObserver.java:848)
>   at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:304)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:217)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1301)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1699)
>   at 
> 
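The failure mode can be illustrated with simplified stand-ins for the HBase types. These are not the real HBase classes: in the actual API, `Cell` is an interface that both `KeyValue` and the encoded-block cell implementations satisfy, so iterating via the interface is safe where a cast to the concrete `KeyValue` class is not.

```java
// Simplified stand-ins for the HBase types involved (not the real classes).
interface Cell {
    String qualifier();
}

// Analogous to org.apache.hadoop.hbase.KeyValue
class KeyValue implements Cell {
    public String qualifier() { return "kv-qualifier"; }
}

// Analogous to BufferedDataBlockEncoder$ClonedSeekerState: also a Cell,
// but NOT a KeyValue, so a cast to KeyValue throws ClassCastException.
class EncodedCell implements Cell {
    public String qualifier() { return "encoded-qualifier"; }
}

public class CellCastDemo {
    public static void main(String[] args) {
        Cell fromEncodedBlock = new EncodedCell();

        // Unsafe pattern (what the old ColumnIterator effectively did):
        try {
            KeyValue kv = (KeyValue) fromEncodedBlock;
            System.out.println(kv.qualifier());
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: cannot cast to KeyValue");
        }

        // Safe pattern (the fix): work against the Cell interface.
        System.out.println(fromEncodedBlock.qualifier());
    }
}
```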

Re: Scalability - large numbers of users/groups in LDAP

2017-02-10 Thread Velmurugan Periasamy
Hi Nigel:

Thanks for starting an interesting thread.

> In some environments selecting a subset of groups (which may be used as
> roles), and just pulling users there MAY help if the applications being
> secured have a more limited audience

I believe this is already addressed by 
https://issues.apache.org/jira/browse/RANGER-869. Please take a look.

Thank you,
Vel

From: Nigel Jones
Reply-To: "dev@ranger.apache.org"
Date: Friday, February 10, 2017 at 2:41 AM
To: "d...@ranger.incubator.apache.org"
Subject: Scalability - large numbers of users/groups in LDAP

I've been mulling over an issue recently and am interested in any
thoughts... I'm pretty new to Ranger, so very ready to hear why this
could never work ;-)

Today, in an LDAP-managed enterprise environment, user & group information
is replicated from the LDAP server (such as MS Active Directory) by the
usersync process. I have some control over:
  - the base DN
  - whether to pull a list of groups from each user, or users from groups
  - what additional attributes are pulled
This is then persisted in Ranger & gets pulled by the plugins.

However, in some environments:
  - the number of users in LDAP could be very high (100,000+)
  - it may be difficult to scope the query where Ranger is securing
access to an enterprise service

If we assume any kind of service that involves a "connect" as well as
read/write operations, there could be an opportunity to retrieve
user/group information for that user at that point. It could then be
cached within the plugin to be used at data access time.
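The connect-time idea above could be sketched as a plugin-side cache. This is a hypothetical illustration only — `GroupCache` and the lookup function are invented names, not Ranger plugin APIs, and a real cache would also need expiry and invalidation:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: resolve a user's groups once when the user
// connects, then reuse the cached result for later access checks.
public class GroupCache {
    private final Map<String, List<String>> groupsByUser = new ConcurrentHashMap<>();
    private final Function<String, List<String>> directoryLookup; // e.g. an LDAP query

    public GroupCache(Function<String, List<String>> directoryLookup) {
        this.directoryLookup = directoryLookup;
    }

    // Called at "connect" time; data-access checks then hit only the cache.
    public List<String> groupsFor(String user) {
        return groupsByUser.computeIfAbsent(user, directoryLookup);
    }
}
```

`computeIfAbsent` ensures the directory is queried at most once per user, which is the point of the proposal: the expensive LDAP round trip happens at connect, not on every read/write authorization.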

As a variation, we could potentially still populate group (or role)
information in the Ranger server, making policy definitions easier.

Has anyone considered this as an option?

In some environments, selecting a subset of groups (which may be used as
roles) and pulling only the users in those groups MAY help if the
applications being secured have a more limited audience.

If it sounds interesting, I'm inclined to work through the flows in more
detail.

Thanks
Nigel.





[jira] [Updated] (RANGER-1374) When exceptions occur during using ChangePasswordUtil tool to update admin password, the program doesn't record error messages.

2017-02-10 Thread Qiang Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/RANGER-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiang Zhang updated RANGER-1374:

Attachment: 0001-RANGER-1374-When-exceptions-occur-during-using-Chang.patch

> When exceptions occur during using ChangePasswordUtil tool to update admin 
> password, the program doesn't record error messages.
> ---
>
> Key: RANGER-1374
> URL: https://issues.apache.org/jira/browse/RANGER-1374
> Project: Ranger
>  Issue Type: Bug
>  Components: usersync
>Affects Versions: 0.7.0
>Reporter: Qiang Zhang
>Assignee: Qiang Zhang
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-RANGER-1374-When-exceptions-occur-during-using-Chang.patch
>
>
> When exceptions occur while using the ChangePasswordUtil tool to update the 
> admin password, the program doesn't record error messages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Review Request 56536: When exceptions occur during using ChangePasswordUtil tool to update admin password, the program doesn't record error messages.

2017-02-10 Thread Qiang Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56536/
---

Review request for ranger, Alok Lal, Don Bosco Durai, Colm O hEigeartaigh, 
Ramesh Mani, Selvamohan Neethiraj, and Velmurugan Periasamy.


Bugs: RANGER-1374
https://issues.apache.org/jira/browse/RANGER-1374


Repository: ranger


Description
---

When exceptions occur while using the ChangePasswordUtil tool to update the 
admin password, the program doesn't record error messages.


Diffs
-

  
security-admin/src/main/java/org/apache/ranger/patch/cliutil/ChangePasswordUtil.java
 ccdc279 

Diff: https://reviews.apache.org/r/56536/diff/


Testing
---


Thanks,

Qiang Zhang



Re: Review Request 56163: RANGER-1341 : Use credential provider files to store passwords rather storing them in config file in clear text format

2017-02-10 Thread Pradeep Agrawal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56163/
---

(Updated Feb. 10, 2017, 8:28 a.m.)


Review request for ranger, Ankita Sinha, Don Bosco Durai, Gautam Borad, Abhay 
Kulkarni, Madhan Neethiraj, Mehul Parikh, Ramesh Mani, Selvamohan Neethiraj, 
Sailaja Polavarapu, and Velmurugan Periasamy.


Changes
---

Updated patch based on latest commit.


Bugs: RANGER-1341
https://issues.apache.org/jira/browse/RANGER-1341


Repository: ranger


Description
---

**Problem Statement :** The password properties listed below in Ranger Admin 
and usersync contain passwords in clear text. Passwords should not be stored 
in clear text; they should be stored in a jceks file.
ranger.service.https.attrib.keystore.pass
ranger.truststore.password
ranger.usersync.keystore.password
ranger.usersync.truststore.password

**Proposed Solution :** Use the Credential Provider API to store passwords in 
a jceks file.
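For background on the proposed storage format, the round trip of a secret through a jceks keystore can be sketched with plain JDK APIs. This is an illustration only, not Ranger's code — the class, alias, and file names are invented; the Hadoop Credential Provider API stores secrets in essentially this way, keyed by alias:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.KeyStore;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Illustrative sketch: keep a password in a JCEKS keystore instead of a
// clear-text config property.
public class JceksDemo {

    // Store `secret` under `alias`, protected by the keystore password.
    public static void store(File file, char[] storePass,
                             String alias, char[] secret) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, storePass); // initialize an empty keystore
        SecretKey key = SecretKeyFactory.getInstance("PBE")
                .generateSecret(new PBEKeySpec(secret));
        ks.setEntry(alias, new KeyStore.SecretKeyEntry(key),
                new KeyStore.PasswordProtection(storePass));
        try (OutputStream out = new FileOutputStream(file)) {
            ks.store(out, storePass);
        }
    }

    // Read the secret back; callers should zero the array when done.
    public static char[] load(File file, char[] storePass,
                              String alias) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        try (InputStream in = new FileInputStream(file)) {
            ks.load(in, storePass);
        }
        KeyStore.SecretKeyEntry entry = (KeyStore.SecretKeyEntry)
                ks.getEntry(alias, new KeyStore.PasswordProtection(storePass));
        PBEKeySpec spec = (PBEKeySpec) SecretKeyFactory.getInstance("PBE")
                .getKeySpec(entry.getSecretKey(), PBEKeySpec.class);
        return spec.getPassword();
    }
}
```

The config file then carries only the keystore location and alias, not the password itself.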


Diffs (updated)
-

  
embeddedwebserver/src/main/java/org/apache/ranger/server/tomcat/EmbeddedServer.java
 9668e47 
  kms/config/webserver/ranger-kms-site.xml 81f3f17 
  kms/scripts/install.properties 473d3cf 
  kms/scripts/setup.sh f31e0e2 
  security-admin/scripts/install.properties 34dec22 
  security-admin/scripts/setup.sh f7e02d9 
  security-admin/src/main/java/org/apache/ranger/common/PropertiesUtil.java 
a0f83c7 
  security-admin/src/main/resources/conf.dist/ranger-admin-default-site.xml 
8cd26a6 
  security-admin/src/main/resources/conf.dist/ranger-admin-site.xml 5f89caa 
  src/main/assembly/admin-web.xml 966033f 
  tagsync/scripts/setup.py 88b10cc 
  
ugsync/src/main/java/org/apache/ranger/unixusersync/config/UserGroupSyncConfig.java
 a4b12b2 
  unixauthservice/scripts/install.properties 50e8487 
  unixauthservice/scripts/setup.py b773e95 
  unixauthservice/scripts/templates/ranger-ugsync-template.xml 74bce8a 

Diff: https://reviews.apache.org/r/56163/diff/


Testing
---

1. Tested Ranger on SSL enabled MySQL.
2. Tested Ranger with and without SSL.
3. Tested HDFS plugin enforcement using SSL enabled Ranger admin. 
4. Tested KMS plugin enforcement using SSL enabled Ranger admin.
5. Tested LDAP and UNIX UserSync.
6. Tested LDAP and UNIX Authentication.
7. Tested Knox Test connection.


Thanks,

Pradeep Agrawal



[jira] [Updated] (RANGER-1289) Error occurred in Ranger KMS function

2017-02-10 Thread Qiang Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/RANGER-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiang Zhang updated RANGER-1289:

Attachment: 0001-RANGER-1289-Error-occured-in-Ranger-KMS-function.patch

> Error occurred in Ranger KMS function
> 
>
> Key: RANGER-1289
> URL: https://issues.apache.org/jira/browse/RANGER-1289
> Project: Ranger
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 0.7.0
>Reporter: Qiang Zhang
>Assignee: Qiang Zhang
>  Labels: patch
> Attachments: 
> 0001-RANGER-1289-Error-occured-in-Ranger-KMS-function.patch
>
>
> Steps:
> 1. Start the ranger-kms service
> 2. Configure the KMS client and restart HDFS
> 3. Create a key named key0 in the Ranger Web UI
> 4. Execute the following commands in the hadoop environment to create an encryption zone:
> {code:java}
> hdfs dfs -mkdir /keyZone
> hdfs crypto -createZone -keyName key0 -path /keyZone
> {code}
> An error message popped up as below:
> ranger-0.7.0-SNAPSHOT-kms/ews/logs/kms.log
> {code:java}
> 2017-01-04 14:27:13,256 ERROR [webservices-driver] - Servlet.service() for 
> servlet [webservices-driver] in context with path [/kms] threw exception
> java.lang.NullPointerException
>   at 
> org.apache.http.client.utils.URLEncodedUtils.parse(URLEncodedUtils.java:235)
>   at 
> org.apache.hadoop.security.token.delegation.web.ServletUtils.getParameter(ServletUtils.java:48)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:171)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
>   at 
> org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:956)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:436)
>   at 
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1078)
>   at 
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
>   at 
> org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at 
> org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I analyzed the cause: Ranger KMS relies on httpclient version 4.5.1, which 
> has a bug, as follows:
> org/apache/http/client/utils/URLEncodedUtils.java
> {code:java}
> public static List<NameValuePair> parse(String s, Charset charset) {
>   CharArrayBuffer buffer = new CharArrayBuffer(s.length());
>   buffer.append(s);
>   return parse(buffer, charset, new char[]{'&', ';'});
> }
> {code}
> When the parameter 's' is null, this throws a NullPointerException.
> In httpclient version 4.5.3 the problem is fixed; the new code is as 
> follows:
> {code:java}
> public static List<NameValuePair> parse(final String s, final Charset charset) {
> if (s == null) {
> return Collections.emptyList();
> }
> final CharArrayBuffer buffer = new CharArrayBuffer(s.length());
> buffer.append(s);
> return parse(buffer, charset, QP_SEP_A, QP_SEP_S);
> }
> {code}
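The 4.5.3-style guard can be shown in isolation. `parseParams` below is a stand-in written for this note, not httpclient's `URLEncodedUtils.parse` — it only demonstrates the null-to-empty-list pattern that fixes the NPE:

```java
import java.util.Collections;
import java.util.List;

// Stand-in illustration of the 4.5.3 fix: a null query string yields an
// empty list instead of a NullPointerException.
public class NullSafeParse {
    public static List<String> parseParams(String s) {
        if (s == null) {
            return Collections.emptyList(); // the guard added in 4.5.3
        }
        return List.of(s.split("[&;]")); // split on the same separators
    }
}
```

With the guard in place, a request that carries no query string no longer crashes the KMS servlet path shown in the stack trace above.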


