[jira] [Created] (SENTRY-1556) Simplify privilege cleaning

2016-12-02 Thread Alexander Kolbasov (JIRA)
Alexander Kolbasov created SENTRY-1556:
--

 Summary: Simplify privilege cleaning
 Key: SENTRY-1556
 URL: https://issues.apache.org/jira/browse/SENTRY-1556
 Project: Sentry
  Issue Type: Improvement
  Components: Sentry
Affects Versions: 1.8.0, sentry-ha-redesign
Reporter: Alexander Kolbasov
Priority: Minor


The SentryStore class has a privCleaner that cleans up orphaned privileges. 
Currently, cleaning runs after every 50 notification requests and uses locking 
to synchronize.

I think the whole thing can be simplified:

1) We should consider whether it is possible to clean up a privilege as soon 
as we see that no roles are associated with it. In that case we would not need 
the separate cleaner at all.

2) Alternatively, we can run a periodic job to clean up orphaned privileges 
and groups (the latter are not cleaned up at all today).
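Option 2 could be sketched as a small scheduled task. This is a hypothetical
illustration only: the class, the Runnable-based wiring, and the interval are
assumptions, not actual SentryStore code.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a periodic orphan cleaner. The cleanup body itself
// (deleting privileges with no associated roles, and orphaned groups) is
// passed in as a Runnable; a single-threaded executor serializes runs, so no
// extra locking is needed.
public class PeriodicCleaner {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Runnable cleanupTask;

    public PeriodicCleaner(Runnable cleanupTask) {
        this.cleanupTask = cleanupTask;
    }

    // Schedule cleanup at a fixed delay instead of counting notification
    // requests.
    public void start(long delay, TimeUnit unit) {
        scheduler.scheduleWithFixedDelay(cleanupTask, delay, delay, unit);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```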





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (SENTRY-1525) Provide script to run Sentry directly from the repo

2016-12-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SENTRY-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15717526#comment-15717526
 ] 

Hadoop QA commented on SENTRY-1525:
---

Here are the results of testing the latest attachment
https://issues.apache.org/jira/secure/attachment/12841614/SENTRY-1525.003.patch 
against master.

{color:green}Overall:{color} +1 all checks pass

{color:green}SUCCESS:{color} all tests passed

Console output: 
https://builds.apache.org/job/PreCommit-SENTRY-Build/2182/console

This message is automatically generated.

> Provide script to run Sentry directly from the repo
> ---
>
> Key: SENTRY-1525
> URL: https://issues.apache.org/jira/browse/SENTRY-1525
> Project: Sentry
>  Issue Type: Improvement
>  Components: Sentry
>Affects Versions: 1.8.0, sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: SENTRY-1525.001.patch, SENTRY-1525.002.patch, 
> SENTRY-1525.003.patch
>
>
> Sentry repo includes a bin/sentry script that runs the Sentry binary. It 
> relies on hadoop being installed somewhere (in fact, it relies on the hadoop 
> binary in $HADOOP_HOME/bin/hadoop).
> We should have a way to run Sentry without the hadoop binary, using Maven 
> instead.





[jira] [Commented] (SENTRY-1515) Cleanup exception handling in SentryStore

2016-12-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SENTRY-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15717496#comment-15717496
 ] 

Hadoop QA commented on SENTRY-1515:
---

Here are the results of testing the latest attachment
https://issues.apache.org/jira/secure/attachment/12841612/SENTRY-1515.001.patch 
against master.

{color:green}Overall:{color} +1 all checks pass

{color:green}SUCCESS:{color} all tests passed

Console output: 
https://builds.apache.org/job/PreCommit-SENTRY-Build/2181/console

This message is automatically generated.

> Cleanup exception handling in SentryStore
> -
>
> Key: SENTRY-1515
> URL: https://issues.apache.org/jira/browse/SENTRY-1515
> Project: Sentry
>  Issue Type: Bug
>  Components: Sentry
>Affects Versions: sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: SENTRY-1515.001.patch
>
>
> The changes to SENTRY-1422 and SENTRY-1512 changed the semantics of several 
> API calls:
> - hasAnyServerPrivileges
> - getMSentryPrivileges
> - getMSentryPrivilegesByAuth
> - getRoleNamesForGroups
> - retrieveFullPrivilegeImage
> - retrieveFullRoleImage
> - retrieveFullPathsImage
> - getAllRoleNames
> Previously they were not marked as throwing Exception, but they still could 
> do so. With the change they now ignore exceptions and just log them, which 
> may not be the right thing to do. 
> Instead they should be marked as throwing exceptions, which has consequences 
> for the broader APIs, which should be marked as well.





[jira] [Updated] (SENTRY-1525) Provide script to run Sentry directly from the repo

2016-12-02 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated SENTRY-1525:
---
Status: Patch Available  (was: In Progress)

Addressed code review comments

> Provide script to run Sentry directly from the repo
> ---
>
> Key: SENTRY-1525
> URL: https://issues.apache.org/jira/browse/SENTRY-1525
> Project: Sentry
>  Issue Type: Improvement
>  Components: Sentry
>Affects Versions: 1.8.0, sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: SENTRY-1525.001.patch, SENTRY-1525.002.patch, 
> SENTRY-1525.003.patch
>
>
> Sentry repo includes a bin/sentry script that runs the Sentry binary. It 
> relies on hadoop being installed somewhere (in fact, it relies on the hadoop 
> binary in $HADOOP_HOME/bin/hadoop).
> We should have a way to run Sentry without the hadoop binary, using Maven 
> instead.





[jira] [Updated] (SENTRY-1525) Provide script to run Sentry directly from the repo

2016-12-02 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated SENTRY-1525:
---
Attachment: SENTRY-1525.003.patch

> Provide script to run Sentry directly from the repo
> ---
>
> Key: SENTRY-1525
> URL: https://issues.apache.org/jira/browse/SENTRY-1525
> Project: Sentry
>  Issue Type: Improvement
>  Components: Sentry
>Affects Versions: 1.8.0, sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: SENTRY-1525.001.patch, SENTRY-1525.002.patch, 
> SENTRY-1525.003.patch
>
>
> Sentry repo includes a bin/sentry script that runs the Sentry binary. It 
> relies on hadoop being installed somewhere (in fact, it relies on the hadoop 
> binary in $HADOOP_HOME/bin/hadoop).
> We should have a way to run Sentry without the hadoop binary, using Maven 
> instead.





[jira] [Updated] (SENTRY-1525) Provide script to run Sentry directly from the repo

2016-12-02 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated SENTRY-1525:
---
Status: In Progress  (was: Patch Available)

> Provide script to run Sentry directly from the repo
> ---
>
> Key: SENTRY-1525
> URL: https://issues.apache.org/jira/browse/SENTRY-1525
> Project: Sentry
>  Issue Type: Improvement
>  Components: Sentry
>Affects Versions: 1.8.0, sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: SENTRY-1525.001.patch, SENTRY-1525.002.patch, 
> SENTRY-1525.003.patch
>
>
> Sentry repo includes a bin/sentry script that runs the Sentry binary. It 
> relies on hadoop being installed somewhere (in fact, it relies on the hadoop 
> binary in $HADOOP_HOME/bin/hadoop).
> We should have a way to run Sentry without the hadoop binary, using Maven 
> instead.





[jira] [Commented] (SENTRY-1515) Cleanup exception handling in SentryStore

2016-12-02 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/SENTRY-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15717378#comment-15717378
 ] 

Alexander Kolbasov commented on SENTRY-1515:


As it turned out, there are a few other functions which silently ignored 
exceptions; the patch addresses all of them.
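The two exception-handling styles under discussion can be contrasted in a
short sketch. The names here are illustrative stand-ins, not the real
SentryStore signatures:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: getAllRoleNamesImpl() stands in for the real query.
public class ExceptionHandlingSketch {

    // Before: the exception is swallowed and logged, so callers get an empty
    // result and cannot tell "no roles" apart from "query failed".
    public static List<String> getAllRoleNamesSwallowing() {
        try {
            return getAllRoleNamesImpl();
        } catch (Exception e) {
            // log and ignore -- the problematic pattern
            return Collections.emptyList();
        }
    }

    // After: the method is declared to throw, so failures propagate to the
    // broader APIs, which then have to declare the exception as well.
    public static List<String> getAllRoleNamesThrowing() throws Exception {
        return getAllRoleNamesImpl();
    }

    private static List<String> getAllRoleNamesImpl() throws Exception {
        throw new Exception("simulated datastore failure");
    }
}
```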

> Cleanup exception handling in SentryStore
> -
>
> Key: SENTRY-1515
> URL: https://issues.apache.org/jira/browse/SENTRY-1515
> Project: Sentry
>  Issue Type: Bug
>  Components: Sentry
>Affects Versions: sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: SENTRY-1515.001.patch
>
>
> The changes to SENTRY-1422 and SENTRY-1512 changed the semantics of several 
> API calls:
> - hasAnyServerPrivileges
> - getMSentryPrivileges
> - getMSentryPrivilegesByAuth
> - getRoleNamesForGroups
> - retrieveFullPrivilegeImage
> - retrieveFullRoleImage
> - retrieveFullPathsImage
> - getAllRoleNames
> Previously they were not marked as throwing Exception, but they still could 
> do so. With the change they now ignore exceptions and just log them, which 
> may not be the right thing to do. 
> Instead they should be marked as throwing exceptions, which has consequences 
> for the broader APIs, which should be marked as well.





[jira] [Updated] (SENTRY-1515) Cleanup exception handling in SentryStore

2016-12-02 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated SENTRY-1515:
---
Status: Patch Available  (was: In Progress)

> Cleanup exception handling in SentryStore
> -
>
> Key: SENTRY-1515
> URL: https://issues.apache.org/jira/browse/SENTRY-1515
> Project: Sentry
>  Issue Type: Bug
>  Components: Sentry
>Affects Versions: sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: SENTRY-1515.001.patch
>
>
> The changes to SENTRY-1422 and SENTRY-1512 changed the semantics of several 
> API calls:
> - hasAnyServerPrivileges
> - getMSentryPrivileges
> - getMSentryPrivilegesByAuth
> - getRoleNamesForGroups
> - retrieveFullPrivilegeImage
> - retrieveFullRoleImage
> - retrieveFullPathsImage
> - getAllRoleNames
> Previously they were not marked as throwing Exception, but they still could 
> do so. With the change they now ignore exceptions and just log them, which 
> may not be the right thing to do. 
> Instead they should be marked as throwing exceptions, which has consequences 
> for the broader APIs, which should be marked as well.





[jira] [Updated] (SENTRY-1515) Cleanup exception handling in SentryStore

2016-12-02 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated SENTRY-1515:
---
Attachment: SENTRY-1515.001.patch

> Cleanup exception handling in SentryStore
> -
>
> Key: SENTRY-1515
> URL: https://issues.apache.org/jira/browse/SENTRY-1515
> Project: Sentry
>  Issue Type: Bug
>  Components: Sentry
>Affects Versions: sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Attachments: SENTRY-1515.001.patch
>
>
> The changes to SENTRY-1422 and SENTRY-1512 changed the semantics of several 
> API calls:
> - hasAnyServerPrivileges
> - getMSentryPrivileges
> - getMSentryPrivilegesByAuth
> - getRoleNamesForGroups
> - retrieveFullPrivilegeImage
> - retrieveFullRoleImage
> - retrieveFullPathsImage
> - getAllRoleNames
> Previously they were not marked as throwing Exception, but they still could 
> do so. With the change they now ignore exceptions and just log them, which 
> may not be the right thing to do. 
> Instead they should be marked as throwing exceptions, which has consequences 
> for the broader APIs, which should be marked as well.





[jira] [Assigned] (SENTRY-1515) Cleanup exception handling in SentryStore

2016-12-02 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov reassigned SENTRY-1515:
--

Assignee: Alexander Kolbasov

> Cleanup exception handling in SentryStore
> -
>
> Key: SENTRY-1515
> URL: https://issues.apache.org/jira/browse/SENTRY-1515
> Project: Sentry
>  Issue Type: Bug
>  Components: Sentry
>Affects Versions: sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>
> The changes to SENTRY-1422 and SENTRY-1512 changed the semantics of several 
> API calls:
> - hasAnyServerPrivileges
> - getMSentryPrivileges
> - getMSentryPrivilegesByAuth
> - getRoleNamesForGroups
> - retrieveFullPrivilegeImage
> - retrieveFullRoleImage
> - retrieveFullPathsImage
> - getAllRoleNames
> Previously they were not marked as throwing Exception, but they still could 
> do so. With the change they now ignore exceptions and just log them, which 
> may not be the right thing to do. 
> Instead they should be marked as throwing exceptions, which has consequences 
> for the broader APIs, which should be marked as well.





[jira] [Commented] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15717093#comment-15717093
 ] 

Hadoop QA commented on SENTRY-1377:
---

Here are the results of testing the latest attachment
https://issues.apache.org/jira/secure/attachment/12841592/SENTRY-1377.001.patch 
against master.

{color:green}Overall:{color} +1 all checks pass

{color:green}SUCCESS:{color} all tests passed

Console output: 
https://builds.apache.org/job/PreCommit-SENTRY-Build/2180/console

This message is automatically generated.

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resources is used, so tests can skip close() 
> calls if they fail in the middle.
> b) many methods re-open Connection and Statement multiple times, yet provide 
> a single close() at the end. 
> 3. Retry logic uses recursion in some places, as in startHiveServer2() and 
> verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
> Exception stack trace is more confusing than it needs to be in case of 
> recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
> creates running out of stack as an unnecessary failure mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.





[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resources is used, so tests can skip close() 
calls if they fail in the middle.
b) many methods re-open Connection and Statement multiple times, yet provide a 
single close() at the end. 

3. Retry logic uses recursion in some places, as in startHiveServer2() and 
verifyOnAllSubDirs(). It would be better implemented as a straightforward 
retry loop: the exception stack trace is more confusing than it needs to be 
with recursive calls, and, with NUM_RETRIES == 10, recursion at least 
theoretically adds running out of stack as an unnecessary failure mode.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.
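Point 2 above is the standard case for try-with-resources. A minimal
self-contained sketch, using a stand-in resource rather than a real
java.sql.Connection/Statement:

```java
// Sketch of the try-with-resources fix: every open gets a matching close,
// even when the body throws. FakeStatement is a stand-in so the example is
// self-contained.
public class TryWithResourcesSketch {
    static int closed = 0;

    static class FakeStatement implements AutoCloseable {
        void execute(boolean fail) {
            if (fail) throw new RuntimeException("simulated SQL failure");
        }
        @Override
        public void close() {
            closed++;
        }
    }

    // close() is guaranteed to run whether or not execute() throws.
    static void runQuery(boolean fail) {
        try (FakeStatement stmt = new FakeStatement()) {
            stmt.execute(fail);
        }
    }
}
```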

  was:
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resource is used, so test can cause skipping the 
close() calls.
b) many methods re-open Connection and Statement multiple times, yet provide 
asingle close() at the end. 

3. Retry logic uses recursion in some places, as in startHiveServer2() and 
verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
Exception stack trace is more confusing than it needs to be in case of 
recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resources is used, so tests can skip close() 
calls if they fail in the middle.
b) many methods re-open Connection and Statement multiple times, yet provide a 
single close() at the end. 

3. Retry logic uses recursion in some places, as in startHiveServer2() and 
verifyOnAllSubDirs(). It would be better implemented as a straightforward 
retry loop: the exception stack trace is more confusing than it needs to be 
with recursive calls, and, with NUM_RETRIES == 10, recursion at least 
theoretically adds running out of stack as an unnecessary failure mode.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.
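The retry-loop shape suggested in point 3 can be sketched as follows. This is
an illustration, not the actual test code; only NUM_RETRIES comes from the
description above.

```java
// Hypothetical sketch of point 3: a plain loop instead of recursive retries.
// A failure's stack trace stays one frame deep and there is no risk of
// running out of stack.
public class RetryLoopSketch {
    static final int NUM_RETRIES = 10;

    interface Action {
        void run() throws Exception;
    }

    // Try up to NUM_RETRIES times; rethrow the last failure if all attempts
    // fail.
    static void withRetries(Action action) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= NUM_RETRIES; attempt++) {
            try {
                action.run();
                return;
            } catch (Exception e) {
                last = e;
                // a sleep/backoff between attempts would go here
            }
        }
        throw last;
    }
}
```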

  was:
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resource is used, so tests can skip close() calls 
if fail in the middle.
b) many methods re-open Connection and Statement multiple times, yet provide 
asingle close() at the end. 

3. Retry logic uses recursion in some places, as in startHiveServer2() and 
verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
Exception stack trace is more confusing than it needs to be in case of 
recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, 

[jira] [Comment Edited] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

[ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716913#comment-15716913
 ] 

Vadim Spector edited comment on SENTRY-1377 at 12/3/16 12:20 AM:
-

Resuming work on this JIRA. Due to massive refactoring of HDFS integration 
tests, starting from the beginning. Deleting old patches.

Current review board link: https://reviews.apache.org/r/49842/diff/4/




was (Author: vspec...@gmail.com):
Resuming work on this JIRA. Due to massive refactoring of HDFS integration 
tests, starting from the beginning. Deleting old patches.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resources is used, so a failing test can cause 
> the close() calls to be skipped.
> b) many methods re-open Connection and Statement multiple times, yet provide 
> a single close() at the end. 
> 3. Retry logic uses recursion in some places, as in startHiveServer2() and 
> verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
> Exception stack trace is more confusing than it needs to be in case of 
> recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
> creates running out of stack as an unnecessary failure mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.





[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.001.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resources is used, so a failing test can cause 
> the close() calls to be skipped.
> b) many methods re-open Connection and Statement multiple times, yet provide 
> a single close() at the end. 
> 3. Retry logic uses recursion in some places, as in startHiveServer2() and 
> verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
> Exception stack trace is more confusing than it needs to be in case of 
> recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
> creates running out of stack as an unnecessary failure mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.





[jira] [Commented] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

[ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716913#comment-15716913
 ] 

Vadim Spector commented on SENTRY-1377:
---

Resuming work on this JIRA. Due to the massive refactoring of the HDFS 
integration tests, I am starting from the beginning and deleting the old patches.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resources is used, so a test failure can skip 
> the close() calls;
> b) many methods re-open Connection and Statement multiple times, yet provide 
> a single close() at the end. 
> 3. Retry logic uses recursion in some places, such as startHiveServer2() and 
> verifyOnAllSubDirs(). It would be better implemented as a straightforward 
> retry loop: exception stack traces from recursive calls are more confusing 
> than they need to be, and with NUM_RETRIES == 10 the recursion, at least in 
> theory, makes running out of stack an unnecessary failure mode.
> 4. startHiveServer2() ignores hiveServer2.start() call failures, only logging 
> an info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.
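The best-attempt cleanup described in point 1 could look roughly like this. This is a hedged sketch, assuming the cleanup can be broken into independent steps; BestEffortCleanup and runAll are illustrative names, not the actual cleanAfterTest() signature:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: run every cleanup step even when earlier ones fail,
// collecting failures instead of short-circuiting the rest of the cleanup.
final class BestEffortCleanup {
    static List<Exception> runAll(List<Runnable> steps) {
        List<Exception> failures = new ArrayList<>();
        for (Runnable step : steps) {
            try {
                step.run();
            } catch (Exception e) {
                failures.add(e);      // record and keep cleaning up
            }
        }
        return failures;              // caller can log these afterwards
    }
}
```

Because each step is isolated in its own try/catch, one failed DROP or delete no longer cascades into secondary failures in later tests.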





[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resources is used, so a test failure can skip the 
close() calls;
b) many methods re-open Connection and Statement multiple times, yet provide 
a single close() at the end. 

3. Retry logic uses recursion in some places, such as startHiveServer2() and 
verifyOnAllSubDirs(). It would be better implemented as a straightforward retry 
loop: exception stack traces from recursive calls are more confusing than they 
need to be, and with NUM_RETRIES == 10 the recursion, at least in theory, makes 
running out of stack an unnecessary failure mode.

4. startHiveServer2() ignores hiveServer2.start() call failures, only logging 
an info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connections and 
Statements if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection and Statement, but only one stmt.close(); 
conn.close(); pair at the end. The correct pattern for each distinct 
{Connection, Statement} pair would be:

conn = null;
stmt = null;
try {
  ... test logic here, including conn and stmt initialization ...
} finally { safeClose(conn, stmt); }

private static void safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, such as startHiveServer2(), 
verifyQuery(), and verifyOnAllSubDirs(). It would be better implemented as a 
straightforward retry loop: exception stack traces from recursive calls are 
more confusing than they need to be, and with NUM_RETRIES == 10 the recursion, 
at least in theory, makes running out of stack an unnecessary failure mode.

4. startHiveServer2() ignores hiveServer2.start() call failures, only logging 
an info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places

7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.
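Rather than the fixed Thread.sleep() calls mentioned in point 7, a bounded polling wait tends to be more reliable. A sketch, under the assumption that the verification can be expressed as a boolean check; WaitFor is an illustrative name, not part of the test code:

```java
import java.util.function.BooleanSupplier;

// Illustrative sketch: poll a condition until it holds or a deadline passes,
// instead of sleeping a fixed amount and hoping the change has propagated.
final class WaitFor {
    static boolean condition(BooleanSupplier cond, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cond.getAsBoolean()) {
                return true;          // condition observed within the timeout
            }
            Thread.sleep(pollMs);     // back off briefly before re-checking
        }
        return cond.getAsBoolean();   // one final check at the deadline
    }
}
```

A verification such as verifyOnAllSubDirs() could then be wrapped in the condition, waking up as soon as the permission change is visible instead of waiting out a worst-case sleep.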


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.001.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connections and 
> Statements if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection and Statement, but only one 
> stmt.close(); conn.close(); pair at the end. The correct pattern for each 
> distinct {Connection, Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> ... test logic here, including conn and stmt initialization ...
> } finally { safeClose(conn, stmt); }
> private static void safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, such as startHiveServer2(), 
> verifyQuery(), and verifyOnAllSubDirs(). It would be better implemented as a 
> straightforward retry loop: exception stack traces from recursive calls are 
> more confusing than they need to be, and with NUM_RETRIES == 10 the 
> recursion, at least in theory, makes running out of stack an unnecessary 
> failure mode.
> 4. startHiveServer2() ignores hiveServer2.start() call failures, only logging 
> an info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places
> 7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.
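Point 5 above could be addressed by starting both servers sequentially on the calling thread. A minimal sketch, where the Server interface is an illustrative stand-in for the real server classes:

```java
// Illustrative sketch: sequential startup on one thread guarantees ordering
// (metastore first, then HiveServer2) and lets start() failures propagate
// to the test instead of dying silently on a background thread.
interface Server { void start() throws Exception; }

final class OrderedStartup {
    static void start(Server metastore, Server hiveServer2) throws Exception {
        metastore.start();     // a failure here aborts the test immediately
        hiveServer2.start();   // runs only after the metastore has started
    }
}
```

Since both servers' start() methods already spawn their own worker threads, nothing is lost by dropping the extra wrapper threads, and the cross-thread coordination problem disappears.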





[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.003.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connections and 
> Statements if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection and Statement, but only one 
> stmt.close(); conn.close(); pair at the end. The correct pattern for each 
> distinct {Connection, Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> ... test logic here, including conn and stmt initialization ...
> } finally { safeClose(conn, stmt); }
> private static void safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, such as startHiveServer2(), 
> verifyQuery(), and verifyOnAllSubDirs(). It would be better implemented as a 
> straightforward retry loop: exception stack traces from recursive calls are 
> more confusing than they need to be, and with NUM_RETRIES == 10 the 
> recursion, at least in theory, makes running out of stack an unnecessary 
> failure mode.
> 4. startHiveServer2() ignores hiveServer2.start() call failures, only logging 
> an info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places
> 7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.





[jira] [Commented] (SENTRY-1548) Setting GrantOption to UNSET upsets Sentry

2016-12-02 Thread kalyan kumar kalvagadda (JIRA)

[ 
https://issues.apache.org/jira/browse/SENTRY-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716616#comment-15716616
 ] 

kalyan kumar kalvagadda commented on SENTRY-1548:
-

I'm not sure the UNSET option is valid anymore. If it is not, we need to 
identify it as early as possible and return an appropriate error to the 
user.
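One way to identify it early, assuming a three-valued grant option along the lines of Thrift's TSentryGrantOption (the enum and validator below are illustrative sketches, not Sentry's actual classes):

```java
// Illustrative sketch: reject UNSET at the API boundary instead of letting
// the database NOT NULL constraint fail deep inside the transaction retries.
enum GrantOption { TRUE, FALSE, UNSET }

final class GrantOptionValidator {
    // Maps a validated option to the WITH_GRANT_OPTION column value.
    static boolean toColumnValue(GrantOption opt) {
        if (opt == GrantOption.UNSET) {
            throw new IllegalArgumentException(
                "grantOption must be TRUE or FALSE; UNSET is not persistable");
        }
        return opt == GrantOption.TRUE;
    }
}
```

Failing in the request handler this way would return a clear error to the client rather than exhausting the transaction retry loop on a constraint violation.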


> Setting GrantOption to UNSET upsets Sentry
> --
>
> Key: SENTRY-1548
> URL: https://issues.apache.org/jira/browse/SENTRY-1548
> Project: Sentry
>  Issue Type: Bug
>  Components: Sentry
>Reporter: Alexander Kolbasov
>Assignee: kalyan kumar kalvagadda
>Priority: Minor
>  Labels: bite-sized
>
> If we send a Thrift request to sentry (using regular api) with GrantOption 
> set to UNSET (-1) we get the following error:
> {code}
> TransactionManager.executeTransactionWithRetry(TransactionManager.java:102)] 
> The transaction has reached max retry number, will not retry again.
> javax.jdo.JDODataStoreException: Insert of object 
> "org.apache.sentry.provider.db.service.model.MSentryPrivilege@6bbfd4c9" using 
> statement "INSERT INTO `SENTRY_DB_PRIVILEGE` 
> (`DB_PRIVILEGE_ID`,`SERVER_NAME`,`WITH_GRANT_OPTION`,`CREATE_TIME`,`TABLE_NAME`,`URI`,`ACTION`,`COLUMN_NAME`,`DB_NAME`,`PRIVILEGE_SCOPE`)
>  VALUES (?,?,?,?,?,?,?,?,?,?)" failed : Column 'WITH_GRANT_OPTION' cannot be 
> null
> at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
> at 
> org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
> at 
> org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
> at 
> org.apache.sentry.provider.db.service.persistent.SentryStore.alterSentryRoleGrantPrivilegeCore(SentryStore.java:438)
> at 
> org.apache.sentry.provider.db.service.persistent.SentryStore.access$500(SentryStore.java:95)
> at 
> org.apache.sentry.provider.db.service.persistent.SentryStore$8.execute(SentryStore.java:374)
> at 
> org.apache.sentry.provider.db.service.persistent.TransactionManager.executeTransaction(TransactionManager.java:72)
> at 
> org.apache.sentry.provider.db.service.persistent.TransactionManager.executeTransactionWithRetry(TransactionManager.java:93)
> at 
> org.apache.sentry.provider.db.service.persistent.SentryStore.alterSentryRoleGrantPrivileges(SentryStore.java:367)
> at 
> org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.alter_sentry_role_grant_privilege(SentryPolicyStoreProcessor.java:280)
> at 
> org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$alter_sentry_role_grant_privilege.getResult(SentryPolicyService.java:1237)
> at 
> org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$alter_sentry_role_grant_privilege.getResult(SentryPolicyService.java:1222)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.sentry.provider.db.service.thrift.SentryProcessorWrapper.process(SentryProcessorWrapper.java:35)
> at 
> org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:123)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> NestedThrowablesStackTrace:
> com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
> Column 'WITH_GRANT_OPTION' cannot be null
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
> at com.mysql.jdbc.Util.getInstance(Util.java:387)
> at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:934)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3966)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3902)
> at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2526)
> at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2673)
> at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2549)
> at 
> 

[jira] [Commented] (SENTRY-1548) Setting GrantOption to UNSET upsets Sentry

2016-12-02 Thread kalyan kumar kalvagadda (JIRA)

[ 
https://issues.apache.org/jira/browse/SENTRY-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716605#comment-15716605
 ] 

kalyan kumar kalvagadda commented on SENTRY-1548:
-

*Root Cause*
When the UNSET option is provided, Sentry tries to insert NULL into the 
"WITH_GRANT_OPTION" column, which has a NOT NULL constraint. This is the root 
cause of the error. 


> Setting GrantOption to UNSET upsets Sentry
> --
>
> Key: SENTRY-1548
> URL: https://issues.apache.org/jira/browse/SENTRY-1548
> Project: Sentry
>  Issue Type: Bug
>  Components: Sentry
>Reporter: Alexander Kolbasov
>Assignee: kalyan kumar kalvagadda
>Priority: Minor
>  Labels: bite-sized
>
> If we send a Thrift request to sentry (using regular api) with GrantOption 
> set to UNSET (-1) we get the following error:
> {code}
> TransactionManager.executeTransactionWithRetry(TransactionManager.java:102)] 
> The transaction has reached max retry number, will not retry again.
> javax.jdo.JDODataStoreException: Insert of object 
> "org.apache.sentry.provider.db.service.model.MSentryPrivilege@6bbfd4c9" using 
> statement "INSERT INTO `SENTRY_DB_PRIVILEGE` 
> (`DB_PRIVILEGE_ID`,`SERVER_NAME`,`WITH_GRANT_OPTION`,`CREATE_TIME`,`TABLE_NAME`,`URI`,`ACTION`,`COLUMN_NAME`,`DB_NAME`,`PRIVILEGE_SCOPE`)
>  VALUES (?,?,?,?,?,?,?,?,?,?)" failed : Column 'WITH_GRANT_OPTION' cannot be 
> null
> at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
> at 
> org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
> at 
> org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
> at 
> org.apache.sentry.provider.db.service.persistent.SentryStore.alterSentryRoleGrantPrivilegeCore(SentryStore.java:438)
> at 
> org.apache.sentry.provider.db.service.persistent.SentryStore.access$500(SentryStore.java:95)
> at 
> org.apache.sentry.provider.db.service.persistent.SentryStore$8.execute(SentryStore.java:374)
> at 
> org.apache.sentry.provider.db.service.persistent.TransactionManager.executeTransaction(TransactionManager.java:72)
> at 
> org.apache.sentry.provider.db.service.persistent.TransactionManager.executeTransactionWithRetry(TransactionManager.java:93)
> at 
> org.apache.sentry.provider.db.service.persistent.SentryStore.alterSentryRoleGrantPrivileges(SentryStore.java:367)
> at 
> org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.alter_sentry_role_grant_privilege(SentryPolicyStoreProcessor.java:280)
> at 
> org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$alter_sentry_role_grant_privilege.getResult(SentryPolicyService.java:1237)
> at 
> org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$alter_sentry_role_grant_privilege.getResult(SentryPolicyService.java:1222)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.sentry.provider.db.service.thrift.SentryProcessorWrapper.process(SentryProcessorWrapper.java:35)
> at 
> org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:123)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> NestedThrowablesStackTrace:
> com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
> Column 'WITH_GRANT_OPTION' cannot be null
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
> at com.mysql.jdbc.Util.getInstance(Util.java:387)
> at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:934)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3966)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3902)
> at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2526)
> at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2673)
> at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2549)
>

[jira] [Updated] (SENTRY-1554) Port SENTRY-1518 to sentry-ha-redesign

2016-12-02 Thread Hao Hao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hao Hao updated SENTRY-1554:

Fix Version/s: (was: 1.8.0)
   sentry-ha-redesign

> Port SENTRY-1518 to sentry-ha-redesign
> --
>
> Key: SENTRY-1554
> URL: https://issues.apache.org/jira/browse/SENTRY-1554
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Fix For: sentry-ha-redesign
>
> Attachments: SENTRY-1518.001-sentry-ha-redesign.patch
>
>






[jira] [Updated] (SENTRY-1554) Port SENTRY-1518 to sentry-ha-redesign

2016-12-02 Thread Hao Hao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hao Hao updated SENTRY-1554:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Port SENTRY-1518 to sentry-ha-redesign
> --
>
> Key: SENTRY-1554
> URL: https://issues.apache.org/jira/browse/SENTRY-1554
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Fix For: sentry-ha-redesign
>
> Attachments: SENTRY-1518.001-sentry-ha-redesign.patch
>
>






[jira] [Updated] (SENTRY-1518) Add metrics for SentryStore transactions

2016-12-02 Thread Hao Hao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hao Hao updated SENTRY-1518:

Fix Version/s: sentry-ha-redesign

> Add metrics for SentryStore transactions
> 
>
> Key: SENTRY-1518
> URL: https://issues.apache.org/jira/browse/SENTRY-1518
> Project: Sentry
>  Issue Type: Improvement
>  Components: Sentry
>Affects Versions: 1.8.0, sentry-ha-redesign
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
> Fix For: 1.8.0, sentry-ha-redesign
>
> Attachments: SENTRY-1517.002.patch, SENTRY-1518.001.patch, 
> SENTRY-1518.002.patch
>
>
> Now that we have SENTRY-1422 integrated and all SentryStore transactions go 
> through the same transaction manager we should add metrics that keep track of 
> all SentryStore exceptions we are getting.





[jira] [Created] (SENTRY-1555) Spark SQL "Show databases" NullPointerException while Sentry turned on

2016-12-02 Thread zhangqw (JIRA)
zhangqw created SENTRY-1555:
---

 Summary: Spark SQL "Show databases"  NullPointerException while 
Sentry turned on
 Key: SENTRY-1555
 URL: https://issues.apache.org/jira/browse/SENTRY-1555
 Project: Sentry
  Issue Type: Bug
  Components: Sentry
Affects Versions: 1.5.1
 Environment: CentOS 6.5 / Hive 1.1.0 / Spark2.0.0
Reporter: zhangqw


I've traced into the source code, and it seems that  of 
Sentry is not set when Spark SQL starts a session. This operation should be done 
in org.apache.sentry.binding.hive.HiveAuthzBindingSessionHook, which is not 
called by Spark SQL.
Edit: I copied hive-site.xml (which turns on Sentry) and all Sentry jars into 
Spark's classpath.
Here is the stack:
===
16/11/30 10:54:50 WARN SentryMetaStoreFilterHook: Error getting DB list
java.lang.NullPointerException
at java.util.concurrent.ConcurrentHashMap.hash(ConcurrentHashMap.java:333)
at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:988)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:162)
at 
org.apache.sentry.provider.common.HadoopGroupMappingService.getGroups(HadoopGroupMappingService.java:60)
at 
org.apache.sentry.binding.hive.HiveAuthzBindingHook.getHiveBindingWithPrivilegeCache(HiveAuthzBindingHook.java:956)
at 
org.apache.sentry.binding.hive.HiveAuthzBindingHook.filterShowDatabases(HiveAuthzBindingHook.java:826)
at 
org.apache.sentry.binding.metastore.SentryMetaStoreFilterHook.filterDb(SentryMetaStoreFilterHook.java:131)
at 
org.apache.sentry.binding.metastore.SentryMetaStoreFilterHook.filterDatabases(SentryMetaStoreFilterHook.java:59)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1031)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
at com.sun.proxy.$Proxy38.getAllDatabases(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1234)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174)
at org.apache.hadoop.hive.ql.metadata.Hive.&lt;init&gt;(Hive.java:166)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.&lt;init&gt;(HiveClientImpl.scala:170)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
at 
org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
at 
org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
at 
org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
at 
org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
at 
org.apache.spark.sql.hive.HiveSessionState.metadataHive$lzycompute(HiveSessionState.scala:43)
at 
org.apache.spark.sql.hive.HiveSessionState.metadataHive(HiveSessionState.scala:43)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:62)
at 
org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:84)
at 
org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


