[jira] [Updated] (HBASE-5823) Hbck should be able to print help
[ https://issues.apache.org/jira/browse/HBASE-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5823:
---------------------------------
    Attachment: hbase-hbck.patch

Simple patch attached.

> Hbck should be able to print help
> ---------------------------------
>
> Key: HBASE-5823
> URL: https://issues.apache.org/jira/browse/HBASE-5823
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 0.92.1, 0.96.0, 0.94.1
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Priority: Minor
> Attachments: hbase-hbck.patch
>
> bin/hbase hbck -h and -help should print the help message. It used to print help only when unrecognized options were passed. We can backport this to the 0.92/0.94 branches as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
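A minimal sketch of the requested behavior (class, method, and option names are illustrative, not the actual HBaseFsck code): treat -h and -help as explicit requests for the usage text, in addition to printing it for unrecognized options.

```java
// Hypothetical sketch of hbck argument handling; "-details" stands in for
// any real option and is an assumption, not the actual hbck option set.
public class HbckHelpSketch {
    static void printUsage() {
        System.out.println("Usage: hbck [-h|-help] [options]");
    }

    public static void main(String[] args) {
        for (String arg : args) {
            // New behavior: -h/-help explicitly prints the help message.
            if (arg.equals("-h") || arg.equals("-help")) {
                printUsage();
                return;
            }
            // Old behavior, kept: unrecognized options also print usage.
            if (!arg.equals("-details")) {
                System.out.println("Unrecognized option: " + arg);
                printUsage();
                return;
            }
        }
        System.out.println("running checks...");
    }
}
```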
[jira] [Updated] (HBASE-5760) Unit tests should write only under /target
[ https://issues.apache.org/jira/browse/HBASE-5760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5760:
---------------------------------
    Attachment: HBASE-5760_v1.patch

Attaching the patch from the review. Thanks Stack for the review. I'll take a look at HBASE-5747 shortly.

> Unit tests should write only under /target
> ------------------------------------------
>
> Key: HBASE-5760
> URL: https://issues.apache.org/jira/browse/HBASE-5760
> Project: HBase
> Issue Type: Improvement
> Components: test
> Affects Versions: 0.96.0
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Priority: Minor
> Attachments: HBASE-5760_v1.patch
>
> Some of the unit test runs result in files under $hbase_home/test, $hbase_home/build, or $hbase_home/. We should ensure that all tests use target as their data location.
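The convention the issue asks for can be sketched as a small helper that resolves every test's data directory under target/. The property name "test.build.data" follows the Hadoop convention; the class and method names here are illustrative, not the patch's actual code.

```java
import java.io.File;

// Sketch: a single helper that unit tests would call for their data
// location, so nothing ever lands under $hbase_home/test or $hbase_home/build.
public class TestDirSketch {
    static File getTestDir(String testName) {
        // Fall back to target/test-data when the build does not override it.
        String base = System.getProperty("test.build.data", "target/test-data");
        File dir = new File(base, testName);
        dir.mkdirs(); // ensure the directory exists before the test writes into it
        return dir;
    }

    public static void main(String[] args) {
        File dir = getTestDir("TestExample");
        System.out.println(dir.getPath());
    }
}
```

Centralizing the lookup in one helper also makes a later sweep (like this patch) a mechanical change instead of a per-test hunt.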
[jira] [Updated] (HBASE-5760) Unit tests should write only under /target
[ https://issues.apache.org/jira/browse/HBASE-5760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5760:
---------------------------------
    Status: Patch Available (was: Open)

> Unit tests should write only under /target
> ------------------------------------------
>
> Key: HBASE-5760
> URL: https://issues.apache.org/jira/browse/HBASE-5760
> Project: HBase
> Issue Type: Improvement
> Components: test
> Affects Versions: 0.96.0
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Priority: Minor
> Attachments: HBASE-5760_v1.patch
[jira] [Updated] (HBASE-5671) hbase.metrics.showTableName should be true by default
[ https://issues.apache.org/jira/browse/HBASE-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5671:
---------------------------------
    Attachment: HBASE-5671_v1.patch

Attaching simple patch.

> hbase.metrics.showTableName should be true by default
> -----------------------------------------------------
>
> Key: HBASE-5671
> URL: https://issues.apache.org/jira/browse/HBASE-5671
> Project: HBase
> Issue Type: Improvement
> Components: metrics
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Priority: Trivial
> Attachments: HBASE-5671_v1.patch
>
> HBASE-4768 added per-cf metrics and a new configuration option hbase.metrics.showTableName. We should switch the conf option to true by default, since it is not intuitive (at least to me) to aggregate per-cf across tables by default, and it seems confusing to report on cf's without table names.
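The default flip amounts to changing the fallback value returned when the key is absent from the configuration. A sketch using java.util.Properties as a stand-in for HBase's Configuration (the real API surface differs):

```java
import java.util.Properties;

// Illustrative model of the change: absent key now means "show table names".
public class ShowTableNameDefault {
    static boolean showTableName(Properties conf) {
        // Before the patch the second argument would have been "false";
        // after it, operators must explicitly opt out.
        return Boolean.parseBoolean(
            conf.getProperty("hbase.metrics.showTableName", "true"));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(showTableName(conf));          // unset: defaults to true
        conf.setProperty("hbase.metrics.showTableName", "false");
        System.out.println(showTableName(conf));          // explicit opt-out
    }
}
```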
[jira] [Updated] (HBASE-5623) Race condition when rolling the HLog and hlogFlush
[ https://issues.apache.org/jira/browse/HBASE-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5623:
---------------------------------
    Attachment: HBASE-5623_v4.patch

While running the unit test, I also noticed that in some cases the syncer() still holds the old writer reference, and holding the updateLock does not guarantee that the other thread's writer pointer is updated to the nextWriter. I had changed Writer to be volatile, but that did not help either. So, I use AtomicReference for managing the Writer pointer. Lars, I have merged my patch with yours for the second tries, and modified Stack's unit test so that it fails nearly every time on trunk, and once in 10 runs with patch v2.

> Race condition when rolling the HLog and hlogFlush
> --------------------------------------------------
>
> Key: HBASE-5623
> URL: https://issues.apache.org/jira/browse/HBASE-5623
> Project: HBase
> Issue Type: Bug
> Components: wal
> Affects Versions: 0.94.0
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Priority: Critical
> Fix For: 0.94.0
> Attachments: 5623.txt, 5623v2.txt, HBASE-5623_v0.patch, HBASE-5623_v4.patch
>
> When doing a ycsb test with a large number of handlers (regionserver.handler.count=60), I get the following exceptions:
> {code}
> Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.NullPointerException
>   at org.apache.hadoop.io.SequenceFile$Writer.getLength(SequenceFile.java:1099)
>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.getLength(SequenceFileLogWriter.java:314)
>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1291)
>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1388)
>   at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:2192)
>   at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1985)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3400)
>   at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:366)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1351)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:920)
>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:152)
>   at $Proxy1.multi(Unknown Source)
>   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1691)
>   at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1689)
>   at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:214)
> {code}
> and
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.io.SequenceFile$Writer.checkAndWriteSync(SequenceFile.java:1026)
>   at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1068)
>   at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1035)
>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.append(SequenceFileLogWriter.java:279)
>   at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.hlogFlush(HLog.java:1237)
>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1271)
>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1391)
>   at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:2192)
>   at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1985)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3400)
>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:366)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1351)
> {code}
> It seems the root cause of the issue is that we open a new log writer and close the old one at HLog#rollWriter() holding the updateLock, but the other threads doing syncer() call
> {code}
> logSyncerThread.hlogFlush(this.writer);
> {code}
> without holding the updateLock. LogSyncer only synchronizes against concurrent appends and flush(), but not on the passed writer, which can already be closed by rollWriter(). In this case, since SequenceFile#Writer.close() sets its out field to null, we get the NPE.
[jira] [Updated] (HBASE-5623) Race condition when rolling the HLog and hlogFlush
[ https://issues.apache.org/jira/browse/HBASE-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5623:
---------------------------------
    Attachment: HBASE-5623_v5.patch

@Ted, Forgot to git add. Also applied your suggestion.

> Race condition when rolling the HLog and hlogFlush
> --------------------------------------------------
>
> Key: HBASE-5623
> URL: https://issues.apache.org/jira/browse/HBASE-5623
> Project: HBase
> Issue Type: Bug
> Components: wal
> Affects Versions: 0.94.0
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Priority: Critical
> Fix For: 0.94.0
> Attachments: 5623.txt, 5623v2.txt, HBASE-5623_v0.patch, HBASE-5623_v4.patch, HBASE-5623_v5.patch
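The fix direction debated in the comments above can be sketched as follows. This is an illustrative model, not the HLog code: the roller swaps the writer reference atomically before closing the retired one, so a flusher that reads the reference once never dereferences a stale field. Note that a flusher could still obtain the old writer just before close(), which is why the discussion also coordinates flushing and rolling (updateLock, logRollRunning) rather than relying on the reference swap alone.

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal model of the writer-roll race; all names are illustrative.
public class WriterSwapSketch {
    static class Writer {
        private boolean closed;
        synchronized void append(String edit) {
            // Mirrors the NPE: appending to a closed SequenceFile writer fails.
            if (closed) throw new IllegalStateException("append on closed writer");
        }
        synchronized void close() { closed = true; }
    }

    private final AtomicReference<Writer> writer = new AtomicReference<>(new Writer());

    void rollWriter() {
        Writer old = writer.getAndSet(new Writer()); // publish the new writer first...
        old.close();                                 // ...then close the retired one
    }

    void flush(String edit) {
        // Read the reference exactly once; caching this.writer in a local
        // across a roll is what left syncer() holding a closed writer.
        writer.get().append(edit);
    }

    public static void main(String[] args) {
        WriterSwapSketch log = new WriterSwapSketch();
        log.flush("edit-1");
        log.rollWriter();
        log.flush("edit-2"); // sees the new writer, not the closed one
        System.out.println("ok");
    }
}
```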
[jira] [Updated] (HBASE-5623) Race condition when rolling the HLog and hlogFlush
[ https://issues.apache.org/jira/browse/HBASE-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5623:
---------------------------------
    Attachment: HBASE-5623_v6-alt.patch

Thanks Lars for the patch. I retried again based on your comment, and was not able to reproduce the condition necessitating AtomicRefs. So, I just added one more check in the unit test, and added a volatile to logRollRunning in your patch (5623-suggestion). I have tested this with the unit test, but I am not able to test it on the ycsb cluster. I'll do that Monday. Until then, we can go with this patch, if you are comfortable.

> Race condition when rolling the HLog and hlogFlush
> --------------------------------------------------
>
> Key: HBASE-5623
> URL: https://issues.apache.org/jira/browse/HBASE-5623
> Project: HBase
> Issue Type: Bug
> Components: wal
> Affects Versions: 0.94.0
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Priority: Critical
> Fix For: 0.94.0
> Attachments: 5623-suggestion.txt, 5623.txt, 5623v2.txt, HBASE-5623_v0.patch, HBASE-5623_v4.patch, HBASE-5623_v5.patch, HBASE-5623_v6-alt.patch
[jira] [Updated] (HBASE-5623) Race condition when rolling the HLog and hlogFlush
[ https://issues.apache.org/jira/browse/HBASE-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5623:
---------------------------------
    Attachment: HBASE-5623_v0.patch

Attaching a simple patch, which seems to solve the problem. This wraps the call logSyncerThread.hlogFlush(this.writer) so that it holds the updateLock. Otherwise, I guess we could also lose wal edits, since the writer cannot append the pendingWrites.

> Race condition when rolling the HLog and hlogFlush
> --------------------------------------------------
>
> Key: HBASE-5623
> URL: https://issues.apache.org/jira/browse/HBASE-5623
> Project: HBase
> Issue Type: Bug
> Components: wal
> Affects Versions: 0.94.0
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Attachments: HBASE-5623_v0.patch
[jira] [Updated] (HBASE-5371) Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API
[ https://issues.apache.org/jira/browse/HBASE-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5371:
---------------------------------
    Attachment: HBASE-5371-addendum_v1.patch

From my testing and understanding of the code, the version defined by the coprocessor is not checked in the invocation code path, so the version defined in AccessControllerProtocol is not relevant anyway. We can file a new jira for version checking, but since we are going to work on wire compatibility for coprocessors, let's wait on that for now. I am attaching a patch which decreases the version back to 1. I have tested adding a new method to the client and invoking the old server, and the method invocation throws NoSuchMethodException wrapped in RetriesExhaustedException. Applying this patch to trunk, and pushing both of these to 0.92.1, seems fine to me. wdyt?

{code}
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
Tue Feb 21 18:04:37 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
Tue Feb 21 18:04:38 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
Tue Feb 21 18:04:39 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
Tue Feb 21 18:04:40 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
Tue Feb 21 18:04:42 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
Tue Feb 21 18:04:44 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
Tue Feb 21 18:04:48 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
Tue Feb 21 18:04:52 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
Tue Feb 21 18:05:00 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
Tue Feb 21 18:05:16 PST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@630f41e9, java.io.IOException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hbase.security.access.AccessControllerProtocol.shinyNewMethod()
  at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:183)
  at org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
  at $Proxy2.shinyNewMethod(Unknown Source)
  at org.apache.hadoop.hbase.NewMethodTest.main(NewMethodTest.java:36)
{code}

@Andrew an alternate strategy would be for the client to actually perform an operation and see whether it fails or not. But to do that, the client has to create a dummy table, or put a dummy value, etc., which seems very dangerous. Throwing NoSuchMethodException seems more appropriate to me, if the server does not support the call.

> Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API
> --------------------------------------------------------------------------------
>
> Key: HBASE-5371
> URL: https://issues.apache.org/jira/browse/HBASE-5371
> Project: HBase
> Issue Type: Sub-task
> Components: security
> Affects Versions: 0.94.0, 0.92.1
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Fix For: 0.94.0
> Attachments: HBASE-5371-addendum_v1.patch, HBASE-5371_v2.patch, HBASE-5371_v3-noprefix.patch, HBASE-5371_v3.patch
>
> We need to introduce something like AccessControllerProtocol.checkPermissions(Permission[] permissions) API, so that clients can check access rights before carrying out the operations. We need this kind of operation for HCATALOG-245, which introduces authorization providers for hbase over hcat. We cannot use getUserPermissions() since it requires ADMIN permissions on the global/table level.
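The compatibility behavior described above, an unsupported method surfacing as NoSuchMethodException that the client can interpret as "feature not available", can be modeled with plain reflection standing in for the RPC layer. The interface and method names below are illustrative, not the real protocol classes.

```java
import java.lang.reflect.Method;

// Sketch: probing whether a protocol exposes a method, instead of issuing
// dummy writes against the server to see if they fail.
public class MethodProbeSketch {
    // Stand-in for an older server's protocol, which lacks the new method.
    interface OldProtocol { void checkPermissions(); }

    static boolean supports(Class<?> protocol, String methodName) {
        try {
            protocol.getMethod(methodName);
            return true;
        } catch (NoSuchMethodException e) {
            return false; // older server: the method is absent from the protocol
        }
    }

    public static void main(String[] args) {
        System.out.println(supports(OldProtocol.class, "checkPermissions"));
        System.out.println(supports(OldProtocol.class, "shinyNewMethod"));
    }
}
```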
[jira] [Updated] (HBASE-5400) Some tests does not have annotations for (Small|Medium|Large)Tests
[ https://issues.apache.org/jira/browse/HBASE-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5400:
---------------------------------
    Status: Patch Available (was: Open)

> Some tests does not have annotations for (Small|Medium|Large)Tests
> -------------------------------------------------------------------
>
> Key: HBASE-5400
> URL: https://issues.apache.org/jira/browse/HBASE-5400
> Project: HBase
> Issue Type: Bug
> Components: security, test
> Affects Versions: 0.94.0, 0.92.1
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Attachments: HBASE-5400_v1.patch
>
> These tests do not have annotations, and are not picked up by -PrunAllTests
> {code}
> security/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessControlFilter.java
> security/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
> security/src/test/java/org/apache/hadoop/hbase/security/access/TestTablePermissions.java
> security/src/test/java/org/apache/hadoop/hbase/security/access/TestZKPermissionsWatcher.java
> security/src/test/java/org/apache/hadoop/hbase/security/token/TestTokenAuthentication.java
> security/src/test/java/org/apache/hadoop/hbase/security/token/TestZKSecretWatcher.java
> src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
> {code}
> We can also backport this to 0.92.1, since development will continue on the 0.92 branch.
[jira] [Updated] (HBASE-5400) Some tests does not have annotations for (Small|Medium|Large)Tests
[ https://issues.apache.org/jira/browse/HBASE-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5400:
---------------------------------
    Attachment: HBASE-5400_v1.patch

Attaching a patch. The /security tests are annotated with LargeTests, and TestHFileDataBlockEncoder with SmallTests since it runs in about 1 second.

> Some tests does not have annotations for (Small|Medium|Large)Tests
> -------------------------------------------------------------------
>
> Key: HBASE-5400
> URL: https://issues.apache.org/jira/browse/HBASE-5400
> Project: HBase
> Issue Type: Bug
> Components: security, test
> Affects Versions: 0.94.0, 0.92.1
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Attachments: HBASE-5400_v1.patch
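An illustrative model of why the unannotated tests were silently skipped: category-based selection only sees classes that carry a category marker. The real mechanism is JUnit's @Category annotation with the SmallTests/MediumTests/LargeTests marker classes; this self-contained sketch only mirrors that idea with a string-valued annotation.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Sketch of category-based test selection; not JUnit's actual API.
public class CategoryFilterSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Category { String value(); }

    @Category("SmallTests") static class TestHFileDataBlockEncoder {}
    @Category("LargeTests") static class TestAccessController {}
    static class TestWithoutAnnotation {} // invisible to category selection

    static boolean selected(Class<?> testClass, String wanted) {
        Category c = testClass.getAnnotation(Category.class);
        // An unannotated class matches no category, so no profile runs it.
        return c != null && c.value().equals(wanted);
    }

    public static void main(String[] args) {
        System.out.println(selected(TestHFileDataBlockEncoder.class, "SmallTests"));
        System.out.println(selected(TestWithoutAnnotation.class, "SmallTests"));
    }
}
```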
[jira] [Updated] (HBASE-5371) Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API
[ https://issues.apache.org/jira/browse/HBASE-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5371:
---------------------------------
    Status: Patch Available (was: Open)

> Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API
> --------------------------------------------------------------------------------
>
> Key: HBASE-5371
> URL: https://issues.apache.org/jira/browse/HBASE-5371
> Project: HBase
> Issue Type: Sub-task
> Components: security
> Affects Versions: 0.94.0, 0.92.1
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Attachments: HBASE-5371_v2.patch, HBASE-5371_v3-noprefix.patch, HBASE-5371_v3.patch
>
> We need to introduce something like AccessControllerProtocol.checkPermissions(Permission[] permissions) API, so that clients can check access rights before carrying out the operations. We need this kind of operation for HCATALOG-245, which introduces authorization providers for hbase over hcat. We cannot use getUserPermissions() since it requires ADMIN permissions on the global/table level.
[jira] [Updated] (HBASE-5341) Push the security 0.92 profile to maven repo
[ https://issues.apache.org/jira/browse/HBASE-5341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5341:
---------------------------------
    Description: Hbase 0.92.0 was released with two artifacts, plain and security. The security code is built with -Psecurity. There are two tarballs, but only the plain jar is in the maven repo at repository.a.o. I see no reason to do a separate artifact for the security related code, since 0.92 already depends on secure Hadoop 1.0.0, and all of the security related code is not loaded by default. In this issue, I propose we merge the code under /security to src/ and remove the maven profile. Edit: after some discussion, and the plans for modularizing the build to include a security module, we changed the issue description to push the security jars in 0.92.1 to maven repo.

    was: Hbase 0.92.0 was released with two artifacts, plain and security. The security code is built with -Psecurity. There are two tarballs, but only the plain jar is in the maven repo at repository.a.o. I see no reason to do a separate artifact for the security related code, since 0.92 already depends on secure Hadoop 1.0.0, and all of the security related code is not loaded by default. In this issue, I propose we merge the code under /security to src/ and remove the maven profile.

    Summary: Push the security 0.92 profile to maven repo (was: HBase build artifact should include security code by defult)

I have recycled this issue, and changed the title.

> Push the security 0.92 profile to maven repo
> --------------------------------------------
>
> Key: HBASE-5341
> URL: https://issues.apache.org/jira/browse/HBASE-5341
> Project: HBase
> Issue Type: Improvement
> Components: build, security
> Affects Versions: 0.94.0, 0.92.1
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Fix For: 0.92.1
>
> Hbase 0.92.0 was released with two artifacts, plain and security. The security code is built with -Psecurity. There are two tarballs, but only the plain jar is in the maven repo at repository.a.o. I see no reason to do a separate artifact for the security related code, since 0.92 already depends on secure Hadoop 1.0.0, and all of the security related code is not loaded by default. In this issue, I propose we merge the code under /security to src/ and remove the maven profile. Edit: after some discussion, and the plans for modularizing the build to include a security module, we changed the issue description to push the security jars in 0.92.1 to maven repo.
[jira] [Updated] (HBASE-5358) HBaseObjectWritable should be able to serialize generic arrays not defined previously
[ https://issues.apache.org/jira/browse/HBASE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5358:

    Attachment: HBASE-5358_v3.patch

Attaching the latest version of the patch from review. Incorporates a trivial javadoc fix as suggested.

    HBaseObjectWritable should be able to serialize generic arrays not defined previously

             Key: HBASE-5358
             URL: https://issues.apache.org/jira/browse/HBASE-5358
         Project: HBase
      Issue Type: Improvement
      Components: coprocessors, io
        Reporter: Enis Soztutar
        Assignee: Enis Soztutar
     Attachments: HBASE-5358_v3.patch

HBaseObjectWritable can encode Writable[]s, but cannot encode A[] where A extends Writable. This becomes an issue, for example, when adding a coprocessor method that takes A[] (see HBASE-5352).
[jira] [Updated] (HBASE-5358) HBaseObjectWritable should be able to serialize generic arrays not defined previously
[ https://issues.apache.org/jira/browse/HBASE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5358:

    Affects Version/s: 0.94.0
               Status: Patch Available  (was: Open)

    HBaseObjectWritable should be able to serialize generic arrays not defined previously

             Key: HBASE-5358
             URL: https://issues.apache.org/jira/browse/HBASE-5358
         Project: HBase
      Issue Type: Improvement
      Components: coprocessors, io
Affects Versions: 0.94.0
        Reporter: Enis Soztutar
        Assignee: Enis Soztutar
     Attachments: HBASE-5358_v3.patch

HBaseObjectWritable can encode Writable[]s, but cannot encode A[] where A extends Writable. This becomes an issue, for example, when adding a coprocessor method that takes A[] (see HBASE-5352).
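The limitation described above, and the general shape of the fix, can be sketched in plain Java: to round-trip an A[] where A is a subtype, the encoder must record the array's runtime component class so the decoder can reconstruct a correctly typed array via reflection. This is only an illustrative sketch under that assumption, not the actual HBASE-5358 patch; the class and method names here are hypothetical.

```java
import java.lang.reflect.Array;

// Illustrative sketch (not the HBASE-5358 patch): a serializer that only
// handles a fixed set of array types (e.g. Writable[]) loses the subtype.
// Recording the runtime component class lets the decoder rebuild an A[]
// rather than a plain Object[].
public class GenericArrayDemo {

    // Simulates an encode/decode cycle: capture the component class from the
    // input array, then reconstruct a correctly typed array reflectively.
    public static Object[] roundTrip(Object[] in) {
        Class<?> component = in.getClass().getComponentType();
        Object[] out = (Object[]) Array.newInstance(component, in.length);
        System.arraycopy(in, 0, out, 0, in.length);
        return out;
    }

    public static void main(String[] args) {
        // String[] stands in for A[] where A extends Writable.
        String[] src = {"a", "b"};
        Object[] copy = roundTrip(src);
        // Because the component type was preserved, the cast succeeds.
        String[] typed = (String[]) copy;
        System.out.println(typed.getClass().getComponentType().getName());
    }
}
```

The key point is that `in.getClass().getComponentType()` recovers the concrete subtype at runtime, so the wire format only needs to carry the component class name alongside the elements.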
[jira] [Updated] (HBASE-5341) HBase build artifact should include security code by default
[ https://issues.apache.org/jira/browse/HBASE-5341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5341:

    Component/s: security, build

    HBase build artifact should include security code by default

             Key: HBASE-5341
             URL: https://issues.apache.org/jira/browse/HBASE-5341
         Project: HBase
      Issue Type: Improvement
      Components: build, security
Affects Versions: 0.94.0, 0.92.1
        Reporter: Enis Soztutar
        Assignee: Enis Soztutar

HBase 0.92.0 was released with two artifacts, plain and security. The security code is built with -Psecurity. There are two tarballs, but only the plain jar is in the Maven repo at repository.a.o. I see no reason to have a separate artifact for the security-related code, since 0.92 already depends on secure Hadoop 1.0.0 and none of the security-related code is loaded by default. In this issue, I propose we merge the code under /security into src/ and remove the Maven profile.