[jira] [Commented] (HBASE-10886) add htrace-zipkin to the runtime dependencies again
[ https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961655#comment-13961655 ] Masatake Iwasaki commented on HBASE-10886: -- From the perspective of the runtime dependencies of the binary distribution of HBase, all of the jars htrace-zipkin requires are already provided (in the lib dir). In order to keep the compile-time dependencies as clean as possible, I think [providing (the way to build) the jar with dependencies|https://github.com/cloudera/htrace/pull/25] for manual setup is an option. I would like to make setup of htrace-zipkin easier in any case, because it is currently the only viable span receiver in a distributed environment. add htrace-zipkin to the runtime dependencies again --- Key: HBASE-10886 URL: https://issues.apache.org/jira/browse/HBASE-10886 Project: HBase Issue Type: Improvement Components: build, documentation Reporter: Masatake Iwasaki Assignee: Masatake Iwasaki Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10886-0.patch, HBASE-10886-1.patch htrace-zipkin was once removed from the dependencies in HBASE-9700. Because all of the dependencies of htrace-zipkin are now bundled with HBase, it is good to add it back for ease of use. -- This message was sent by Atlassian JIRA (v6.2#6252)
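For reference, a jar bundling htrace-zipkin with its transitive dependencies can be produced with the standard maven-assembly-plugin `jar-with-dependencies` descriptor. This is a generic sketch of the approach suggested in the linked pull request, not the actual htrace build configuration:

```xml
<!-- Hypothetical pom.xml fragment: packages the module together with its
     transitive dependencies into a single jar for manual drop-in setup. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Running `mvn package` with such a plugin configuration would then emit an additional `*-jar-with-dependencies.jar` artifact alongside the normal jar.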
[jira] [Commented] (HBASE-10902) Make Secure Bulk Load work across remote secure clusters
[ https://issues.apache.org/jira/browse/HBASE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961657#comment-13961657 ] Hadoop QA commented on HBASE-10902: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638938/HBASE-10902-v1-trunk.patch against trunk revision . ATTACHMENT ID: 12638938 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 1.3.9) warning. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9208//console This message is automatically generated. Make Secure Bulk Load work across remote secure clusters Key: HBASE-10902 URL: https://issues.apache.org/jira/browse/HBASE-10902 Project: HBase Issue Type: Improvement Affects Versions: 0.96.1 Reporter: Jerry He Assignee: Jerry He Fix For: 0.99.0 Attachments: HBASE-10902-v0-0.96.patch, HBASE-10902-v1-trunk.patch Two secure clusters, both with kerberos enabled. Run bulk load on one cluster to load files from another cluster. 
biadmin@hdtest249:~ hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c TestTable_rr
Bulk load failed. In the region server log:
{code}
2014-04-02 20:04:56,361 ERROR org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint: Failed to complete bulk load
java.lang.IllegalArgumentException: Wrong FS: hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c/info/6b44ca48aebf48d98cb3491f512c41a7, expected: hdfs://hdtest249.svl.ibm.com:9000
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1248)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1244)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at
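The failure comes from HDFS's path sanity check: the region server resolves the staging path against its own default filesystem, and FileSystem.checkPath rejects a fully-qualified path whose authority differs. A minimal sketch of that comparison (a simplified illustration, not the actual Hadoop code, which also handles default ports and canonicalization):

```java
import java.net.URI;

public class WrongFsSketch {
    // Simplified version of the scheme/authority comparison that
    // FileSystem.checkPath performs before accepting a path.
    static boolean matchesFs(URI fsUri, URI path) {
        if (path.getScheme() == null) {
            return true; // unqualified paths are resolved against fsUri
        }
        return fsUri.getScheme().equals(path.getScheme())
            && fsUri.getAuthority().equals(path.getAuthority());
    }

    public static void main(String[] args) {
        URI regionServerFs = URI.create("hdfs://hdtest249.svl.ibm.com:9000");
        URI stagingPath = URI.create(
            "hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable");
        if (!matchesFs(regionServerFs, stagingPath)) {
            // Mirrors the IllegalArgumentException in the log above
            System.out.println("Wrong FS: " + stagingPath
                + ", expected: " + regionServerFs);
        }
    }
}
```

In other words, the SecureBulkLoadEndpoint must open a FileSystem for the source URI itself (the remote cluster) rather than assume the files already live on the local cluster's filesystem.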
[jira] [Commented] (HBASE-10902) Make Secure Bulk Load work across remote secure clusters
[ https://issues.apache.org/jira/browse/HBASE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961656#comment-13961656 ] Hadoop QA commented on HBASE-10902: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638939/HBASE-10902-v1-trunk.patch against trunk revision . ATTACHMENT ID: 12638939 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 1.3.9) warning. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9209//console This message is automatically generated. Make Secure Bulk Load work across remote secure clusters Key: HBASE-10902 URL: https://issues.apache.org/jira/browse/HBASE-10902 Project: HBase Issue Type: Improvement Affects Versions: 0.96.1 Reporter: Jerry He Assignee: Jerry He Fix For: 0.99.0 Attachments: HBASE-10902-v0-0.96.patch, HBASE-10902-v1-trunk.patch Two secure clusters, both with kerberos enabled. Run bulk load on one cluster to load files from another cluster. 
biadmin@hdtest249:~ hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c TestTable_rr
Bulk load failed. In the region server log:
{code}
2014-04-02 20:04:56,361 ERROR org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint: Failed to complete bulk load
java.lang.IllegalArgumentException: Wrong FS: hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c/info/6b44ca48aebf48d98cb3491f512c41a7, expected: hdfs://hdtest249.svl.ibm.com:9000
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1248)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1244)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at
[jira] [Updated] (HBASE-10860) Insufficient AccessController covering permission check
[ https://issues.apache.org/jira/browse/HBASE-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-10860: --- Fix Version/s: (was: 0.98.1) 0.98.2 Insufficient AccessController covering permission check --- Key: HBASE-10860 URL: https://issues.apache.org/jira/browse/HBASE-10860 Project: HBase Issue Type: Bug Components: security Affects Versions: 0.98.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10860.patch
{code}
List<Cell> list = (List<Cell>) entry.getValue();
if (list == null || list.isEmpty()) {
  get.addFamily(col);
} else {
  for (Cell cell : list) {
    get.addColumn(col, CellUtil.cloneQualifier(cell));
  }
}
{code}
When a delete family Mutation comes, a Cell will be added into the list with a null qualifier (see Delete#deleteFamily(byte[])). So the check list == null || list.isEmpty() will not catch it, and we will fail to fetch the cells under this column family for the covering permission check. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10860) Insufficient AccessController covering permission check
[ https://issues.apache.org/jira/browse/HBASE-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961658#comment-13961658 ] Anoop Sam John commented on HBASE-10860: This has not gone into 0.98.1. Changed the fix version to 0.98.2. Insufficient AccessController covering permission check --- Key: HBASE-10860 URL: https://issues.apache.org/jira/browse/HBASE-10860 Project: HBase Issue Type: Bug Components: security Affects Versions: 0.98.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10860.patch
{code}
List<Cell> list = (List<Cell>) entry.getValue();
if (list == null || list.isEmpty()) {
  get.addFamily(col);
} else {
  for (Cell cell : list) {
    get.addColumn(col, CellUtil.cloneQualifier(cell));
  }
}
{code}
When a delete family Mutation comes, a Cell will be added into the list with a null qualifier (see Delete#deleteFamily(byte[])). So the check list == null || list.isEmpty() will not catch it, and we will fail to fetch the cells under this column family for the covering permission check. -- This message was sent by Atlassian JIRA (v6.2#6252)
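One possible shape of the fix (a hypothetical simplification for illustration, not the committed patch, and using stand-in types rather than the real HBase Cell API): a cell carrying a null or empty qualifier, which is what Delete#deleteFamily produces, must widen the check to the whole column family, the same as an empty list does.

```java
import java.util.ArrayList;
import java.util.List;

public class CoveringCheckSketch {
    // Minimal stand-in for an HBase Cell; only the qualifier matters here.
    static class Cell {
        final byte[] qualifier;
        Cell(byte[] qualifier) { this.qualifier = qualifier; }
    }

    // Returns the qualifiers the covering permission check should fetch,
    // or null to signal "fetch the whole column family". A cell with a
    // null/empty qualifier (produced by a delete-family Mutation) widens
    // the check to the whole family, which the buggy code missed.
    static List<byte[]> qualifiersToCheck(List<Cell> list) {
        if (list == null || list.isEmpty()) {
            return null; // whole family
        }
        List<byte[]> quals = new ArrayList<byte[]>();
        for (Cell cell : list) {
            if (cell.qualifier == null || cell.qualifier.length == 0) {
                return null; // delete-family marker: whole family
            }
            quals.add(cell.qualifier);
        }
        return quals;
    }
}
```

The caller would then invoke get.addFamily(col) when this returns null and get.addColumn(col, qual) for each qualifier otherwise.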
[jira] [Commented] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes
[ https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961659#comment-13961659 ] Mikhail Antonov commented on HBASE-9864: [~apurtell] when you're talking about "propagated in the background with best effort" and an "internal distributed non-persistent store", and that it doesn't have to be coupled to ZK, do you mean that this store would be a kind of option for the consensus library (referred to via HBASE-10909), and that it would have 2 modes of replication: one for guaranteed propagation of distributed state (like part of a distributed state machine) and one for best-effort propagation? Do I understand that correctly? I.e. when you say best effort, what kind of guarantees do you imply? Notifications bus for use by cluster members keeping up-to-date on changes -- Key: HBASE-9864 URL: https://issues.apache.org/jira/browse/HBASE-9864 Project: HBase Issue Type: Brainstorming Reporter: stack Priority: Blocker Fix For: 1.0.0 In namespaces and acls, zk callbacks are used so all participating servers are notified when there is a change in acls/namespaces list. The new visibility tags feature coming in copies the same model of using zk with listeners for the features' particular notifications. Three systems each w/ their own implementation of the notifications all using zk w/ their own feature-specific watchers. Should probably unify. Do we have to go via zk? Seems like all want to be notified when an hbase table is updated. Could we tell servers directly rather than go via zk? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10917) Fix hbase book Tests page
[ https://issues.apache.org/jira/browse/HBASE-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] bharath v updated HBASE-10917: -- Status: Patch Available (was: Open) Fix hbase book Tests page --- Key: HBASE-10917 URL: https://issues.apache.org/jira/browse/HBASE-10917 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 0.99.0 Reporter: bharath v Assignee: bharath v Priority: Trivial Attachments: HBASE-10917.trunk.v1.patch The command specified to run all tests under a package using a wildcard, mvn test -Dtest=org.apache.hadoop.hbase.client.*, doesn't work because the shell expands the unquoted wildcard. Instead it should be mvn test '-Dtest=org.apache.hadoop.hbase.client.*'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes
[ https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961699#comment-13961699 ] Andrew Purtell commented on HBASE-9864: --- Not a store for the consensus library, but a distributed store/cache for internal use by components like the security coprocessors and namespace management (all of which currently do their own thing). By best effort I meant the epidemic propagation I proposed above. We could tune by interval and fanout. No guarantees besides a likelihood of convergence that can be derived from those parameters. Yes, another mode that guarantees propagation to all RegionServers or returns failure. We could add a simple gossip protocol for the first and use the pluggable distributed barrier facility for the second. The consensus package could handle the second also. Notifications bus for use by cluster members keeping up-to-date on changes -- Key: HBASE-9864 URL: https://issues.apache.org/jira/browse/HBASE-9864 Project: HBase Issue Type: Brainstorming Reporter: stack Priority: Blocker Fix For: 1.0.0 In namespaces and acls, zk callbacks are used so all participating servers are notified when there is a change in acls/namespaces list. The new visibility tags feature coming in copies the same model of using zk with listeners for the features' particular notifications. Three systems each w/ their own implementation of the notifications all using zk w/ their own feature-specific watchers. Should probably unify. Do we have to go via zk? Seems like all want to be notified when an hbase table is updated. Could we tell servers directly rather than go via zk? -- This message was sent by Atlassian JIRA (v6.2#6252)
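A toy simulation of the best-effort mode described in the comment above: epidemic (gossip) propagation whose convergence is tuned by the push fanout per round. This is a sketch only; the node count, fanout, and round cap are made-up illustration values, not anything from HBase:

```java
import java.util.BitSet;
import java.util.Random;

public class GossipSketch {
    // One synchronous gossip round: every node that already has the update
    // pushes it to `fanout` randomly chosen peers.
    static BitSet round(BitSet informed, int nodes, int fanout, Random rnd) {
        BitSet next = (BitSet) informed.clone();
        for (int n = informed.nextSetBit(0); n >= 0; n = informed.nextSetBit(n + 1)) {
            for (int f = 0; f < fanout; f++) {
                next.set(rnd.nextInt(nodes));
            }
        }
        return next;
    }

    // Run rounds until everyone has the update (or a safety cap is hit).
    // The expected number of rounds shrinks as fanout grows, which is the
    // interval/fanout tuning knob mentioned above; there is no hard
    // guarantee, only a likelihood of convergence.
    static int roundsToConverge(int nodes, int fanout, long seed) {
        Random rnd = new Random(seed);
        BitSet informed = new BitSet(nodes);
        informed.set(0); // the server where the change originated
        int rounds = 0;
        while (informed.cardinality() < nodes && rounds < 1000) {
            informed = round(informed, nodes, fanout, rnd);
            rounds++;
        }
        return rounds;
    }

    public static void main(String[] args) {
        System.out.println("rounds: " + roundsToConverge(100, 3, 42L));
    }
}
```

With 100 nodes and fanout 3 the update typically reaches every node within a handful of rounds; raising the fanout trades network chatter for faster convergence. The guaranteed mode discussed in the comment would instead need an acknowledgement/barrier step, which gossip alone does not provide.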
[jira] [Comment Edited] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes
[ https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961699#comment-13961699 ] Andrew Purtell edited comment on HBASE-9864 at 4/7/14 8:24 AM: --- Not a store for the consensus library, a distributed store/cache for internal use by components like the security coprocessors and namespace management (all of which currently do their own thing). By best effort I proposed above epidemic propagation. We could tune by interval and fanout. No guarantees besides a likelihood of convergence after an interval that can be derived from those parameters. Yes, another mode that guarantees propagation to all RegionServers or returns failure. We could add a simple gossip protocol for the first and use the pluggable distributed barrier facility for the second. The consensus package could handle the second also. was (Author: apurtell): Not a store for the consensus library, a distributed store/cache for internal use by components like the security coprocessors and namespace management (all of which currently do their own thing). By best effort I proposed above epidemic propagation. We could tune by interval and fanout. No guarantees besides a likelihood of convergence that can be derived from those parameters. Yes, another mode that guarantees propagation to all RegionServers or returns failure. We could add a simple gossip protocol for the first and use the pluggable distributed barrier facility for the second. The consensus package could handle the second also. Notifications bus for use by cluster members keeping up-to-date on changes -- Key: HBASE-9864 URL: https://issues.apache.org/jira/browse/HBASE-9864 Project: HBase Issue Type: Brainstorming Reporter: stack Priority: Blocker Fix For: 1.0.0 In namespaces and acls, zk callbacks are used so all participating servers are notified when there is a change in acls/namespaces list. 
The new visibility tags feature coming in copies the same model of using zk with listeners for the features' particular notifications. Three systems each w/ their own implementation of the notifications all using zk w/ their own feature-specific watchers. Should probably unify. Do we have to go via zk? Seems like all want to be notified when an hbase table is updated. Could we tell servers directly rather than go via zk? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes
[ https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961705#comment-13961705 ] Mikhail Antonov commented on HBASE-9864: So the emphasis in that case would be on performance over strong consistency, right? So that if some piece of info hasn't been timely replicated to a particular node, it's fine, and the request will look for it using a remote call, or..? Notifications bus for use by cluster members keeping up-to-date on changes -- Key: HBASE-9864 URL: https://issues.apache.org/jira/browse/HBASE-9864 Project: HBase Issue Type: Brainstorming Reporter: stack Priority: Blocker Fix For: 1.0.0 In namespaces and acls, zk callbacks are used so all participating servers are notified when there is a change in acls/namespaces list. The new visibility tags feature coming in copies the same model of using zk with listeners for the features' particular notifications. Three systems each w/ their own implementation of the notifications all using zk w/ their own feature-specific watchers. Should probably unify. Do we have to go via zk? Seems like all want to be notified when an hbase table is updated. Could we tell servers directly rather than go via zk? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10917) Fix hbase book Tests page
[ https://issues.apache.org/jira/browse/HBASE-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961711#comment-13961711 ] Hadoop QA commented on HBASE-10917: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638897/HBASE-10917.trunk.v1.patch against trunk revision . ATTACHMENT ID: 12638897 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+0 tests included{color}. The patch appears to be a documentation patch that doesn't require tests. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnDatanodeDeath(TestLogRolling.java:368) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9210//console This message is automatically generated. 
Fix hbase book Tests page --- Key: HBASE-10917 URL: https://issues.apache.org/jira/browse/HBASE-10917 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 0.99.0 Reporter: bharath v Assignee: bharath v Priority: Trivial Attachments: HBASE-10917.trunk.v1.patch The command specified to run all tests under a package using a wildcard, mvn test -Dtest=org.apache.hadoop.hbase.client.*, doesn't work because the shell expands the unquoted wildcard. Instead it should be mvn test '-Dtest=org.apache.hadoop.hbase.client.*'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10296) Replace ZK with a consensus lib(paxos,zab or raft) running within master processes to provide better master failover performance and state consistency
[ https://issues.apache.org/jira/browse/HBASE-10296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961719#comment-13961719 ] Mikhail Antonov commented on HBASE-10296: - [~pablomedina85] you may want to take a look at HBASE-10909 (and the pdf attached to it). As ZooKeeper is used in many places throughout the codebase, integrating other consensus libs to replace it would require a certain amount of refactoring. Replace ZK with a consensus lib(paxos,zab or raft) running within master processes to provide better master failover performance and state consistency -- Key: HBASE-10296 URL: https://issues.apache.org/jira/browse/HBASE-10296 Project: HBase Issue Type: Brainstorming Components: master, Region Assignment, regionserver Reporter: Honghua Feng Currently the master relies on ZK to elect the active master, monitor liveness and store almost all of its state, such as region states, table info, replication info and so on. ZK also serves as a channel for master-regionserver communication (such as in region assignment) and client-regionserver communication (such as replication state/behavior changes). But zk as a communication channel is fragile due to its one-time watch and asynchronous notification mechanism, which together can lead to missed events (hence missed messages); for example, the master must rely on the idempotence of the state-transition logic to keep the region assignment state machine correct. Actually, almost all of the trickiest inconsistency issues can be traced back to the fragility of zk as a communication channel. Replacing zk with paxos running within the master processes has the following benefits: 1. better master failover performance: all masters, whether the active one or the standby ones, have the same latest state in memory (except lagging ones, which can eventually catch up later on). 
Whenever the active master dies, the newly elected active master can immediately play its role without such failover work as rebuilding its in-memory state by consulting the meta table and zk. 2. better state consistency: the master's in-memory state is the only truth about the system, which eliminates inconsistency from the very beginning. And though the state is held by all masters, paxos guarantees it is identical at any time. 3. a more direct and simple communication pattern: clients change state by sending requests to the master; master and regionservers talk directly to each other via requests and responses, without resorting to a third-party store like zk, which can introduce more uncertainty, worse latency and more complexity. 4. zk would then only be used for liveness monitoring to determine if a regionserver is dead, and later on we could eliminate zk entirely once we build heartbeats between master and regionservers. I know this might look like a very crazy re-architecture, but it deserves deep thinking and serious discussion, right? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10296) Replace ZK with a consensus lib(paxos,zab or raft) running within master processes to provide better master failover performance and state consistency
[ https://issues.apache.org/jira/browse/HBASE-10296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961721#comment-13961721 ] Mikhail Antonov commented on HBASE-10296: - Shall we update the title of this jira to reflect the fact that this consensus lib thing is broader than just the master process (and failover performance specifically)? Replace ZK with a consensus lib(paxos,zab or raft) running within master processes to provide better master failover performance and state consistency -- Key: HBASE-10296 URL: https://issues.apache.org/jira/browse/HBASE-10296 Project: HBase Issue Type: Brainstorming Components: master, Region Assignment, regionserver Reporter: Honghua Feng Currently the master relies on ZK to elect the active master, monitor liveness and store almost all of its state, such as region states, table info, replication info and so on. ZK also serves as a channel for master-regionserver communication (such as in region assignment) and client-regionserver communication (such as replication state/behavior changes). But zk as a communication channel is fragile due to its one-time watch and asynchronous notification mechanism, which together can lead to missed events (hence missed messages); for example, the master must rely on the idempotence of the state-transition logic to keep the region assignment state machine correct. Actually, almost all of the trickiest inconsistency issues can be traced back to the fragility of zk as a communication channel. Replacing zk with paxos running within the master processes has the following benefits: 1. better master failover performance: all masters, whether the active one or the standby ones, have the same latest state in memory (except lagging ones, which can eventually catch up later on). Whenever the active master dies, the newly elected active master can immediately play its role without such failover work as rebuilding its in-memory state by consulting the meta table and zk. 2. 
better state consistency: the master's in-memory state is the only truth about the system, which eliminates inconsistency from the very beginning. And though the state is held by all masters, paxos guarantees it is identical at any time. 3. a more direct and simple communication pattern: clients change state by sending requests to the master; master and regionservers talk directly to each other via requests and responses, without resorting to a third-party store like zk, which can introduce more uncertainty, worse latency and more complexity. 4. zk would then only be used for liveness monitoring to determine if a regionserver is dead, and later on we could eliminate zk entirely once we build heartbeats between master and regionservers. I know this might look like a very crazy re-architecture, but it deserves deep thinking and serious discussion, right? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10296) Replace ZK with a consensus lib(paxos,zab or raft) running within master processes to provide better master failover performance and state consistency
[ https://issues.apache.org/jira/browse/HBASE-10296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961723#comment-13961723 ] Mikhail Antonov commented on HBASE-10296: - Linked to HBASE-10909 as it seems like a prerequisite. Replace ZK with a consensus lib(paxos,zab or raft) running within master processes to provide better master failover performance and state consistency -- Key: HBASE-10296 URL: https://issues.apache.org/jira/browse/HBASE-10296 Project: HBase Issue Type: Brainstorming Components: master, Region Assignment, regionserver Reporter: Honghua Feng Currently the master relies on ZK to elect the active master, monitor liveness and store almost all of its state, such as region states, table info, replication info and so on. ZK also serves as a channel for master-regionserver communication (such as in region assignment) and client-regionserver communication (such as replication state/behavior changes). But zk as a communication channel is fragile due to its one-time watch and asynchronous notification mechanism, which together can lead to missed events (hence missed messages); for example, the master must rely on the idempotence of the state-transition logic to keep the region assignment state machine correct. Actually, almost all of the trickiest inconsistency issues can be traced back to the fragility of zk as a communication channel. Replacing zk with paxos running within the master processes has the following benefits: 1. better master failover performance: all masters, whether the active one or the standby ones, have the same latest state in memory (except lagging ones, which can eventually catch up later on). Whenever the active master dies, the newly elected active master can immediately play its role without such failover work as rebuilding its in-memory state by consulting the meta table and zk. 2. 
better state consistency: the master's in-memory state is the only truth about the system, which eliminates inconsistency from the very beginning. And though the state is held by all masters, paxos guarantees it is identical at any time. 3. a more direct and simple communication pattern: clients change state by sending requests to the master; master and regionservers talk directly to each other via requests and responses, without resorting to a third-party store like zk, which can introduce more uncertainty, worse latency and more complexity. 4. zk would then only be used for liveness monitoring to determine if a regionserver is dead, and later on we could eliminate zk entirely once we build heartbeats between master and regionservers. I know this might look like a very crazy re-architecture, but it deserves deep thinking and serious discussion, right? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10295) Refactor the replication implementation to eliminate permanent zk node
[ https://issues.apache.org/jira/browse/HBASE-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961726#comment-13961726 ] Mikhail Antonov commented on HBASE-10295: - I'm wondering if this jira fits under the umbrella of HBASE-10909? Refactor the replication implementation to eliminate permanent zk node --- Key: HBASE-10295 URL: https://issues.apache.org/jira/browse/HBASE-10295 Project: HBase Issue Type: Bug Components: Replication Reporter: Honghua Feng Fix For: 0.99.0 Though this is a broader and bigger change, its original motivation derives from [HBASE-8751|https://issues.apache.org/jira/browse/HBASE-8751]: the newly introduced per-peer tableCFs attribute should be treated the same way as the peer-state, which is a permanent sub-node under the peer node; but using permanent zk nodes is deemed an incorrect practice, so let's refactor to eliminate the permanent zk node. HBASE-8751 can then align its newly introduced per-peer tableCFs attribute with this *correct* implementation theme. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10295) Refactor the replication implementation to eliminate permanent zk node
[ https://issues.apache.org/jira/browse/HBASE-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961728#comment-13961728 ] Mikhail Antonov commented on HBASE-10295: - Just thinking it may be good to have all jiras which are about eliminating something permanent in ZK under the same umbrella. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-7767) Get rid of ZKTable, and table enable/disable state in ZK
[ https://issues.apache.org/jira/browse/HBASE-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961729#comment-13961729 ] Mikhail Antonov commented on HBASE-7767: Shall this jira be part of HBASE-10909? Get rid of ZKTable, and table enable/disable state in ZK - Key: HBASE-7767 URL: https://issues.apache.org/jira/browse/HBASE-7767 Project: HBase Issue Type: Sub-task Components: Zookeeper Affects Versions: 0.95.2 Reporter: Enis Soztutar Assignee: Enis Soztutar As discussed table state in zookeeper for enable/disable state breaks our zookeeper contract. It is also very intrusive, used from the client side, master and region servers. We should get rid of it. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper
[ https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961732#comment-13961732 ] Mikhail Antonov commented on HBASE-10866: - [~stack] looks like I can't edit my comments - is that something you could grant me? Decouple HLogSplitterHandler from ZooKeeper --- Key: HBASE-10866 URL: https://issues.apache.org/jira/browse/HBASE-10866 Project: HBase Issue Type: Sub-task Components: regionserver, Zookeeper Reporter: Mikhail Antonov Assignee: Mikhail Antonov Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, HBaseConsensus.pdf As some sort of follow-up, or an initial step towards HBASE-10296... Whatever consensus algorithm/library may be chosen, perhaps one of the first practical steps towards this goal would be to better abstract the ZK-related APIs and details, which are now spread throughout the codebase (mostly leaked through ZKUtil, ZooKeeperWatcher and listeners). I'd like to propose a series of patches to help better abstract out ZooKeeper (and then help develop consensus APIs). Here is the first version of the patch for initial review (then I'm planning to work on other handlers in the regionserver, and then perhaps start working on abstracting listeners). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10861) Supporting API in ByteRange
[ https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961718#comment-13961718 ] ramkrishna.s.vasudevan commented on HBASE-10861: bq. maybe split the ByteRange interface into ByteRange and MutableByteRange extends ByteRange? Makes sense, +1. Supporting API in ByteRange --- Key: HBASE-10861 URL: https://issues.apache.org/jira/browse/HBASE-10861 Project: HBase Issue Type: Improvement Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Attachments: HBASE-10861.patch We would need APIs such as setLimit(int limit), getLimit(), and asReadOnly(). These APIs would help in implementations that keep buffers offheap (for now, BRs backed by DBB). Anything more that is needed can be added later. -- This message was sent by Atlassian JIRA (v6.2#6252)
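As a thought experiment on the split suggested in the quoted comment, here is a minimal sketch of a read-only ByteRange plus a MutableByteRange carrying the proposed setLimit/getLimit/asReadOnly APIs. All names and semantics below are assumptions drawn from the discussion, not the committed HBase interfaces.

```java
// Hypothetical sketch: immutable view interface plus a mutable subtype.
interface ByteRange {
    int getLimit();          // number of readable bytes
    byte get(int index);
    ByteRange asReadOnly();  // view that rejects mutation
}

interface MutableByteRange extends ByteRange {
    void setLimit(int limit);
    void put(int index, byte b);
}

class SimpleByteRange implements MutableByteRange {
    private final byte[] bytes;
    private int limit;
    private final boolean readOnly;

    SimpleByteRange(byte[] bytes) { this(bytes, bytes.length, false); }

    private SimpleByteRange(byte[] bytes, int limit, boolean readOnly) {
        this.bytes = bytes; this.limit = limit; this.readOnly = readOnly;
    }

    public int getLimit() { return limit; }

    public byte get(int index) {
        if (index >= limit) throw new IndexOutOfBoundsException("index " + index);
        return bytes[index];
    }

    public void setLimit(int limit) {
        if (readOnly) throw new UnsupportedOperationException("read-only range");
        this.limit = limit;
    }

    public void put(int index, byte b) {
        if (readOnly) throw new UnsupportedOperationException("read-only range");
        bytes[index] = b;
    }

    // Shares the backing array but forbids setLimit/put on the returned view.
    public ByteRange asReadOnly() { return new SimpleByteRange(bytes, limit, true); }
}
```

The point of the split is that offheap-backed implementations (e.g. a DBB-backed BR) can hand out read-only views without defensive copies.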
[jira] [Commented] (HBASE-10916) [VisibilityController] Stackable ScanLabelGenerators
[ https://issues.apache.org/jira/browse/HBASE-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961743#comment-13961743 ] Anoop Sam John commented on HBASE-10916: Whether we need to consider Table level ScanLabelGenerator also? [VisibilityController] Stackable ScanLabelGenerators Key: HBASE-10916 URL: https://issues.apache.org/jira/browse/HBASE-10916 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell Assignee: Anoop Sam John Fix For: 0.99.0, 0.98.2 The ScanLabelGenerator is used by the VisibilityController to assemble the effective label set for a user in the RPC context before processing any request. Currently only one implementation of this interface can be installed, although which implementation to use can be specified in the site file. Instead it should be possible to stack multiple implementations of this component the same way we do coprocessors, installed with explicit priority with ties broken by a counter, where those implementations installed later in the chain have an opportunity to modify the pending effective label set. -- This message was sent by Atlassian JIRA (v6.2#6252)
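The stacking described above (coprocessor-style installation with an explicit priority, ties broken by an insertion counter, each later generator seeing and possibly modifying the pending effective label set) could be sketched roughly as follows. The interface here is a deliberate simplification of HBase's actual ScanLabelGenerator, and all chain/entry names are hypothetical.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Simplified stand-in for HBase's ScanLabelGenerator.
interface ScanLabelGenerator {
    List<String> getLabels(String user, List<String> pendingLabels);
}

class ScanLabelGeneratorChain {
    private static final class Entry {
        final ScanLabelGenerator slg; final int priority; final int seq;
        Entry(ScanLabelGenerator slg, int priority, int seq) {
            this.slg = slg; this.priority = priority; this.seq = seq;
        }
    }

    private final List<Entry> entries = new ArrayList<>();
    private int counter = 0;

    void install(ScanLabelGenerator slg, int priority) {
        entries.add(new Entry(slg, priority, counter++));
        // Lower priority value runs first; ties broken by installation order,
        // mirroring how coprocessors are ordered.
        entries.sort(Comparator.<Entry>comparingInt(e -> e.priority)
                               .thenComparingInt(e -> e.seq));
    }

    // Each generator receives the pending label set produced so far.
    List<String> effectiveLabels(String user) {
        List<String> pending = new ArrayList<>();
        for (Entry e : entries) {
            pending = e.slg.getLabels(user, pending);
        }
        return pending;
    }
}
```

Installed implementations later in the chain get the opportunity to add, drop, or rewrite labels produced by earlier ones, which is exactly the property the issue asks for.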
[jira] [Commented] (HBASE-10295) Refactor the replication implementation to eliminate permanent zk node
[ https://issues.apache.org/jira/browse/HBASE-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961744#comment-13961744 ] Andrew Purtell commented on HBASE-10295: Makes sense. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10918) [VisibilityController] System table backed ScanLabelGenerator
[ https://issues.apache.org/jira/browse/HBASE-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961745#comment-13961745 ] ramkrishna.s.vasudevan commented on HBASE-10918: Can I take this after delete on visibility labels is done? [VisibilityController] System table backed ScanLabelGenerator -- Key: HBASE-10918 URL: https://issues.apache.org/jira/browse/HBASE-10918 Project: HBase Issue Type: Sub-task Reporter: Andrew Purtell Fix For: 0.99.0, 0.98.2 A ScanLabelGenerator that retrieves a static set of authorizations for a user or group from a new HBase system table, and ensures these auths are part of the effective set. Useful for forcing a baseline set of auths for a user. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes
[ https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961746#comment-13961746 ] Andrew Purtell commented on HBASE-9864: --- In this particular case we would be propagating updates for a distributed cache, where we want to loosely synchronize a cache kept on all RegionServers. If an update arrives in time, before we go to query the external system, then we have saved some work; otherwise, still no problem. So performance, yes, but lightweight more so. Notifications bus for use by cluster members keeping up-to-date on changes -- Key: HBASE-9864 URL: https://issues.apache.org/jira/browse/HBASE-9864 Project: HBase Issue Type: Brainstorming Reporter: stack Priority: Blocker Fix For: 1.0.0 In namespaces and acls, zk callbacks are used so all participating servers are notified when there is a change in the acls/namespaces list. The new visibility tags feature coming in copies the same model of using zk with listeners for the feature's particular notifications. Three systems, each w/ their own implementation of the notifications, all using zk w/ their own feature-specific watchers. Should probably unify. Do we have to go via zk? Seems like all want to be notified when an hbase table is updated. Could we tell servers directly rather than go via zk? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10801) Ensure DBE interfaces can work with Cell
[ https://issues.apache.org/jira/browse/HBASE-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10801: --- Status: Patch Available (was: Open) Ensure DBE interfaces can work with Cell Key: HBASE-10801 URL: https://issues.apache.org/jira/browse/HBASE-10801 Project: HBase Issue Type: Sub-task Reporter: ramkrishna.s.vasudevan Fix For: 0.99.0 Attachments: HBASE-10801.patch Some changes to the interfaces may be needed for DBEs, or the way they currently work may need to be modified, in order to make DBEs work with Cells. Suggestions and ideas welcome. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10295) Refactor the replication implementation to eliminate permanent zk node
[ https://issues.apache.org/jira/browse/HBASE-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961750#comment-13961750 ] Mikhail Antonov commented on HBASE-10295: - [~fenghh] [~stack] [~lhofhansl] Is it OK if I move it there as a subtask? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes
[ https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961751#comment-13961751 ] Mikhail Antonov commented on HBASE-9864: Thanks for the clarification! Now I better understand the use case. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961757#comment-13961757 ] Anoop Sam John commented on HBASE-10883: Authorizations(List<String> labels) - Validation from here can be done one label after another, so that in the exception message you can clearly say which auth label is invalid. The same applies to VisibilityController#createVisibilityLabelFilter. Can we just use VisibilityLabelsValidator#isValidLabel(byte[] label), which is already there and used by put? {code} throw new IllegalArgumentException("Authorizations cannot contain '(', ')', '&', '|', '!'" + " and cannot be empty: " + label); {code} The error message can be better, I think: this is an invalid auth *label*. Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Assignee: ramkrishna.s.vasudevan Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch, HBASE-10883_2.patch, HBASE-10883_3.patch, HBASE-10883_4.patch, HBASE-10883_5.patch Currently we allow any string as a visibility label or request authorization. However, as seen on HBASE-10878, we accept authorization strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including the visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes
[ https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961759#comment-13961759 ] Anoop Sam John commented on HBASE-9864: --- ACL data, visibility labels and NS details: these are the current items which can use this notification bus, correct? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-8963) Add configuration option to skip HFile archiving
[ https://issues.apache.org/jira/browse/HBASE-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] bharath v updated HBASE-8963: - Attachment: HBASE-8963.trunk.v9.patch Attached v9 of patch. * Rebased it to trunk. Fixed a test to work as per changed method calls. compile target should work properly now * Fixed formatting issues as suggested by Ted (Thanks for the review). * Ran findbugs locally to make sure they don't hit the 100 char limit. * Closed the HTable instance properly in tests. Add configuration option to skip HFile archiving Key: HBASE-8963 URL: https://issues.apache.org/jira/browse/HBASE-8963 Project: HBase Issue Type: Improvement Reporter: Ted Yu Assignee: bharath v Fix For: 0.99.0 Attachments: HBASE-8963.trunk.v1.patch, HBASE-8963.trunk.v2.patch, HBASE-8963.trunk.v3.patch, HBASE-8963.trunk.v4.patch, HBASE-8963.trunk.v5.patch, HBASE-8963.trunk.v6.patch, HBASE-8963.trunk.v7.patch, HBASE-8963.trunk.v8.patch, HBASE-8963.trunk.v9.patch Currently HFileArchiver is always called when a table is dropped. A configuration option (either global or per table) should be provided so that archiving can be skipped when table is deleted. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8963) Add configuration option to skip HFile archiving
[ https://issues.apache.org/jira/browse/HBASE-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961774#comment-13961774 ] Enis Soztutar commented on HBASE-8963: -- I think Lars is right that making hbase.master.hfilecleaner.ttl configurable per table, with the global configuration used if not set, is better than adding yet another config parameter. -- This message was sent by Atlassian JIRA (v6.2#6252)
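The per-table override with a global fallback that the comment suggests is a simple lookup pattern. The sketch below uses plain maps as stand-ins for HBase's table descriptor and Configuration; only hbase.master.hfilecleaner.ttl is a real key, the class and method names are hypothetical.

```java
import java.util.Map;

// Sketch of per-table override with global fallback, as suggested above.
class HFileCleanerTtlResolver {
    static final String TTL_KEY = "hbase.master.hfilecleaner.ttl";

    // Returns the table-level TTL if set, otherwise the global one,
    // otherwise a hard-coded default.
    static long resolveTtl(Map<String, String> tableDescriptorValues,
                           Map<String, String> globalConf,
                           long defaultTtlMs) {
        String v = tableDescriptorValues.get(TTL_KEY);
        if (v == null) {
            v = globalConf.get(TTL_KEY);
        }
        return v == null ? defaultTtlMs : Long.parseLong(v);
    }
}
```

Setting the per-table value to 0 would then express "clean immediately, skip archiving" without introducing a second config parameter.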
[jira] [Resolved] (HBASE-10920) HBase HConnectionManager.createConnection throws java.io.IOException: java.lang.reflect.InvocationTargetException
[ https://issues.apache.org/jira/browse/HBASE-10920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar resolved HBASE-10920. --- Resolution: Not a Problem TableMapReduceUtil methods should be setting the correct classpath, including htrace. Resolving this. HBase HConnectionManager.createConnection throws java.io.IOException: java.lang.reflect.InvocationTargetException --- Key: HBASE-10920 URL: https://issues.apache.org/jira/browse/HBASE-10920 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.99.0 Reporter: Sean McCully Configuration conf = HBaseConfiguration.create(); conf.set("hbase.zookeeper.quorum", "znode-1"); conf.set("hbase.zookeeper.property.clientPort", "2181"); conf.set("hbase.master", "master:16000"); hconnection = HConnectionManager.createConnection(conf); // (Parser.java:40) == Results In == Error: java.io.IOException: java.lang.reflect.InvocationTargetException at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:341) at org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:234) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:215) at com.example.job.Mapper.setup(Parser.java:40) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1597) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:169) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:339) ... 11 more Caused by: java.lang.NoClassDefFoundError: org/htrace/Trace at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:195) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:480) at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:84) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:779) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:588) ... 16 more Caused by: java.lang.ClassNotFoundException: org.htrace.Trace at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:425) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 22 more -- This message was sent by Atlassian JIRA (v6.2#6252)
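For context on the resolution: TableMapReduceUtil.addDependencyJars(job) is the HBase helper being referred to; its core trick is resolving the classpath entry that contains each class the job needs (htrace's Trace included) so the containing jar can be shipped with the job. Below is a simplified, self-contained illustration of that lookup, not HBase's actual implementation.

```java
import java.net.URL;

// Resolve the classpath location (jar or directory) that contains a class,
// the same kind of lookup dependency-shipping helpers perform.
class JarFinder {
    static String findContainingPath(Class<?> clazz) {
        String resource = clazz.getName().replace('.', '/') + ".class";
        ClassLoader cl = clazz.getClassLoader();
        URL url = (cl != null)
                ? cl.getResource(resource)
                : ClassLoader.getSystemResource(resource);  // bootstrap classes
        return url == null ? null : url.toString();
    }
}
```

In a real job setup, calling addDependencyJars before submission is what prevents the NoClassDefFoundError: org/htrace/Trace seen in the report above.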
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10883: --- Status: Open (was: Patch Available) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10883: --- Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-10883: --- Attachment: HBASE-10883_6.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-9864) Notifications bus for use by cluster members keeping up-to-date on changes
[ https://issues.apache.org/jira/browse/HBASE-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961797#comment-13961797 ] Andrew Purtell commented on HBASE-9864: --- bq. ACL data, visibility labels, and NS details Yes, and these would need/use the consistent propagation option. There is also a new security-related use case, an LDAP attribute query cache, that would only need lightweight propagation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10295) Refactor the replication implementation to eliminate permanent zk node
[ https://issues.apache.org/jira/browse/HBASE-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961801#comment-13961801 ] Enis Soztutar commented on HBASE-10295: --- bq. Make Master arbiter for these new system tables – only the master can mod them – and then add a response on the heartbeat to update regionservers on last edit? Could be as simple as master just replying w/ timestamp of last edit. We should do this as a part of (or using) HBASE-9864. I was thinking of something similar, where the data is kept in an hbase table as a snapshot + WAL. All transactions will have a trxid (NO timestamps please). All region servers open a session with a lease and keep heartbeating to renew it. They send the last seen trxid, and the coordinator replies with the list of edits that they should apply to their in-memory cache. If some reader loses its lease, the coordinator (master) invalidates its session (so that there is an upper bound on the time within which the edits will be propagated). The coordinator keeps the last seen trxid per session, so that it can recreate the snapshot and get rid of write-ahead log entries. However, astute readers might have noticed that this is indeed similar to zk's own protocol, except that the data is not replicated via ZAB, but via datanode pipelines and hbase. -- This message was sent by Atlassian JIRA (v6.2#6252)
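The session/trxid scheme sketched in the comment above can be made concrete with a toy coordinator: an ordered edit log indexed by transaction id, sessions reporting the last trxid they have seen, and heartbeat replies carrying the missing edits. All names here are hypothetical; this illustrates the exchange, not HBase code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy coordinator for the snapshot + WAL notification protocol described above.
class NotificationCoordinator {
    private final List<String> editLog = new ArrayList<>();   // edit i has trxid i+1
    private final Map<String, Long> sessionLastSeen = new HashMap<>();

    // Append an edit to the write-ahead log; returns its trxid.
    long append(String edit) {
        editLog.add(edit);
        return editLog.size();
    }

    // Heartbeat: record the session's progress and reply with the edits
    // it has not yet applied to its in-memory cache.
    List<String> heartbeat(String sessionId, long lastSeenTrxId) {
        sessionLastSeen.put(sessionId, lastSeenTrxId);
        return new ArrayList<>(editLog.subList((int) lastSeenTrxId, editLog.size()));
    }

    // Edits seen by every live session can be folded into the snapshot
    // and dropped from the log.
    long minSeenTrxId() {
        return sessionLastSeen.values().stream()
                .mapToLong(Long::longValue).min().orElse(0L);
    }
}
```

A session that misses its lease renewal would simply be removed from sessionLastSeen, which both bounds propagation time and lets log truncation advance.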
[jira] [Commented] (HBASE-7767) Get rid of ZKTable, and table enable/disable state in ZK
[ https://issues.apache.org/jira/browse/HBASE-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961803#comment-13961803 ] Enis Soztutar commented on HBASE-7767: -- From my understanding, HBASE-10909 deals with abstracting zk usage, while this would change the implementation to not use ZK at all. I would think this is part of the parent issue, mainly the metadata management in hbase. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10861) Supporting API in ByteRange
[ https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961804#comment-13961804 ] Enis Soztutar commented on HBASE-10861: --- [~ndimiduk] FYI. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10917) Fix hbase book Tests page
[ https://issues.apache.org/jira/browse/HBASE-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961806#comment-13961806 ] Enis Soztutar commented on HBASE-10917: --- The command actually works for me on bash. Maybe you are using a different shell or Windows? But it won't hurt to commit it. Fix hbase book Tests page --- Key: HBASE-10917 URL: https://issues.apache.org/jira/browse/HBASE-10917 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 0.99.0 Reporter: bharath v Assignee: bharath v Priority: Trivial Attachments: HBASE-10917.trunk.v1.patch The command specified to run all tests under the package using a wildcard, mvn test -Dtest=org.apache.hadoop.hbase.client.*, doesn't work. Instead it should be mvn test '-Dtest=org.apache.hadoop.hbase.client.*'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10917) Fix hbase book Tests page
[ https://issues.apache.org/jira/browse/HBASE-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-10917: -- Resolution: Fixed Fix Version/s: 0.99.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed this. Thanks Bharath. Fix hbase book Tests page --- Key: HBASE-10917 URL: https://issues.apache.org/jira/browse/HBASE-10917 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 0.99.0 Reporter: bharath v Assignee: bharath v Priority: Trivial Fix For: 0.99.0 Attachments: HBASE-10917.trunk.v1.patch The command specified to run all tests under the package using a wildcard, mvn test -Dtest=org.apache.hadoop.hbase.client.*, doesn't work. Instead it should be mvn test '-Dtest=org.apache.hadoop.hbase.client.*'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-8963) Add configuration option to skip HFile archiving
[ https://issues.apache.org/jira/browse/HBASE-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961811#comment-13961811 ] Hadoop QA commented on HBASE-8963: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638959/HBASE-8963.trunk.v9.patch against trunk revision . ATTACHMENT ID: 12638959 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 14 new or modified tests. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: org.apache.hadoop.hbase.master.cleaner.TestSnapshotFromMaster org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface.testHBase3583(TestRegionObserverInterface.java:244) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9212//console This message is automatically generated. 
Add configuration option to skip HFile archiving Key: HBASE-8963 URL: https://issues.apache.org/jira/browse/HBASE-8963 Project: HBase Issue Type: Improvement Reporter: Ted Yu Assignee: bharath v Fix For: 0.99.0 Attachments: HBASE-8963.trunk.v1.patch, HBASE-8963.trunk.v2.patch, HBASE-8963.trunk.v3.patch, HBASE-8963.trunk.v4.patch, HBASE-8963.trunk.v5.patch, HBASE-8963.trunk.v6.patch, HBASE-8963.trunk.v7.patch, HBASE-8963.trunk.v8.patch, HBASE-8963.trunk.v9.patch Currently HFileArchiver is always called when a table is dropped. A configuration option (either global or per table) should be provided so that archiving can be skipped when table is deleted. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10856) Prep for 1.0
[ https://issues.apache.org/jira/browse/HBASE-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961819#comment-13961819 ] Enis Soztutar commented on HBASE-10856: --- Thanks Stack for creating this. Let's start linking the issues. From the above list, I really doubt some of the items will make it into 1.0 (like HBASE-4047), but some of them are absolute musts if you ask me. Especially, we should be focusing on the APIs a bit, and on some correctness fixes (seqId-mvcc, which dist log replay depends on, and also HBASE-9905). Prep for 1.0 Key: HBASE-10856 URL: https://issues.apache.org/jira/browse/HBASE-10856 Project: HBase Issue Type: Umbrella Reporter: stack Tasks for 1.0 copied here from our '1.0.0' mailing list discussion. Idea is to file subtasks off this one. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961825#comment-13961825 ] Kashif J S commented on HBASE-10323: Any reason why this has not been integrated into 0.94.* versions yet? Auto detect data block encoding in HFileOutputFormat Key: HBASE-10323 URL: https://issues.apache.org/jira/browse/HBASE-10323 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: Ishan Chhabra Assignee: Ishan Chhabra Fix For: 0.98.0, 0.99.0 Attachments: HBASE_10323-0.94.15-v1.patch, HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, HBASE_10323-0.94.15-v4.patch, HBASE_10323-0.94.15-v5.patch, HBASE_10323-trunk-v1.patch, HBASE_10323-trunk-v2.patch, HBASE_10323-trunk-v3.patch, HBASE_10323-trunk-v4.patch Currently, one has to specify the data block encoding of the table explicitly using the config parameter hbase.mapreduce.hfileoutputformat.datablock.encoding when doing a bulk load. This option is easily missed, not documented, and also works differently from compression, block size and bloom filter type, which are auto detected. The solution would be to add support to auto detect the data block encoding, similar to the other parameters. The current patch does the following: 1. Automatically detects data block encoding in HFileOutputFormat. 2. Keeps the legacy option of manually specifying the data block encoding around as a way to override auto detection. 3. Moves string conf parsing to the start of the program so that it fails fast during startup instead of failing during record writes. It also makes the internals of the program type safe. 4. Adds missing doc strings and unit tests for the code serializing and deserializing config parameters for bloom filter type, block size and data block encoding. -- This message was sent by Atlassian JIRA (v6.2#6252)
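The decision described in the issue (auto detect, unless the legacy key overrides it) can be sketched in miniature. This is an illustration only: java.util.Properties stands in for the Hadoop Configuration object, and FAST_DIFF / PREFIX are just example DataBlockEncoding names:

```java
import java.util.Properties;

public class EncodingChoice {
    static final String KEY = "hbase.mapreduce.hfileoutputformat.datablock.encoding";

    // Use the manual override when the legacy key is set; otherwise fall back
    // to the encoding auto-detected from the table descriptor.
    static String chooseEncoding(Properties conf, String detected) {
        String override = conf.getProperty(KEY);
        return override != null ? override : detected;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(chooseEncoding(conf, "FAST_DIFF")); // auto-detected
        conf.setProperty(KEY, "PREFIX");                       // legacy override wins
        System.out.println(chooseEncoding(conf, "FAST_DIFF")); // prints PREFIX
    }
}
```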
[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations
[ https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961831#comment-13961831 ] Hadoop QA commented on HBASE-10883: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638965/HBASE-10883_6.patch against trunk revision . ATTACHMENT ID: 12638965 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9213//console This message is automatically generated. 
Restrict the universe of labels and authorizations -- Key: HBASE-10883 URL: https://issues.apache.org/jira/browse/HBASE-10883 Project: HBase Issue Type: Improvement Affects Versions: 0.98.1 Reporter: Andrew Purtell Assignee: ramkrishna.s.vasudevan Fix For: 0.99.0, 0.98.2 Attachments: HBASE-10883.patch, HBASE-10883_1.patch, HBASE-10883_2.patch, HBASE-10883_3.patch, HBASE-10883_4.patch, HBASE-10883_5.patch, HBASE-10883_6.patch Currently we allow any string as visibility label or request authorization. However, as seen on HBASE-10878, we accept authorization strings that would not work if provided as labels in visibility expressions. We should throw an exception at least in cases where someone tries to define or use a label or authorization including the visibility expression operators '&', '|', '!', '(', ')'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10917) Fix hbase book Tests page
[ https://issues.apache.org/jira/browse/HBASE-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961845#comment-13961845 ] bharath v commented on HBASE-10917: --- [~enis] The problem seems to be only with zsh [1]; it works fine on bash for me too (just checked). But it works with quotes in either of them, so this patch could help that subset of users. Thanks for committing it. [1] https://github.com/robbyrussell/oh-my-zsh Fix hbase book Tests page --- Key: HBASE-10917 URL: https://issues.apache.org/jira/browse/HBASE-10917 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 0.99.0 Reporter: bharath v Assignee: bharath v Priority: Trivial Fix For: 0.99.0 Attachments: HBASE-10917.trunk.v1.patch The command specified to run all tests under the package using a wildcard, mvn test -Dtest=org.apache.hadoop.hbase.client.*, doesn't work. Instead it should be mvn test '-Dtest=org.apache.hadoop.hbase.client.*'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10817) Add some tests on a real cluster for replica: multi master, replication
[ https://issues.apache.org/jira/browse/HBASE-10817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961851#comment-13961851 ] Nicolas Liochon commented on HBASE-10817: - Committed to hbase-10070, thanks for the help, Nick. Thanks for the review, Devaraj. Add some tests on a real cluster for replica: multi master, replication --- Key: HBASE-10817 URL: https://issues.apache.org/jira/browse/HBASE-10817 Project: HBase Issue Type: Sub-task Components: master, regionserver, Replication Affects Versions: hbase-10070 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: hbase-10070 Attachments: 10817.v1.patch, 10817.v2.patch, 10817.v3.patch, 10817.v4.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HBASE-10817) Add some tests on a real cluster for replica: multi master, replication
[ https://issues.apache.org/jira/browse/HBASE-10817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon resolved HBASE-10817. - Resolution: Fixed Hadoop Flags: Reviewed Add some tests on a real cluster for replica: multi master, replication --- Key: HBASE-10817 URL: https://issues.apache.org/jira/browse/HBASE-10817 Project: HBase Issue Type: Sub-task Components: master, regionserver, Replication Affects Versions: hbase-10070 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: hbase-10070 Attachments: 10817.v1.patch, 10817.v2.patch, 10817.v3.patch, 10817.v4.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961878#comment-13961878 ] Ted Yu commented on HBASE-10323: Created HBASE-10921 for the backport. Auto detect data block encoding in HFileOutputFormat Key: HBASE-10323 URL: https://issues.apache.org/jira/browse/HBASE-10323 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: Ishan Chhabra Assignee: Ishan Chhabra Fix For: 0.98.0, 0.99.0 Attachments: HBASE_10323-0.94.15-v1.patch, HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, HBASE_10323-0.94.15-v4.patch, HBASE_10323-0.94.15-v5.patch, HBASE_10323-trunk-v1.patch, HBASE_10323-trunk-v2.patch, HBASE_10323-trunk-v3.patch, HBASE_10323-trunk-v4.patch Currently, one has to specify the data block encoding of the table explicitly using the config parameter hbase.mapreduce.hfileoutputformat.datablock.encoding when doing a bulk load. This option is easily missed, not documented, and also works differently from compression, block size and bloom filter type, which are auto detected. The solution would be to add support to auto detect the data block encoding, similar to the other parameters. The current patch does the following: 1. Automatically detects data block encoding in HFileOutputFormat. 2. Keeps the legacy option of manually specifying the data block encoding around as a way to override auto detection. 3. Moves string conf parsing to the start of the program so that it fails fast during startup instead of failing during record writes. It also makes the internals of the program type safe. 4. Adds missing doc strings and unit tests for the code serializing and deserializing config parameters for bloom filter type, block size and data block encoding. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96
Ted Yu created HBASE-10921: -- Summary: Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96 Key: HBASE-10921 URL: https://issues.apache.org/jira/browse/HBASE-10921 Project: HBase Issue Type: Task Reporter: Ted Yu This issue is to backport auto detection of data block encoding in HFileOutputFormat to 0.94 and 0.96 branches. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-10018: Status: Open (was: Patch Available) Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.96.0, 0.98.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-10018: Status: Patch Available (was: Open) Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.96.0, 0.98.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-10018: Attachment: 10018.v6.patch Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961890#comment-13961890 ] Nicolas Liochon commented on HBASE-10018: - v6 fixes TestClientTimeouts TestVisibilityLabelsWithDistributedLogReplay works here, it's likely unrelated. Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
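The reverse-scan option discussed in this issue boils down to a single "greatest start key <= row" lookup against meta, replacing the exact-row call plus the prefetch call. In miniature (a sketch only, using TreeMap.floorEntry as a stand-in for the reverse scan over meta):

```java
import java.util.TreeMap;

public class MetaFloorLookup {
    // meta is sorted by region start key, so the region containing a row is
    // the entry with the greatest start key <= row: exactly one floor lookup,
    // which is what a single reverse scan against meta achieves in one RPC.
    static String regionFor(TreeMap<String, String> meta, String row) {
        return meta.floorEntry(row).getValue();
    }

    public static void main(String[] args) {
        TreeMap<String, String> meta = new TreeMap<>();
        meta.put("", "region-1");    // first region: empty start key
        meta.put("bbb", "region-2");
        meta.put("mmm", "region-3");
        System.out.println(regionFor(meta, "ccc")); // prints region-2
    }
}
```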
[jira] [Assigned] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96
[ https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kashif J S reassigned HBASE-10921: -- Assignee: Kashif J S Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96 -- Key: HBASE-10921 URL: https://issues.apache.org/jira/browse/HBASE-10921 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Kashif J S This issue is to backport auto detection of data block encoding in HFileOutputFormat to 0.94 and 0.96 branches. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96
[ https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961895#comment-13961895 ] Kashif J S commented on HBASE-10921: I have the patch ready for the 0.94 version, but there seems to be some Unknown Server error while uploading the patch. I will try again tomorrow. I will also submit the patch for 0.96 tomorrow. Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96 -- Key: HBASE-10921 URL: https://issues.apache.org/jira/browse/HBASE-10921 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Kashif J S This issue is to backport auto detection of data block encoding in HFileOutputFormat to 0.94 and 0.96 branches. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Work started] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96
[ https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-10921 started by Kashif J S. Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96 -- Key: HBASE-10921 URL: https://issues.apache.org/jira/browse/HBASE-10921 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Kashif J S This issue is to backport auto detection of data block encoding in HFileOutputFormat to 0.94 and 0.96 branches. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961898#comment-13961898 ] Enis Soztutar commented on HBASE-10018: --- +1 Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10917) Fix hbase book Tests page
[ https://issues.apache.org/jira/browse/HBASE-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961945#comment-13961945 ] Hudson commented on HBASE-10917: FAILURE: Integrated in HBase-TRUNK #5069 (See [https://builds.apache.org/job/HBase-TRUNK/5069/]) HBASE-10917 Fix hbase book Tests page (bharath v) (enis: rev 1585467) * /hbase/trunk/src/main/docbkx/developer.xml Fix hbase book Tests page --- Key: HBASE-10917 URL: https://issues.apache.org/jira/browse/HBASE-10917 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 0.99.0 Reporter: bharath v Assignee: bharath v Priority: Trivial Fix For: 0.99.0 Attachments: HBASE-10917.trunk.v1.patch The command specified to run all tests under the package using a wildcard, mvn test -Dtest=org.apache.hadoop.hbase.client.*, doesn't work. Instead it should be mvn test '-Dtest=org.apache.hadoop.hbase.client.*'. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10701) Cache invalidation improvements from client side
[ https://issues.apache.org/jira/browse/HBASE-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961952#comment-13961952 ] Nicolas Liochon commented on HBASE-10701: - I reviewed v4 and I'm +1; just a nit that can be fixed on commit:
{code}
Future<Result> f = cs.take();
if (f != null) {
  return f.get(); // great we got an answer
}
{code}
In the secondaries part, I don't think that cs.take can return null. Cache invalidation improvements from client side Key: HBASE-10701 URL: https://issues.apache.org/jira/browse/HBASE-10701 Project: HBase Issue Type: Sub-task Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: hbase-10070 Attachments: hbase-10701_v1.patch, hbase-10701_v1.patch, hbase-10701_v2.patch, hbase-10701_v3.patch, hbase-10701_v4.patch Running the integration test in HBASE-10572 and HBASE-10355, it seems that we need some changes for cache invalidation of meta entries from the client side in backup RPCs. Mainly, the RPCs made for replicas should not invalidate the cache for all the replicas (for example on RegionMovedException, connection error, etc.). -- This message was sent by Atlassian JIRA (v6.2#6252)
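The nit rests on the java.util.concurrent contract: CompletionService.take() blocks until a completed task is available and never returns null, so the null check above is dead code. A small runnable illustration:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TakeNeverNull {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
        cs.submit(() -> 42);
        // take() blocks until some submitted task has completed; by contract
        // it returns that task's Future and never null.
        Future<Integer> f = cs.take();
        System.out.println(f.get()); // prints 42
        pool.shutdown();
    }
}
```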
[jira] [Created] (HBASE-10922) Log splitting seems to hang
Jimmy Xiang created HBASE-10922: --- Summary: Log splitting seems to hang Key: HBASE-10922 URL: https://issues.apache.org/jira/browse/HBASE-10922 Project: HBase Issue Type: Bug Components: wal Reporter: Jimmy Xiang With distributed log replay enabled by default, I ran into an issue that log splitting hasn't completed after 13 hours. It seems to hang somewhere. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10922) Log splitting seems to hang
[ https://issues.apache.org/jira/browse/HBASE-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10922: Attachment: log-splitting_hang.png Attached the screen shot of the master web UI. Log splitting seems to hang --- Key: HBASE-10922 URL: https://issues.apache.org/jira/browse/HBASE-10922 Project: HBase Issue Type: Bug Components: wal Reporter: Jimmy Xiang Attachments: log-splitting_hang.png With distributed log replay enabled by default, I ran into an issue that log splitting hasn't completed after 13 hours. It seems to hang somewhere. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10419) Add multiget support to PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-10419: Fix Version/s: 0.96.2 0.98.1 Add multiget support to PerformanceEvaluation - Key: HBASE-10419 URL: https://issues.apache.org/jira/browse/HBASE-10419 Project: HBase Issue Type: Improvement Components: test Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-10419-0.96.patch, HBASE-10419-v2-trunk.patch, HBASE-10419-v3-trunk.patch, HBASE-10419-v4-trunk.patch, HBASE-10419.0.patch, HBASE-10419.1.patch Folks planning to use multiget may find this useful. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10419) Add multiget support to PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-10419: Fix Version/s: 0.99.0 Add multiget support to PerformanceEvaluation - Key: HBASE-10419 URL: https://issues.apache.org/jira/browse/HBASE-10419 Project: HBase Issue Type: Improvement Components: test Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-10419-0.96.patch, HBASE-10419-v2-trunk.patch, HBASE-10419-v3-trunk.patch, HBASE-10419-v4-trunk.patch, HBASE-10419.0.patch, HBASE-10419.1.patch Folks planning to use multiget may find this useful. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10419) Add multiget support to PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961974#comment-13961974 ] Nicolas Liochon commented on HBASE-10419: - Any issue if I commit this in hbase-10070? ping [~devaraj], [~enis] Add multiget support to PerformanceEvaluation - Key: HBASE-10419 URL: https://issues.apache.org/jira/browse/HBASE-10419 Project: HBase Issue Type: Improvement Components: test Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-10419-0.96.patch, HBASE-10419-v2-trunk.patch, HBASE-10419-v3-trunk.patch, HBASE-10419-v4-trunk.patch, HBASE-10419.0.patch, HBASE-10419.1.patch Folks planning to use multiget may find this useful. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-7319) Extend Cell usage through read path
[ https://issues.apache.org/jira/browse/HBASE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-7319: -- Attachment: HBASE-7319.patch Changes KeyValue to Cell in the read path; ScanQueryMatcher still uses KV. This patch helps BufferedDataBlockEncoders use Cell directly in comparisons and during kv.next() and kv.peek(). Extend Cell usage through read path --- Key: HBASE-7319 URL: https://issues.apache.org/jira/browse/HBASE-7319 Project: HBase Issue Type: Umbrella Components: Compaction, Performance, regionserver, Scanners Reporter: Matt Corgan Attachments: HBASE-7319.patch Umbrella issue for eliminating Cell copying. The Cell interface allows us to work with a reference to underlying bytes in the block cache without copying each Cell into consecutive bytes in an array (KeyValue). -- This message was sent by Atlassian JIRA (v6.2#6252)
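The copy-avoidance idea behind Cell can be sketched with a hypothetical view type (CellView is illustrative, not the HBase interface): a cell is just (array, offset, length) coordinates into a shared block, so comparisons read in place instead of materializing a KeyValue-style copy.

```java
import java.util.Arrays;

public class CellView {
    final byte[] block;   // shared backing block, e.g. a cached data block
    final int offset, length;

    CellView(byte[] block, int offset, int length) {
        this.block = block; this.offset = offset; this.length = length;
    }

    // Comparison reads straight from the shared buffer: no allocation, no copy.
    int compareTo(CellView other) {
        int n = Math.min(length, other.length);
        for (int i = 0; i < n; i++) {
            int d = (block[offset + i] & 0xff) - (other.block[other.offset + i] & 0xff);
            if (d != 0) return d;
        }
        return length - other.length;
    }

    // The old-style alternative: materialize a standalone copy, which is the
    // work the Cell interface lets the read path avoid.
    byte[] copy() { return Arrays.copyOfRange(block, offset, offset + length); }

    public static void main(String[] args) {
        byte[] block = {10, 20, 30, 40}; // one shared "data block"
        CellView a = new CellView(block, 0, 2);
        CellView b = new CellView(block, 2, 2);
        System.out.println(a.compareTo(b) < 0); // prints true
    }
}
```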
[jira] [Updated] (HBASE-7319) Extend Cell usage through read path
[ https://issues.apache.org/jira/browse/HBASE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-7319: -- Status: Patch Available (was: Open) Extend Cell usage through read path --- Key: HBASE-7319 URL: https://issues.apache.org/jira/browse/HBASE-7319 Project: HBase Issue Type: Umbrella Components: Compaction, Performance, regionserver, Scanners Reporter: Matt Corgan Attachments: HBASE-7319.patch Umbrella issue for eliminating Cell copying. The Cell interface allows us to work with a reference to underlying bytes in the block cache without copying each Cell into consecutive bytes in an array (KeyValue). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10569) Co-locate meta and master
[ https://issues.apache.org/jira/browse/HBASE-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961979#comment-13961979 ] Jimmy Xiang commented on HBASE-10569: - Meta regions of course can be assigned to other region servers too. As to Lars' concern, I was thinking of making it a load balancer decision about where to put meta regions. So it can be changed easily. Co-locate meta and master - Key: HBASE-10569 URL: https://issues.apache.org/jira/browse/HBASE-10569 Project: HBase Issue Type: Improvement Components: master, Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.99.0 Attachments: hbase-10569_v1.patch, hbase-10569_v2.patch, hbase-10569_v3.1.patch, hbase-10569_v3.patch I was thinking simplifying/improving the region assignments. The first step is to co-locate the meta and the master as many people agreed on HBASE-5487. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10419) Add multiget support to PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961983#comment-13961983 ] Nicolas Liochon commented on HBASE-10419: - There is a small defect btw: the multiget option does not show up in the printUsage. I will fix this as an addendum with the 10070 commit. Add multiget support to PerformanceEvaluation - Key: HBASE-10419 URL: https://issues.apache.org/jira/browse/HBASE-10419 Project: HBase Issue Type: Improvement Components: test Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-10419-0.96.patch, HBASE-10419-v2-trunk.patch, HBASE-10419-v3-trunk.patch, HBASE-10419-v4-trunk.patch, HBASE-10419.0.patch, HBASE-10419.1.patch Folks planning to use multiget may find this useful. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10791) Add integration test to demonstrate performance improvement
[ https://issues.apache.org/jira/browse/HBASE-10791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961985#comment-13961985 ] Nicolas Liochon commented on HBASE-10791: - It seems it has gone stale; it does not apply anymore. Is any help needed? Add integration test to demonstrate performance improvement --- Key: HBASE-10791 URL: https://issues.apache.org/jira/browse/HBASE-10791 Project: HBase Issue Type: Sub-task Components: Performance, test Affects Versions: hbase-10070 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Attachments: HBASE-10791.00.patch, HBASE-10791.01.patch, HBASE-10791.02.patch, HBASE-10791.03.patch, HBASE-10791.04.patch, IntegrationTestRegionReplicaPerf.out It would be good to demonstrate that use of region replicas reduces read latency. PerformanceEvaluation can be used manually for this purpose, but it's not able to use ChaosMonkey. An integration test can set up the monkey actions and automate execution. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10419) Add multiget support to PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961986#comment-13961986 ] Nicolas Liochon commented on HBASE-10419: - Hum, the multiget doc is in HBASE-10791 if I understand correctly. Add multiget support to PerformanceEvaluation - Key: HBASE-10419 URL: https://issues.apache.org/jira/browse/HBASE-10419 Project: HBase Issue Type: Improvement Components: test Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-10419-0.96.patch, HBASE-10419-v2-trunk.patch, HBASE-10419-v3-trunk.patch, HBASE-10419-v4-trunk.patch, HBASE-10419.0.patch, HBASE-10419.1.patch Folks planning to use multiget may find this useful. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10922) Log splitting seems to hang
[ https://issues.apache.org/jira/browse/HBASE-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10922: Attachment: master-log-grep.txt Log splitting seems to hang --- Key: HBASE-10922 URL: https://issues.apache.org/jira/browse/HBASE-10922 Project: HBase Issue Type: Bug Components: wal Reporter: Jimmy Xiang Attachments: log-splitting_hang.png, master-log-grep.txt With distributed log replay enabled by default, I ran into an issue that log splitting hasn't completed after 13 hours. It seems to hang somewhere. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961998#comment-13961998 ] Hadoop QA commented on HBASE-10018: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638985/10018.v6.patch against trunk revision . ATTACHMENT ID: 12638985 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: org.apache.hadoop.hbase.procedure.TestZKProcedure {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.master.TestMasterNoCluster.testNotPullingDeadRegionServerFromZK(TestMasterNoCluster.java:298) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9214//console This message is automatically generated. 
Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
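The "replace it with a reverse scan" option listed above works because locating the region that contains a row is a floor lookup on region start keys, which a single reverse scan over meta can answer without also fetching the next N regions. A hedged sketch of that lookup, modeled with a `TreeMap` instead of a real meta scan; all names here are illustrative, not HBase client API.

```java
import java.util.TreeMap;

public class RegionLookupSketch {
    // Maps region start key -> region name; "" is the first region's start key,
    // so floorEntry() never returns null for a non-null row key.
    private final TreeMap<String, String> regionsByStartKey = new TreeMap<>();

    void addRegion(String startKey, String regionName) {
        regionsByStartKey.put(startKey, regionName);
    }

    // One floor lookup replaces "get the exact row, then prefetch 10 more regions".
    String regionFor(String row) {
        return regionsByStartKey.floorEntry(row).getValue();
    }

    public static void main(String[] args) {
        RegionLookupSketch meta = new RegionLookupSketch();
        meta.addRegion("", "region-A");   // rows in [ "", "m" )
        meta.addRegion("m", "region-B");  // rows in [ "m", "t" )
        meta.addRegion("t", "region-C");  // rows in [ "t", +inf )
        System.out.println(meta.regionFor("apple")); // region-A
        System.out.println(meta.regionFor("mango")); // region-B
        System.out.println(meta.regionFor("zebra")); // region-C
    }
}
```

This is why the prefetch adds little: the answer for a given row is exactly one entry, and grabbing the next 10 regions only helps if the client soon touches those specific neighbors.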
[jira] [Updated] (HBASE-10921) Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96
[ https://issues.apache.org/jira/browse/HBASE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kashif J S updated HBASE-10921: --- Attachment: HBASE-10921-0.94-v1.patch Patch for 0.94 versions Port HBASE-10323 'Auto detect data block encoding in HFileOutputFormat' to 0.94 / 0.96 -- Key: HBASE-10921 URL: https://issues.apache.org/jira/browse/HBASE-10921 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Kashif J S Attachments: HBASE-10921-0.94-v1.patch This issue is to backport auto detection of data block encoding in HFileOutputFormat to 0.94 and 0.96 branches. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962010#comment-13962010 ] Nicolas Liochon edited comment on HBASE-10018 at 4/7/14 4:51 PM: - This looks like test flakiness. Commit is under way. was (Author: nkeywal): This look likes test flakiness. Commit is under way. Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962010#comment-13962010 ] Nicolas Liochon commented on HBASE-10018: - This looks like test flakiness. Commit is under way. Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10873) Control number of regions assigned to backup masters
[ https://issues.apache.org/jira/browse/HBASE-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10873: Attachment: hbase-10873.patch Control number of regions assigned to backup masters Key: HBASE-10873 URL: https://issues.apache.org/jira/browse/HBASE-10873 Project: HBase Issue Type: Improvement Components: Balancer Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.99.0 Attachments: hbase-10873.patch By default, a backup master is treated just like another regionserver. So it can host as many regions as other regionserver does. When the backup master becomes the active one, region balancer needs to move those user regions on this master to other region servers. To minimize the impact, it's better not to assign too many regions on backup masters. It may not be good to leave the backup masters idle and not host any region either. We should make this adjustable so that users can control how many regions to assign to each backup master. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10873) Control number of regions assigned to backup masters
[ https://issues.apache.org/jira/browse/HBASE-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10873: Status: Patch Available (was: Open) Attached the first patch. It's also on RB: https://reviews.apache.org/r/20088/ This patch introduces a weight concept for user regions assigned to the active/backup master. Adjusting the weight controls the number of user regions assigned to backup masters. The weight logic is used for 1) region balancing, 2) round robin assignment, 3) random assignment. When retaining assignment finds the original server is not available, the same random assignment logic based on weight is used to choose a new server. Control number of regions assigned to backup masters Key: HBASE-10873 URL: https://issues.apache.org/jira/browse/HBASE-10873 Project: HBase Issue Type: Improvement Components: Balancer Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.99.0 Attachments: hbase-10873.patch By default, a backup master is treated just like another regionserver. So it can host as many regions as other regionserver does. When the backup master becomes the active one, region balancer needs to move those user regions on this master to other region servers. To minimize the impact, it's better not to assign too many regions on backup masters. It may not be good to leave the backup masters idle and not host any region either. We should make this adjustable so that users can control how many regions to assign to each backup master. -- This message was sent by Atlassian JIRA (v6.2#6252)
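The weight idea described in the patch above can be sketched as weighted random server selection: each server carries a weight, backup masters get a configurable (possibly zero) weight, and random assignment picks servers in proportion to weight. This is an illustrative model only; the class and field names are made up and this is not the actual balancer code from hbase-10873.patch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class WeightedAssignSketch {
    static class Server {
        final String name;
        final double weight; // backup masters: 0.0 .. 1.0; regular region servers: 1.0
        Server(String name, double weight) { this.name = name; this.weight = weight; }
    }

    // Pick one server with probability proportional to its weight.
    static Server pick(List<Server> servers, Random rng) {
        double total = 0;
        for (Server s : servers) total += s.weight;
        double r = rng.nextDouble() * total;
        for (Server s : servers) {
            r -= s.weight;
            if (r < 0) return s;
        }
        return servers.get(servers.size() - 1); // numeric edge case fallback
    }

    public static void main(String[] args) {
        List<Server> servers = new ArrayList<>();
        servers.add(new Server("rs1", 1.0));
        servers.add(new Server("rs2", 1.0));
        servers.add(new Server("backup-master", 0.0)); // weight 0: hosts no user regions
        Random rng = new Random(42);
        for (int i = 0; i < 1000; i++) {
            // With weight 0, the backup master is never chosen for a user region.
            if (pick(servers, rng).name.equals("backup-master")) {
                throw new AssertionError("backup master should not be picked");
            }
        }
        System.out.println("backup master received 0 of 1000 regions");
    }
}
```

A weight of 0 leaves the backup master idle, 1.0 treats it like any region server, and values in between give the "not too many, not none" middle ground the issue description asks for.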
[jira] [Updated] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-10018: Resolution: Fixed Release Note: The location prefetch feature is removed from the code. The related interfaces are kept for backward compatibility, but do nothing. Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed, thanks for the reviews, Stack Enis. Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HBASE-10923) Control where to put meta region
Jimmy Xiang created HBASE-10923: --- Summary: Control where to put meta region Key: HBASE-10923 URL: https://issues.apache.org/jira/browse/HBASE-10923 Project: HBase Issue Type: Improvement Reporter: Jimmy Xiang There is a concern on placing meta regions on the master, as in the comments of HBASE-10569. I was thinking we should have a configuration for a load balancer to decide where to put it. By adjusting this configuration, we can control whether to put the meta on the master or on another region server. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10569) Co-locate meta and master
[ https://issues.apache.org/jira/browse/HBASE-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962019#comment-13962019 ] Jimmy Xiang commented on HBASE-10569: - I filed HBASE-10923 to make it configurable as to where to assign the meta region. Co-locate meta and master - Key: HBASE-10569 URL: https://issues.apache.org/jira/browse/HBASE-10569 Project: HBase Issue Type: Improvement Components: master, Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.99.0 Attachments: hbase-10569_v1.patch, hbase-10569_v2.patch, hbase-10569_v3.1.patch, hbase-10569_v3.patch I was thinking simplifying/improving the region assignments. The first step is to co-locate the meta and the master as many people agreed on HBASE-5487. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962020#comment-13962020 ] Jean-Daniel Cryans commented on HBASE-10018: I guess I'm 5 minutes too late, but in the future would it be possible to set a better jira title once we know what the patch is going to look like? In this case, location prefetch isn't a thing, it's meta prefetching or region location prefetching. Also it wasn't changed, it was removed. I think we can now close this issue? HBASE-6841 Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10018) Change the location prefetch
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962024#comment-13962024 ] Nicolas Liochon commented on HBASE-10018: - It's not too late to change the title at least. Let me do it. Change the location prefetch Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10018) Remove region location prefetching
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-10018: Summary: Remove region location prefetching (was: Change the location prefetch) Remove region location prefetching -- Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10911) ServerShutdownHandler#toString shows meaningless message
[ https://issues.apache.org/jira/browse/HBASE-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10911: Status: Patch Available (was: Open) ServerShutdownHandler#toString shows meaningless message Key: HBASE-10911 URL: https://issues.apache.org/jira/browse/HBASE-10911 Project: HBase Issue Type: Improvement Components: master Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Minor Attachments: hbase-10911.patch SSH#toString returns the master server name, which is not so interesting. It's better to show the dead server's name instead. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HBASE-10902) Make Secure Bulk Load work across remote secure clusters
[ https://issues.apache.org/jira/browse/HBASE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jerry He updated HBASE-10902: - Attachment: HBASE-10902-v2-trunk.patch Make Secure Bulk Load work across remote secure clusters Key: HBASE-10902 URL: https://issues.apache.org/jira/browse/HBASE-10902 Project: HBase Issue Type: Improvement Affects Versions: 0.96.1 Reporter: Jerry He Assignee: Jerry He Fix For: 0.99.0 Attachments: HBASE-10902-v0-0.96.patch, HBASE-10902-v1-trunk.patch, HBASE-10902-v2-trunk.patch Two secure clusters, both with kerberos enabled. Run bulk load on one cluster to load files from another cluster. biadmin@hdtest249:~ hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c TestTable_rr Bulk load failed. In the region server log: {code} 2014-04-02 20:04:56,361 ERROR org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint: Failed to complete bulk load java.lang.IllegalArgumentException: Wrong FS: hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c/info/6b44ca48aebf48d98cb3491f512c41a7, expected: hdfs://hdtest249.svl.ibm.com:9000 at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651) at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181) at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92) at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1248) at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1244) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1244) at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:233) at 
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:223) at java.security.AccessController.doPrivileged(AccessController.java:300) at javax.security.auth.Subject.doAs(Subject.java:494) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1482) at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:223) at org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4631) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5088) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3219) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26933) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854) {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10018) Remove region location prefetching
[ https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962037#comment-13962037 ] Nicolas Liochon commented on HBASE-10018: - I'm not sure for HBASE-6841: from the last comment from Amit, it could be that the bottleneck is in the meta server, and the lock could be just a red herring? On the other hand, if disabling the prefetch worked for you, then yes, HBASE-6841 is solved. Remove region location prefetching -- Key: HBASE-10018 URL: https://issues.apache.org/jira/browse/HBASE-10018 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.99.0 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 10018.v5.patch, 10018.v6.patch, 10018v3.patch Issues with prefetching are: - we do two calls to meta: one for the exact row, one for the prefetch - it's done in a lock - we take the next 10 regions. Why 10, why the 10 next? - is it useful if the table has 100K regions? Options are: - just remove it - replace it with a reverse scan: this would save a call -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10902) Make Secure Bulk Load work across remote secure clusters
[ https://issues.apache.org/jira/browse/HBASE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962038#comment-13962038 ] Jerry He commented on HBASE-10902: -- Attached v2-trunk to address the 1 new Findbugs warning: Added parameter 'conf' in SecureBulkLoadListener constructor. Make Secure Bulk Load work across remote secure clusters Key: HBASE-10902 URL: https://issues.apache.org/jira/browse/HBASE-10902 Project: HBase Issue Type: Improvement Affects Versions: 0.96.1 Reporter: Jerry He Assignee: Jerry He Fix For: 0.99.0 Attachments: HBASE-10902-v0-0.96.patch, HBASE-10902-v1-trunk.patch, HBASE-10902-v2-trunk.patch Two secure clusters, both with kerberos enabled. Run bulk load on one cluster to load files from another cluster. biadmin@hdtest249:~ hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c TestTable_rr Bulk load failed. In the region server log: {code} 2014-04-02 20:04:56,361 ERROR org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint: Failed to complete bulk load java.lang.IllegalArgumentException: Wrong FS: hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c/info/6b44ca48aebf48d98cb3491f512c41a7, expected: hdfs://hdtest249.svl.ibm.com:9000 at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651) at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181) at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92) at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1248) at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1244) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1244) at 
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:233) at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:223) at java.security.AccessController.doPrivileged(AccessController.java:300) at javax.security.auth.Subject.doAs(Subject.java:494) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1482) at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:223) at org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4631) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5088) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3219) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26933) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854) {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
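The "Wrong FS" IllegalArgumentException in the stack trace above fires because a Hadoop FileSystem instance rejects paths whose scheme/authority differ from its own: an hdfs:// path on a remote cluster must be resolved against that cluster's filesystem rather than the local default FS. A simplified model of that check using plain `java.net.URI`; this is not the actual Hadoop `FileSystem.checkPath` code, only an illustration of its behavior.

```java
import java.net.URI;

public class WrongFsSketch {
    // Mimics the checkPath idea: a path must match the filesystem's scheme and
    // authority, or carry no scheme at all (a relative path).
    static void checkPath(URI fsUri, URI path) {
        if (path.getScheme() == null) return; // relative path: always accepted
        boolean sameScheme = fsUri.getScheme().equalsIgnoreCase(path.getScheme());
        boolean sameAuthority = fsUri.getAuthority() != null
                && fsUri.getAuthority().equalsIgnoreCase(path.getAuthority());
        if (!sameScheme || !sameAuthority) {
            throw new IllegalArgumentException(
                "Wrong FS: " + path + ", expected: " + fsUri);
        }
    }

    public static void main(String[] args) {
        URI localFs = URI.create("hdfs://hdtest249.svl.ibm.com:9000");
        URI remotePath =
            URI.create("hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/file");
        try {
            checkPath(localFs, remotePath); // remote authority: rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // mirrors the error in the log
        }
        // Resolving against the path's own filesystem URI passes the check.
        URI remoteFs = URI.create("hdfs://bdvm197.svl.ibm.com:9000");
        checkPath(remoteFs, remotePath); // no exception
    }
}
```

This is consistent with the fix direction in the comment above: once the endpoint derives the filesystem from the source path's own configuration rather than assuming the default FS, the authority check passes for remote clusters.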
[jira] [Updated] (HBASE-10911) ServerShutdownHandler#toString shows meaningless message
[ https://issues.apache.org/jira/browse/HBASE-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-10911: Attachment: hbase-10911.patch Attached a patch that uses the server name of the dead server instead of the master. ServerShutdownHandler#toString shows meaningless message Key: HBASE-10911 URL: https://issues.apache.org/jira/browse/HBASE-10911 Project: HBase Issue Type: Improvement Components: master Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Minor Attachments: hbase-10911.patch SSH#toString returns the master server name, which is not so interesting. It's better to show the dead server's name instead. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10419) Add multiget support to PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962055#comment-13962055 ] Devaraj Das commented on HBASE-10419: - bq. Any issue if I commit this in hbase-10070? ping Devaraj Das, Enis Soztutar +1 for commit Add multiget support to PerformanceEvaluation - Key: HBASE-10419 URL: https://issues.apache.org/jira/browse/HBASE-10419 Project: HBase Issue Type: Improvement Components: test Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Fix For: 0.96.2, 0.98.1, 0.99.0 Attachments: HBASE-10419-0.96.patch, HBASE-10419-v2-trunk.patch, HBASE-10419-v3-trunk.patch, HBASE-10419-v4-trunk.patch, HBASE-10419.0.patch, HBASE-10419.1.patch Folks planning to use multiget may find this useful. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-7319) Extend Cell usage through read path
[ https://issues.apache.org/jira/browse/HBASE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962064#comment-13962064 ] Hadoop QA commented on HBASE-7319: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12639008/HBASE-7319.patch against trunk revision . ATTACHMENT ID: 12639008 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 45 new or modified tests. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 7 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: +return Bytes.equals(left.getQualifierArray(), left.getQualifierOffset(), left.getQualifierLength(), +int len = KeyValue.writeByteArray(buffer, boffset, row, roffset, rlength, family, foffset, flength, + (nextJoinedKv != null CellUtil.matchingRow(nextJoinedKv, currentRow, offset, length)) + || (this.joinedHeap.requestSeek(KeyValueUtil.createFirstOnRow(currentRow, offset, length), {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor org.apache.hadoop.hbase.regionserver.TestReversibleScanners org.apache.hadoop.hbase.client.TestFromClientSide Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9215//console This message is automatically generated. 
Extend Cell usage through read path --- Key: HBASE-7319 URL: https://issues.apache.org/jira/browse/HBASE-7319 Project: HBase Issue Type: Umbrella Components: Compaction, Performance, regionserver, Scanners Reporter: Matt Corgan Attachments: HBASE-7319.patch Umbrella issue for eliminating Cell copying. The Cell interface allows us to work with a reference to underlying bytes in the block cache without copying each Cell into consecutive bytes in an array (KeyValue). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10911) ServerShutdownHandler#toString shows meaningless message
[ https://issues.apache.org/jira/browse/HBASE-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962083#comment-13962083 ] Matteo Bertozzi commented on HBASE-10911: - +1 looks good to me ServerShutdownHandler#toString shows meaningless message Key: HBASE-10911 URL: https://issues.apache.org/jira/browse/HBASE-10911 Project: HBase Issue Type: Improvement Components: master Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Minor Attachments: hbase-10911.patch SSH#toString returns the master server name, which is not so interesting. It's better to show the dead server's name instead. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10902) Make Secure Bulk Load work across remote secure clusters
[ https://issues.apache.org/jira/browse/HBASE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962101#comment-13962101 ] Jerry He commented on HBASE-10902: -- Successfully tested these scenarios: 1. Non-secure cluster. Loaded successfully. 2. One secure cluster: a) an admin user that has the required table permission on 'TestTable'. Loaded successfully. b) 'user1', who does not have table permission on 'TestTable'. Load failed with an insufficient-permission error. c) After granting 'user1' 'RW' permission on 'TestTable', load succeeded. 3. Two secure clusters A and B. Load on cluster A with the input dir pointing to cluster B. Same test cases as 2. above. Make Secure Bulk Load work across remote secure clusters Key: HBASE-10902 URL: https://issues.apache.org/jira/browse/HBASE-10902 Project: HBase Issue Type: Improvement Affects Versions: 0.96.1 Reporter: Jerry He Assignee: Jerry He Fix For: 0.99.0 Attachments: HBASE-10902-v0-0.96.patch, HBASE-10902-v1-trunk.patch, HBASE-10902-v2-trunk.patch Two secure clusters, both with Kerberos enabled. Run bulk load on one cluster to load files from another cluster. biadmin@hdtest249:~ hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c TestTable_rr Bulk load failed. 
In the region server log: {code} 2014-04-02 20:04:56,361 ERROR org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint: Failed to complete bulk load java.lang.IllegalArgumentException: Wrong FS: hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c/info/6b44ca48aebf48d98cb3491f512c41a7, expected: hdfs://hdtest249.svl.ibm.com:9000 at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651) at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181) at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92) at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1248) at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1244) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1244) at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:233) at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:223) at java.security.AccessController.doPrivileged(AccessController.java:300) at javax.security.auth.Subject.doAs(Subject.java:494) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1482) at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:223) at org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4631) at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5088) at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3219) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26933) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854) {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
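The "Wrong FS" failure above comes from Hadoop's FileSystem.checkPath, which rejects any fully-qualified path whose scheme or authority differs from the filesystem instance it was issued against. A minimal Python sketch of that check, purely as an illustration of the behavior (this is not HBase or Hadoop code):

```python
from urllib.parse import urlparse

def check_path(fs_uri, path):
    """Sketch of the check behind the error: a filesystem client bound to one
    namenode rejects fully-qualified paths that point at a different one."""
    fs, p = urlparse(fs_uri), urlparse(path)
    if p.scheme and (p.scheme != fs.scheme or p.netloc != fs.netloc):
        raise ValueError("Wrong FS: %s, expected: %s://%s"
                         % (path, fs.scheme, fs.netloc))
    return path

# The same mismatch as in the stack trace: a path qualified against the
# remote cluster, checked by the local cluster's filesystem.
local_fs = "hdfs://hdtest249.svl.ibm.com:9000"
remote_path = "hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable"
try:
    check_path(local_fs, remote_path)
except ValueError as e:
    print(e)  # the mismatched authority produces the same kind of message
```

This suggests why a fix would resolve the staging filesystem from the source path's own URI rather than assuming the destination cluster's default filesystem, though the exact approach taken by the attached patches is not shown here.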
[jira] [Created] (HBASE-10924) [region_mover]: Adjust region_mover script to retry unloading a server a configurable number of times in case of region splits/merges
Aleksandr Shulman created HBASE-10924: - Summary: [region_mover]: Adjust region_mover script to retry unloading a server a configurable number of times in case of region splits/merges Key: HBASE-10924 URL: https://issues.apache.org/jira/browse/HBASE-10924 Project: HBase Issue Type: Bug Components: Region Assignment Affects Versions: 0.94.15 Reporter: Aleksandr Shulman Assignee: Aleksandr Shulman Fix For: 0.94.19 Observed behavior: In about 5% of cases, my rolling upgrade tests fail because of stuck regions during a region server unload. My theory is that this occurs when region assignment information changes between the time the region list is generated, and the time when the region is to be moved. An example of such a region information change is a split or merge. Example: Regionserver A has 100 regions (#0-#99). The balancer is turned off and the regionmover script is called to unload this regionserver. The regionmover script will generate the list of 100 regions to be moved and then proceed down that list, moving the regions off in series. However, there is a region, #84, that has split into two daughter regions while regions 0-83 were moved. The script will be stuck trying to move #84, timeout, and then the failure will bubble up (attempt 1 failed). Proposed solution: This specific failure mode should be caught and the region_mover script should now attempt to move off all the regions. Now, it will have 16+1 (due to split) regions to move. There is a good chance that it will be able to move all 17 off without issues. However, should it encounter this same issue (attempt 2 failed), it will retry again. This process will continue until the maximum number of unload retry attempts has been reached. This is not foolproof, but let's say for the sake of argument that 5% of unload attempts hit this issue, then with a retry count of 3, it will reduce the unload failure probability from 0.05 to 0.000125 (0.05^3). Next steps: I am looking for feedback on this approach. 
If it seems like a sensible approach, I will create a strawman patch and test it. -- This message was sent by Atlassian JIRA (v6.2#6252)
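The failure-probability arithmetic in the proposal (0.05 per attempt with 3 retries giving 0.05^3 = 0.000125) assumes attempts fail independently. A tiny sketch of the retry loop and the math, using hypothetical helper names rather than the actual region_mover code:

```python
def unload_with_retries(unload_once, max_attempts=3):
    """Retry the whole unload when an attempt hits a split/merge mid-move.
    `unload_once` is a hypothetical callable returning True on success."""
    for _ in range(max_attempts):
        if unload_once():
            return True
    return False

def residual_failure_probability(p_fail, max_attempts):
    # Under independence, the overall unload fails only if every attempt fails.
    return p_fail ** max_attempts

# 5% per-attempt failure, 3 attempts: 0.05 ** 3 == 0.000125
assert abs(residual_failure_probability(0.05, 3) - 0.000125) < 1e-12
```

In practice attempts are not fully independent (a cluster that is splitting heavily will keep splitting), so 0.05^3 is a best-case estimate, but the retry cap still bounds how long the script can spin.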
[jira] [Commented] (HBASE-10924) [region_mover]: Adjust region_mover script to retry unloading a server a configurable number of times in case of region splits/merges
[ https://issues.apache.org/jira/browse/HBASE-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962112#comment-13962112 ] Aleksandr Shulman commented on HBASE-10924: --- That seems like a good place to put that logic since it'll be easier to maintain. As a bonus, we'll have implicit compatibility checks at compile time :) Only concern is that we don't break the shell api, but that shouldn't be difficult to maintain. [region_mover]: Adjust region_mover script to retry unloading a server a configurable number of times in case of region splits/merges - Key: HBASE-10924 URL: https://issues.apache.org/jira/browse/HBASE-10924 Project: HBase Issue Type: Bug Components: Region Assignment Affects Versions: 0.94.15 Reporter: Aleksandr Shulman Assignee: Aleksandr Shulman Labels: region_mover, rolling_upgrade Fix For: 0.94.19 Observed behavior: In about 5% of cases, my rolling upgrade tests fail because of stuck regions during a region server unload. My theory is that this occurs when region assignment information changes between the time the region list is generated, and the time when the region is to be moved. An example of such a region information change is a split or merge. Example: Regionserver A has 100 regions (#0-#99). The balancer is turned off and the regionmover script is called to unload this regionserver. The regionmover script will generate the list of 100 regions to be moved and then proceed down that list, moving the regions off in series. However, there is a region, #84, that has split into two daughter regions while regions 0-83 were moved. The script will be stuck trying to move #84, timeout, and then the failure will bubble up (attempt 1 failed). Proposed solution: This specific failure mode should be caught and the region_mover script should now attempt to move off all the regions. Now, it will have 16+1 (due to split) regions to move. 
There is a good chance that it will be able to move all 17 off without issues. However, should it encounter this same issue (attempt 2 failed), it will retry again. This process will continue until the maximum number of unload retry attempts has been reached. This is not foolproof, but let's say for the sake of argument that 5% of unload attempts hit this issue, then with a retry count of 3, it will reduce the unload failure probability from 0.05 to 0.000125 (0.05^3). Next steps: I am looking for feedback on this approach. If it seems like a sensible approach, I will create a strawman patch and test it. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10924) [region_mover]: Adjust region_mover script to retry unloading a server a configurable number of times in case of region splits/merges
[ https://issues.apache.org/jira/browse/HBASE-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962104#comment-13962104 ] Jean-Marc Spaggiari commented on HBASE-10924: - Should we use this opportunity to move that into a Java class instead of a Ruby script? [region_mover]: Adjust region_mover script to retry unloading a server a configurable number of times in case of region splits/merges - Key: HBASE-10924 URL: https://issues.apache.org/jira/browse/HBASE-10924 Project: HBase Issue Type: Bug Components: Region Assignment Affects Versions: 0.94.15 Reporter: Aleksandr Shulman Assignee: Aleksandr Shulman Labels: region_mover, rolling_upgrade Fix For: 0.94.19 Observed behavior: In about 5% of cases, my rolling upgrade tests fail because of stuck regions during a region server unload. My theory is that this occurs when region assignment information changes between the time the region list is generated, and the time when the region is to be moved. An example of such a region information change is a split or merge. Example: Regionserver A has 100 regions (#0-#99). The balancer is turned off and the regionmover script is called to unload this regionserver. The regionmover script will generate the list of 100 regions to be moved and then proceed down that list, moving the regions off in series. However, there is a region, #84, that has split into two daughter regions while regions 0-83 were moved. The script will be stuck trying to move #84, timeout, and then the failure will bubble up (attempt 1 failed). Proposed solution: This specific failure mode should be caught and the region_mover script should now attempt to move off all the regions. Now, it will have 16+1 (due to split) regions to move. There is a good chance that it will be able to move all 17 off without issues. However, should it encounter this same issue (attempt 2 failed), it will retry again. 
This process will continue until the maximum number of unload retry attempts has been reached. This is not foolproof, but let's say for the sake of argument that 5% of unload attempts hit this issue, then with a retry count of 3, it will reduce the unload failure probability from 0.05 to 0.000125 (0.05^3). Next steps: I am looking for feedback on this approach. If it seems like a sensible approach, I will create a strawman patch and test it. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HBASE-10816) CatalogTracker abortable usage
[ https://issues.apache.org/jira/browse/HBASE-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang resolved HBASE-10816. - Resolution: Won't Fix I looked into it, and prefer not to do anything for now. CatalogTracker abortable usage -- Key: HBASE-10816 URL: https://issues.apache.org/jira/browse/HBASE-10816 Project: HBase Issue Type: Bug Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Minor In reviewing the patch for HBASE-10569, Stack pointed out some existing issues with CatalogTracker. I looked into it and I think the abortable usage can be improved. * If ZK is null, when a new one is created, the abortable could be null. We need to consider this. * The throwableAborter is meant to abort the process in case of a ZK exception in MetaRegionTracker. If the tracker is in a server, we don't need to do this; we can use the server as the abortable. If the tracker is in a client, we can just abort the connection. Right? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10873) Control number of regions assigned to backup masters
[ https://issues.apache.org/jira/browse/HBASE-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962128#comment-13962128 ] Hadoop QA commented on HBASE-10873: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12639016/hbase-10873.patch against trunk revision . ATTACHMENT ID: 12639016 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 12 new or modified tests. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.procedure.TestZKProcedure Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9216//console This message is automatically generated. Control number of regions assigned to backup masters Key: HBASE-10873 URL: https://issues.apache.org/jira/browse/HBASE-10873 Project: HBase Issue Type: Improvement Components: Balancer Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.99.0 Attachments: hbase-10873.patch By default, a backup master is treated just like another regionserver. 
So it can host as many regions as any other regionserver does. When the backup master becomes the active one, the region balancer needs to move the user regions on this master to other region servers. To minimize that impact, it's better not to assign too many regions to backup masters. It may not be good to leave the backup masters idle, hosting no regions, either. We should make this adjustable so that users can control how many regions to assign to each backup master. -- This message was sent by Atlassian JIRA (v6.2#6252)
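As a sketch of the adjustable cap being proposed (the function and parameter names here are hypothetical, not the actual balancer API), a round-robin assignment that stops giving a backup master more regions once it reaches a configured limit:

```python
def assign_regions(regions, regionservers, backup_masters, backup_master_cap):
    """Round-robin assignment in which a backup master stops receiving
    regions once it holds backup_master_cap of them (illustrative sketch)."""
    servers = list(regionservers) + list(backup_masters)
    backups = set(backup_masters)
    assignment = {s: [] for s in servers}
    i = 0
    for region in regions:
        for _ in range(len(servers)):  # find the next eligible server
            s = servers[i % len(servers)]
            i += 1
            if s in backups and len(assignment[s]) >= backup_master_cap:
                continue  # this backup master is already at its cap
            assignment[s].append(region)
            break
    return assignment
```

With a cap of 0 the backup masters stay idle; with no effective cap they behave like ordinary regionservers, which matches the two extremes the description argues against.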
[jira] [Commented] (HBASE-7767) Get rid of ZKTable, and table enable/disable state in ZK
[ https://issues.apache.org/jira/browse/HBASE-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962135#comment-13962135 ] Mikhail Antonov commented on HBASE-7767: right, agreed Get rid of ZKTable, and table enable/disable state in ZK - Key: HBASE-7767 URL: https://issues.apache.org/jira/browse/HBASE-7767 Project: HBase Issue Type: Sub-task Components: Zookeeper Affects Versions: 0.95.2 Reporter: Enis Soztutar Assignee: Enis Soztutar As discussed table state in zookeeper for enable/disable state breaks our zookeeper contract. It is also very intrusive, used from the client side, master and region servers. We should get rid of it. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10923) Control where to put meta region
[ https://issues.apache.org/jira/browse/HBASE-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962148#comment-13962148 ] Mikhail Antonov commented on HBASE-10923: - How will it work when a custom LB is plugged in? Will this configuration only affect the default LB's behavior? Control where to put meta region Key: HBASE-10923 URL: https://issues.apache.org/jira/browse/HBASE-10923 Project: HBase Issue Type: Improvement Reporter: Jimmy Xiang There is a concern about placing meta regions on the master, as in the comments of HBASE-10569. I was thinking we should have a configuration for a load balancer to decide where to put them. By adjusting this configuration, we can control whether to put the meta region on the master or on another region server. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HBASE-10923) Control where to put meta region
[ https://issues.apache.org/jira/browse/HBASE-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962153#comment-13962153 ] Jimmy Xiang commented on HBASE-10923: - Right, it should affect only the default LB's behavior. A custom LB can make its own decisions on region placement. Control where to put meta region Key: HBASE-10923 URL: https://issues.apache.org/jira/browse/HBASE-10923 Project: HBase Issue Type: Improvement Reporter: Jimmy Xiang There is a concern about placing meta regions on the master, as in the comments of HBASE-10569. I was thinking we should have a configuration for a load balancer to decide where to put them. By adjusting this configuration, we can control whether to put the meta region on the master or on another region server. -- This message was sent by Atlassian JIRA (v6.2#6252)