[jira] [Commented] (HBASE-14425) In Secure Zookeeper cluster superuser will not have sufficient permission if multiple values are configured in "hbase.superuser"
[ https://issues.apache.org/jira/browse/HBASE-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746941#comment-14746941 ] Anoop Sam John commented on HBASE-14425: bq. currently ZK doesn't support ACL setting for groups Ping [~rakeshr]. > In Secure Zookeeper cluster superuser will not have sufficient permission if > multiple values are configured in "hbase.superuser" > > > Key: HBASE-14425 > URL: https://issues.apache.org/jira/browse/HBASE-14425 > Project: HBase > Issue Type: Bug > Reporter: Pankaj Kumar > Assignee: Pankaj Kumar > Fix For: 2.0.0 > > Attachments: HBASE-14425.patch > > > During master initialization we are setting ACLs for the znodes. > In ZKUtil.createACL(ZooKeeperWatcher zkw, String node, boolean > isSecureZooKeeper), > {code} > String superUser = zkw.getConfiguration().get("hbase.superuser"); > ArrayList acls = new ArrayList(); > // add permission to hbase supper user > if (superUser != null) { > acls.add(new ACL(Perms.ALL, new Id("auth", superUser))); > } > {code} > Here we are directly setting the "hbase.superuser" value on the znode, which > will cause an issue when multiple values are configured. In "hbase.superuser", > multiple superusers and supergroups can be configured, separated by commas. We > need to iterate over them and set an ACL for each. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
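The fix direction described in HBASE-14425 above — iterating the comma-separated "hbase.superuser" value instead of handing the whole string to ZooKeeper as one auth Id — can be sketched as follows. This is a hedged illustration, not the attached HBASE-14425.patch: {{SimpleAcl}} and {{buildSuperUserAcls}} are hypothetical stand-ins for ZooKeeper's {{ACL}}/{{Id}} classes and the ZKUtil code, and group entries (prefixed with "@") would still need separate handling since, per the comment above, ZK doesn't support ACL setting for groups.

```java
import java.util.ArrayList;
import java.util.List;

public class SuperUserAcl {
    // ZooKeeper Perms.ALL = READ|WRITE|CREATE|DELETE|ADMIN = 0x1f
    static final int PERMS_ALL = 0x1f;

    // Hypothetical stand-in for org.apache.zookeeper.data.ACL + Id.
    static final class SimpleAcl {
        final int perms;
        final String scheme;
        final String id;
        SimpleAcl(int perms, String scheme, String id) {
            this.perms = perms;
            this.scheme = scheme;
            this.id = id;
        }
    }

    // Grant Perms.ALL to each configured superuser/supergroup individually,
    // rather than to the raw comma-separated string as a single principal.
    static List<SimpleAcl> buildSuperUserAcls(String superUserConf) {
        List<SimpleAcl> acls = new ArrayList<>();
        if (superUserConf == null) {
            return acls;
        }
        for (String entry : superUserConf.split(",")) {
            String user = entry.trim();
            if (!user.isEmpty()) {
                acls.add(new SimpleAcl(PERMS_ALL, "auth", user));
            }
        }
        return acls;
    }
}
```

With hbase.superuser=admin,@hbaseops this yields two ACL entries instead of one entry for the literal string "admin,@hbaseops", which no principal would ever match.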
[jira] [Commented] (HBASE-14400) Fix HBase RPC protection documentation
[ https://issues.apache.org/jira/browse/HBASE-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746977#comment-14746977 ] Hudson commented on HBASE-14400: FAILURE: Integrated in HBase-TRUNK #6811 (See [https://builds.apache.org/job/HBase-TRUNK/6811/]) HBASE-14400 Fix HBase RPC protection documentation (apurtell: rev fe2c4f630d3b5f3346c9ee9f95c256186c9e6907) * src/main/asciidoc/_chapters/security.adoc * hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslClientHandler.java * hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java * hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java * hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestHBaseSaslRpcClient.java > Fix HBase RPC protection documentation > -- > > Key: HBASE-14400 > URL: https://issues.apache.org/jira/browse/HBASE-14400 > Project: HBase > Issue Type: Bug > Components: encryption, rpc, security > Reporter: Apekshit Sharma > Assignee: Apekshit Sharma > Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3 > > Attachments: HBASE-14400-branch-0.98.patch, > HBASE-14400-branch-1.0.patch, HBASE-14400-branch-1.1.patch, > HBASE-14400-branch-1.2.patch, HBASE-14400-master-v2.patch, > HBASE-14400-master.patch > > > The HBase configuration 'hbase.rpc.protection' can be set to 'authentication', > 'integrity' or 'privacy'. > "authentication means authentication only and no integrity or privacy; > integrity implies > authentication and integrity are enabled; and privacy implies all of > authentication, integrity and privacy are enabled." > However, the HBase ref guide incorrectly suggests in some places setting the value > to 'auth-conf' instead of 'privacy'. Setting the value to 'auth-conf' doesn't > provide RPC encryption, which is what the user wants. 
> This jira will fix: > - documentation: change 'auth-conf' references to 'privacy' > - SaslUtil to support both sets of values (privacy/integrity/authentication > and auth-conf/auth-int/auth) so as to be backward compatible with what was being > suggested until now. > - change 'hbase.thrift.security.qop' to be consistent with other similar > configurations by using the same set of values (privacy/integrity/authentication). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
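The backward-compatibility point above — SaslUtil accepting both sets of values — amounts to mapping either naming scheme onto the underlying SASL QOP token. A hedged sketch follows (illustrative only, not the actual SaslUtil code; {{toSaslQop}} is a hypothetical name):

```java
import java.util.Locale;

public class QopMapper {
    // Maps both the HBase-level names (authentication/integrity/privacy)
    // and the raw SASL QOP tokens (auth/auth-int/auth-conf) onto the
    // SASL QOP string that is actually negotiated on the wire.
    static String toSaslQop(String value) {
        switch (value.toLowerCase(Locale.ROOT)) {
            case "authentication":
            case "auth":
                return "auth";      // authentication only
            case "integrity":
            case "auth-int":
                return "auth-int";  // authentication + integrity
            case "privacy":
            case "auth-conf":
                return "auth-conf"; // authentication + integrity + privacy
            default:
                throw new IllegalArgumentException(
                    "Unknown hbase.rpc.protection value: " + value);
        }
    }
}
```

Both 'privacy' and the legacy 'auth-conf' resolve to the SASL "auth-conf" QOP, which is what actually enables RPC encryption.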
[jira] [Commented] (HBASE-14434) Merge of HBASE-7332 to 0.98 dropped a hunk
[ https://issues.apache.org/jira/browse/HBASE-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746944#comment-14746944 ] Hudson commented on HBASE-14434: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1075 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1075/]) HBASE-14434 Merge of HBASE-7332 to 0.98 dropped a hunk (apurtell: rev 13af5d2a24deacf30f4665b78d7316718ab47191) * hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java > Merge of HBASE-7332 to 0.98 dropped a hunk > -- > > Key: HBASE-14434 > URL: https://issues.apache.org/jira/browse/HBASE-14434 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.11 >Reporter: Andrew Purtell >Assignee: Andrew Purtell > Fix For: 0.98.15 > > Attachments: HBASE-14434-0.98.patch > > > The merge of HBASE-7332 to 0.98 dropped a hunk. Spotted by [~cuijianwei] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-14442) MultiTableInputFormatBase.getSplits function does not build split for edge row
Nathan created HBASE-14442: -- Summary: MultiTableInputFormatBase.getSplits function does not build split for edge row Key: HBASE-14442 URL: https://issues.apache.org/jira/browse/HBASE-14442 Project: HBase Issue Type: Bug Components: mapreduce Affects Versions: 1.1.2 Reporter: Nathan I created a Scan whose startRow and stopRow are the same as a region's startRow, then I found that no map split is built. The following is the source code of this condition: (startRow.length == 0 || keys.getSecond()[i].length == 0 || Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && (stopRow.length == 0 || Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0) I think a "=" should be added to the "<" (making it "<="). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-7332) [webui] HMaster webui should display the number of regions a table has.
[ https://issues.apache.org/jira/browse/HBASE-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746946#comment-14746946 ] Hudson commented on HBASE-7332: --- FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1075 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1075/]) HBASE-14434 Merge of HBASE-7332 to 0.98 dropped a hunk (apurtell: rev 13af5d2a24deacf30f4665b78d7316718ab47191) * hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java > [webui] HMaster webui should display the number of regions a table has. > --- > > Key: HBASE-7332 > URL: https://issues.apache.org/jira/browse/HBASE-7332 > Project: HBase > Issue Type: Bug > Components: UI >Affects Versions: 2.0.0, 1.1.0 >Reporter: Jonathan Hsieh >Assignee: Andrey Stepachev >Priority: Minor > Labels: beginner, operability > Fix For: 2.0.0, 1.1.0, 0.98.11 > > Attachments: HBASE-7332-0.98.patch, HBASE-7332.patch, > HBASE-7332.patch, Screen Shot 2014-07-28 at 4.10.01 PM.png, Screen Shot > 2015-02-03 at 9.23.57 AM.png > > > Pre-0.96/trunk hbase displayed the number of regions per table in the table > listing. Would be good to have this back. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14207) Region was hijacked and remained in transition when RS failed to open a region and later regionplan changed to new RS on retry
[ https://issues.apache.org/jira/browse/HBASE-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14746945#comment-14746945 ] Hudson commented on HBASE-14207: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1075 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1075/]) HBASE-14207 Region was hijacked and remained in transition when RS failed to open a region and later regionplan changed to new RS on retry (apurtell: rev 7344676074f3e9a57693d77558b432a188d76cee) * hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java > Region was hijacked and remained in transition when RS failed to open a > region and later regionplan changed to new RS on retry > -- > > Key: HBASE-14207 > URL: https://issues.apache.org/jira/browse/HBASE-14207 > Project: HBase > Issue Type: Bug > Components: master > Affects Versions: 0.98.6 > Reporter: Pankaj Kumar > Assignee: Pankaj Kumar > Priority: Critical > Fix For: 0.98.15 > > Attachments: HBASE-14207-0.98-V2.patch, HBASE-14207-0.98-V2.patch, > HBASE-14207-0.98.patch > > > On a production environment, the following events happened: > 1. The master tried to assign a region to an RS, but due to > KeeperException$SessionExpiredException the RS failed to open the region. > In the RS log, we saw multiple WARN logs related to > KeeperException$SessionExpiredException > > KeeperErrorCode = Session expired for > /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b > > Unable to get data of znode > /hbase/region-in-transition/08f1935d652e5dbdac09b423b8f9401b > 2. The master retried to assign the region to the same RS, but the RS failed again. > 3. On the second retry a new plan was formed, and this time the plan destination (RS) was > different, so the master sent the request to the new RS to open the region. But the new > RS failed to open the region because the server name in the znode did not match the > expected current server name. 
> Logs Snippet: > {noformat} > HM > 2015-07-14 03:50:29,759 | INFO | master:T101PC03VM13:21300 | Processing > 08f1935d652e5dbdac09b423b8f9401b in state: M_ZK_REGION_OFFLINE | > org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:644) > 2015-07-14 03:50:29,759 | INFO | master:T101PC03VM13:21300 | Transitioned > {08f1935d652e5dbdac09b423b8f9401b state=OFFLINE, ts=1436817029679, > server=null} to {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN, > ts=1436817029759, server=T101PC03VM13,21302,1436816690692} | > org.apache.hadoop.hbase.master.RegionStates.updateRegionState(RegionStates.java:327) > 2015-07-14 03:50:29,760 | INFO | master:T101PC03VM13:21300 | Processed > region 08f1935d652e5dbdac09b423b8f9401b in state M_ZK_REGION_OFFLINE, on > server: T101PC03VM13,21302,1436816690692 | > org.apache.hadoop.hbase.master.AssignmentManager.processRegionsInTransition(AssignmentManager.java:768) > 2015-07-14 03:50:29,800 | INFO | > MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning > INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to > T101PC03VM13,21302,1436816690692 | > org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983) > 2015-07-14 03:50:29,801 | WARN | > MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of > INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to > T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=1 > of 10 | > org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077) > 2015-07-14 03:50:29,802 | INFO | > MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Trying to re-assign > INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to > the same failed server. 
| > org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2123) > 2015-07-14 03:50:31,804 | INFO | > MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Assigning > INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to > T101PC03VM13,21302,1436816690692 | > org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1983) > 2015-07-14 03:50:31,806 | WARN | > MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Failed assignment of > INTER_CONCURRENCY_SETTING,,1436596137981.08f1935d652e5dbdac09b423b8f9401b. to > T101PC03VM13,21302,1436816690692, trying to assign elsewhere instead; try=2 > of 10 | > org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2077) > 2015-07-14 03:50:31,807 | INFO | > MASTER_SERVER_OPERATIONS-T101PC03VM13:21300-3 | Transitioned > {08f1935d652e5dbdac09b423b8f9401b state=PENDING_OPEN,
[jira] [Updated] (HBASE-14442) MultiTableInputFormatBase.getSplits function does not build split for edge row
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-14442: --- Description: I created a Scan whose startRow and stopRow are the same as a region's startRow, then I found that no map split is built. The following is the source code of this condition: (startRow.length == 0 || keys.getSecond()[i].length == 0 || Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && (stopRow.length == 0 || Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0) I think a "=" should be added to the "<" (making it "<="). was: I created a Scan whose startRow and stopRow are the same as a region's startRow, then I found that no map split is built. The following is the source code of this condition: (startRow.length == 0 || keys.getSecond()[i].length == 0 || Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && (stopRow.length == 0 || Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0) I think a "=" should be added to the "<". > MultiTableInputFormatBase.getSplits function does not build split for edge row > - > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce > Affects Versions: 1.1.2 > Reporter: Nathan > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, then I found that no map split is built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think a "=" should be added to the "<" (making it "<="). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14442) MultiTableInputFormatBase.getSplits function does not build split for edge row
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-14442: --- Priority: Minor (was: Major) > MultiTableInputFormatBase.getSplits function does not build split for edge row > - > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce > Affects Versions: 1.1.2 > Reporter: Nathan > Priority: Minor > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, then I found that no map split is built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think a "=" should be added to the "<" (making it "<="). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14380) Correct data gets skipped along with bad data in importTsv bulk load thru TsvImporterTextMapper
[ https://issues.apache.org/jira/browse/HBASE-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747136#comment-14747136 ] Bhupendra Kumar Jain commented on HBASE-14380: -- Thanks Ted for committing the patches. > Correct data gets skipped along with bad data in importTsv bulk load thru > TsvImporterTextMapper > --- > > Key: HBASE-14380 > URL: https://issues.apache.org/jira/browse/HBASE-14380 > Project: HBase > Issue Type: Bug > Affects Versions: 2.0.0 > Reporter: Bhupendra Kumar Jain > Assignee: Bhupendra Kumar Jain > Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3 > > Attachments: 0001-HBASE-14380.patch, 14380-v2.txt, > HBASE-14380-branch-1.2-v1.patch, HBASE-14380-branch-1.2.patch, > HBASE-14380_v1.patch > > > Consider the input data below > ROWKEY, TIMESTAMP, Col_Value > r1,1,v1 >> Correct line > r1 >> Bad line > r1,3,v3 >> Correct line > r1,4,v4 >> Correct line > When data is bulk loaded using importTsv with TsvImporterTextMapper as the > mapper, all the lines get ignored even though skipBadLines is set to true. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
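The intended skipBadLines behaviour described in the report — drop only the bad line and keep loading the good ones — can be sketched as a plain per-record loop. This is a hedged illustration, not the importTsv/TsvImporterTextMapper code: {{parse}}, {{load}} and {{BadLineException}} are hypothetical stand-ins for the TSV parser and its bad-line exception, and the sketch splits on commas because the example data above is comma-separated.

```java
import java.util.ArrayList;
import java.util.List;

public class TsvSkip {
    static class BadLineException extends Exception {
        BadLineException(String msg) { super(msg); }
    }

    // Expect exactly 3 comma-separated fields: rowkey, timestamp, value.
    static String[] parse(String line) throws BadLineException {
        String[] fields = line.split(",");
        if (fields.length != 3) {
            throw new BadLineException("bad line: " + line);
        }
        return fields;
    }

    static List<String[]> load(List<String> lines, boolean skipBadLines) {
        List<String[]> good = new ArrayList<>();
        int badCount = 0;
        for (String line : lines) {
            try {
                good.add(parse(line));
            } catch (BadLineException e) {
                if (!skipBadLines) {
                    throw new RuntimeException(e);
                }
                badCount++; // count and skip this line only; good lines still load
            }
        }
        System.out.println("loaded=" + good.size() + " skipped=" + badCount);
        return good;
    }
}
```

With the example input above, the single bad line "r1" is skipped and the three correct lines are kept, rather than all four being ignored.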
[jira] [Updated] (HBASE-14442) MultiTableInputFormatBase.getSplits does not build split for startRow=stopRow=startRow of a region
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-14442: --- Summary: MultiTableInputFormatBase.getSplits does not build split for startRow=stopRow=startRow of a region (was: MultiTableInputFormatBase.getSplits does not build split for startRow of region) > MultiTableInputFormatBase.getSplits does not build split for > startRow=stopRow=startRow of a region > - > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce > Affects Versions: 1.1.2 > Reporter: Nathan > Assignee: Nathan > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, then I found no map was built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think a "=" should be added (making the "<" a "<="). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
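The reporter's proposed change — turning the "<" into "<=" so a scan whose startRow sits exactly on a region boundary still yields a split — can be sketched as below. This is a hedged illustration of the condition only, not the MultiTableInputFormatBase source: {{includeRegion}} is a hypothetical helper, {{regionStart}}/{{regionEnd}} stand in for {{keys.getFirst()[i]}}/{{keys.getSecond()[i]}}, and {{Arrays.compare}} stands in for {{Bytes.compareTo}}.

```java
import java.util.Arrays;

public class SplitCheck {
    // Lexicographic byte[] comparison, like Bytes.compareTo.
    static int compare(byte[] a, byte[] b) {
        return Arrays.compare(a, b);
    }

    // Decide whether the scan [startRow, stopRow) overlaps region
    // [regionStart, regionEnd). An empty key means "unbounded".
    static boolean includeRegion(byte[] startRow, byte[] stopRow,
                                 byte[] regionStart, byte[] regionEnd) {
        return (startRow.length == 0 || regionEnd.length == 0
                || compare(startRow, regionEnd) <= 0)  // "=" added, per the report
            && (stopRow.length == 0 || compare(stopRow, regionStart) > 0);
    }
}
```

With the strict "<", a scan whose startRow equals a region's end key (i.e. the next region's start key) is silently dropped; with "<=" that edge region is included.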
[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment
[ https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747190#comment-14747190 ] Hadoop QA commented on HBASE-6721: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756192/HBASE-6721_14.patch against master branch at commit d2e338181800ae3cef55ddca491901b65259dc7f. ATTACHMENT ID: 12756192 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 27 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 6 warning messages. {color:red}-1 checkstyle{color}. The applied patch generated 1859 checkstyle errors (more than the master's current 1835 errors). {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. 
The patch introduces the following lines longer than 100: + * rpc GetGroupInfo(.hbase.pb.GetGroupInfoRequest) returns (.hbase.pb.GetGroupInfoResponse); + * rpc GetGroupInfoOfTable(.hbase.pb.GetGroupInfoOfTableRequest) returns (.hbase.pb.GetGroupInfoOfTableResponse); + * rpc GetGroupInfoOfServer(.hbase.pb.GetGroupInfoOfServerRequest) returns (.hbase.pb.GetGroupInfoOfServerResponse); + * rpc MoveServers(.hbase.pb.MoveServersRequest) returns (.hbase.pb.MoveServersResponse); + * rpc MoveTables(.hbase.pb.MoveTablesRequest) returns (.hbase.pb.MoveTablesResponse); + * rpc RemoveGroup(.hbase.pb.RemoveGroupRequest) returns (.hbase.pb.RemoveGroupResponse); + * rpc BalanceGroup(.hbase.pb.BalanceGroupRequest) returns (.hbase.pb.BalanceGroupResponse); + * rpc ListGroupInfos(.hbase.pb.ListGroupInfosRequest) returns (.hbase.pb.ListGroupInfosResponse); + * rpc GetGroupInfo(.hbase.pb.GetGroupInfoRequest) returns (.hbase.pb.GetGroupInfoResponse); + * rpc GetGroupInfoOfTable(.hbase.pb.GetGroupInfoOfTableRequest) returns (.hbase.pb.GetGroupInfoOfTableResponse); {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: org.apache.hadoop.hbase.client.TestAsyncProcess Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15618//testReport/ Release Findbugs (version 2.0.3)warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15618//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15618//artifact/patchprocess/checkstyle-aggregate.html Javadoc warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15618//artifact/patchprocess/patchJavadocWarnings.txt Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15618//console This message is automatically generated. 
> RegionServer Group based Assignment > --- > > Key: HBASE-6721 > URL: https://issues.apache.org/jira/browse/HBASE-6721 > Project: HBase > Issue Type: New Feature >Reporter: Francis Liu >Assignee: Francis Liu > Labels: hbase-6721 > Attachments: 6721-master-webUI.patch, HBASE-6721 > GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, > HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, > HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, > HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, > HBASE-6721_8.patch, HBASE-6721_9.patch, HBASE-6721_9.patch, > HBASE-6721_94.patch, HBASE-6721_94.patch, HBASE-6721_94_2.patch, > HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, HBASE-6721_94_4.patch, > HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, HBASE-6721_94_7.patch, > HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, > HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, > HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, > HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, > immediateAssignments Sequence Diagram.svg, randomAssignment Sequence > Diagram.svg, retainAssignment Sequence Diagram.svg, roundRobinAssignment > Sequence Diagram.svg > >
[jira] [Commented] (HBASE-14429) Checkstyle report is broken
[ https://issues.apache.org/jira/browse/HBASE-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747211#comment-14747211 ] Hudson commented on HBASE-14429: FAILURE: Integrated in HBase-TRUNK #6812 (See [https://builds.apache.org/job/HBase-TRUNK/6812/]) HBASE-14429 Checkstyle report is broken (stack: rev 27a993d83747bf66ba3b79ddd7ce595d897107bd) * dev-support/checkstyle_report.py > Checkstyle report is broken > --- > > Key: HBASE-14429 > URL: https://issues.apache.org/jira/browse/HBASE-14429 > Project: HBase > Issue Type: Bug > Components: scripts >Affects Versions: 1.1.2 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-14429.001.patch > > > I just happened across this when hunting for a checkstyle reporter. The > checkstyle_report.py script is broken. The output is garbled because the > print_row() method is printing the wrong variables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14433) Set down the client executor core thread count from 256 to number of processors
[ https://issues.apache.org/jira/browse/HBASE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747212#comment-14747212 ] Hudson commented on HBASE-14433: FAILURE: Integrated in HBase-TRUNK #6812 (See [https://builds.apache.org/job/HBase-TRUNK/6812/]) HBASE-14433 Set down the client executor core thread count from 256 to number of processors (stack: rev d2e338181800ae3cef55ddca491901b65259dc7f) * hbase-server/src/test/resources/hbase-site.xml * hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java * hbase-client/src/test/resources/hbase-site.xml > Set down the client executor core thread count from 256 to number of > processors > --- > > Key: HBASE-14433 > URL: https://issues.apache.org/jira/browse/HBASE-14433 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: stack >Assignee: stack > Fix For: 2.0.0 > > Attachments: 14433 (1).txt, 14433.txt, 14433v2.txt, 14433v3.txt, > 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt > > > HBASE-10449 upped our core count from 0 to 256 (max is 256). Looking in a > recent test run core dump, I see up to 256 threads per client and all are > idle. At a minimum it makes it hard reading test thread dumps. Trying to > learn more about why we went a core of 256 over in HBASE-10449. Meantime will > try setting down configs for test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
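The sizing change described in HBASE-14433 above — a core pool at the number of processors instead of a fixed 256 — can be sketched with a plain {{ThreadPoolExecutor}}. This is a hedged illustration; {{newClientExecutor}} is a hypothetical name, not the actual ConnectionImplementation code. Note one standard-library subtlety: with an unbounded queue a ThreadPoolExecutor never grows past its core size, a detail the real client works around and which this sketch elides.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ClientPool {
    // Core pool sized to the machine rather than a fixed 256; idle core
    // threads are allowed to time out so a quiet client holds no threads.
    static ThreadPoolExecutor newClientExecutor(int maxThreads) {
        int core = Math.min(Runtime.getRuntime().availableProcessors(), maxThreads);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            core, maxThreads, 60, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }
}
```

On a typical test machine this keeps idle client thread dumps down to a handful of threads instead of up to 256.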
[jira] [Updated] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails
[ https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Singhal updated HBASE-14280: -- Status: Patch Available (was: Open) > Bulk Upload from HA cluster to remote HA hbase cluster fails > > > Key: HBASE-14280 > URL: https://issues.apache.org/jira/browse/HBASE-14280 > Project: HBase > Issue Type: Bug > Components: hadoop2, regionserver >Affects Versions: 0.98.4 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Minor > Labels: easyfix, patch > Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch, > HBASE-14280_v3.patch > > > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): > java.io.IOException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.IllegalArgumentException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:372) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:451) > at > org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:750) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4894) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4799) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3377) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29996) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078) > ... 4 more > at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1498) > at > org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684) > at > org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.bulkLoadHFile(ClientProtos.java:29276) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.bulkLoadHFile(ProtobufUtil.java:1548) > ... 11 more -- This message was sent by Atlassian JIRA (v6.3.4#6332)
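The root of the "Wrong FS" error in the stack trace above is a scheme/authority mismatch: a path rooted at hdfs://ha-aggregation-nameservice1 is checked against a FileSystem rooted at hdfs://ha-hbase-nameservice1. A hedged sketch of that check, mirroring the spirit of FileSystem.checkPath with plain java.net.URI (the {{sameFileSystem}} helper is hypothetical, not the Hadoop API):

```java
import java.net.URI;

public class FsCheck {
    // A path belongs to a filesystem only when its scheme and authority
    // (here, the HA nameservice name) match the filesystem's root URI.
    static boolean sameFileSystem(URI fsRoot, URI path) {
        return fsRoot.getScheme().equals(path.getScheme())
            && fsRoot.getAuthority().equals(path.getAuthority());
    }

    public static void main(String[] args) {
        URI hbaseFs = URI.create("hdfs://ha-hbase-nameservice1");
        URI src = URI.create("hdfs://ha-aggregation-nameservice1/hbase_upload/f");
        System.out.println(sameFileSystem(hbaseFs, src)); // false -> "Wrong FS"
    }
}
```

Because the two HA nameservices are distinct authorities, the region server's filesystem rejects the staged HFile path, which is why the bulk load fails.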
[jira] [Updated] (HBASE-14278) Fix NPE that is showing up since HBASE-14274 went in
[ https://issues.apache.org/jira/browse/HBASE-14278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-14278: -- Attachment: HBASE-14278-v5.patch > Fix NPE that is showing up since HBASE-14274 went in > > > Key: HBASE-14278 > URL: https://issues.apache.org/jira/browse/HBASE-14278 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 2.0.0, 1.2.0, 1.3.0 >Reporter: stack >Assignee: Elliott Clark > Fix For: 2.0.0, 1.2.0, 1.3.0 > > Attachments: HBASE-14278-v1.patch, HBASE-14278-v2.patch, > HBASE-14278-v3.patch, HBASE-14278-v4.patch, HBASE-14278-v5.patch, > HBASE-14278.patch > > > Saw this in TestDistributedLogSplitting after HBASE-14274 was applied. > {code} > 119113 2015-08-20 15:31:10,704 WARN [HBase-Metrics2-1] > impl.MetricsConfig(124): Cannot locate configuration: tried > hadoop-metrics2-hbase.properties,hadoop-metrics2.properties > 119114 2015-08-20 15:31:10,710 ERROR [HBase-Metrics2-1] > lib.MethodMetric$2(118): Error invoking method getBlocksTotal > 119115 java.lang.reflect.InvocationTargetException > 119116 › at sun.reflect.GeneratedMethodAccessor72.invoke(Unknown Source) > 119117 › at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 119118 › at java.lang.reflect.Method.invoke(Method.java:606) > 119119 › at > org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111) > 119120 › at > org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144) > 119121 › at > org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387) > 119122 › at > org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79) > 119123 › at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:195) > 119124 › at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172) > 119125 › at > 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151) > 119126 › at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333) > 119127 › at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319) > 119128 › at > com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) > 119129 › at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57) > 119130 › at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:221) > 119131 › at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:96) > 119132 › at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:245) > 119133 › at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$1.postStart(MetricsSystemImpl.java:229) > 119134 › at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source) > 119135 › at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 119136 › at java.lang.reflect.Method.invoke(Method.java:606) > 119137 › at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$3.invoke(MetricsSystemImpl.java:290) > 119138 › at com.sun.proxy.$Proxy13.postStart(Unknown Source) > 119139 › at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:185) > 119140 › at > org.apache.hadoop.metrics2.impl.JmxCacheBuster$JmxCacheBusterRunnable.run(JmxCacheBuster.java:81) > 119141 › at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > 119142 › at java.util.concurrent.FutureTask.run(FutureTask.java:262) > 119143 › at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) > 119144 › at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) > 
119145 › at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > 119146 › at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > 119147 › at java.lang.Thread.run(Thread.java:744) > 119148 Caused by: java.lang.NullPointerException > 119149 › at > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.size(BlocksMap.java:198) > 119150 › at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.getTotalBlocks(BlockManager.java:3158) > 119151 › at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlocksTotal(FSNamesystem.java:5652) > 119152 › ... 32 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
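For context on the root cause: the NullPointerException at the bottom of the trace fires because a scheduled metrics callback reflectively invokes a getter while the test mini-cluster is tearing down the state that getter reads. The usual defensive pattern for this kind of lifecycle race looks like the sketch below (hypothetical names, not the actual HDFS or HBase code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MetricsGuard {
    // Illustrative only: a metrics getter racing shutdown. Reading the field
    // once into a local and null-checking it makes the getter safe to call at
    // any point in the lifecycle, instead of throwing NPE mid-teardown.
    private volatile Map<String, Long> blocksMap; // nulled on stop()

    void start() { blocksMap = new ConcurrentHashMap<>(); }

    void stop() { blocksMap = null; }

    long getBlocksTotal() {
        Map<String, Long> m = blocksMap; // single volatile read
        return (m == null) ? 0L : m.size();
    }
}
```

The single read into a local is the important part: checking `blocksMap != null` and then dereferencing `blocksMap` again would still race the `stop()` that nulls it.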
[jira] [Commented] (HBASE-14411) Fix unit test failures when using multiwal as default WAL provider
[ https://issues.apache.org/jira/browse/HBASE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747105#comment-14747105 ] Hadoop QA commented on HBASE-14411: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756166/HBASE-14411.branch-1.patch against branch-1 branch at commit fe2c4f630d3b5f3346c9ee9f95c256186c9e6907. ATTACHMENT ID: 12756166 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.TestWALLockup {color:red}-1 core zombie tests{color}. 
There are 4 zombie test(s): at org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes.testRedirect(TestWebHdfsWithMultipleNameNodes.java:122) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15614//testReport/ Release Findbugs (version 2.0.3)warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15614//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15614//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15614//console This message is automatically generated. > Fix unit test failures when using multiwal as default WAL provider > -- > > Key: HBASE-14411 > URL: https://issues.apache.org/jira/browse/HBASE-14411 > Project: HBase > Issue Type: Bug >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0 > > Attachments: HBASE-14411.branch-1.patch, HBASE-14411.patch, > HBASE-14411_v2.patch > > > If we set hbase.wal.provider to multiwal in > hbase-server/src/test/resources/hbase-site.xml which allows us to use > BoundedRegionGroupingProvider in UT, we will observe below failures in > current code base: > {noformat} > Failed tests: > TestHLogRecordReader>TestWALRecordReader.testPartialRead:164 expected:<1> > but was:<2> > TestHLogRecordReader>TestWALRecordReader.testWALRecordReader:216 > expected:<2> but was:<3> > TestWALRecordReader.testPartialRead:164 expected:<1> but was:<2> > TestWALRecordReader.testWALRecordReader:216 expected:<2> but was:<3> > TestDistributedLogSplitting.testRecoveredEdits:276 edits dir should have > more than a single file in it. 
instead has 1 > TestAtomicOperation.testMultiRowMutationMultiThreads:499 expected:<0> but > was:<1> > TestHRegionServerBulkLoad.testAtomicBulkLoad:307 > Expected: is > but: was > TestLogRolling.testCompactionRecordDoesntBlockRolling:611 Should have WAL; > one table is not flushed expected:<1> but was:<0> > TestLogRolling.testLogRollOnDatanodeDeath:359 null > TestLogRolling.testLogRollOnPipelineRestart:472 Missing datanode should've > triggered a log roll > TestReplicationSourceManager.testLogRoll:237 expected:<6> but was:<7> > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestWALSplit.testCorruptedLogFilesSkipErrorsFalseDoesNotTouchLogs:594 if > skip.errors is false all files should remain in place expected:<11> but > was:<12> > TestWALSplit.testLogsGetArchivedAfterSplit:649 wrong number of files in the > archive log expected:<11> but was:<12> >
[jira] [Updated] (HBASE-14442) MultiTableInputFormatBase.getSplits does not build split for startRow of region
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-14442: --- Description: I created a Scan whose startRow and stopRow are the same as a region's startRow, then I found no map was built. The following is the source code of this condition: (startRow.length == 0 || keys.getSecond()[i].length == 0 || Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && (stopRow.length == 0 || Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0) I think an "=" should be added. was: I created a Scan whose startRow and stopRow are the same as a region's startRow, then I found no map was built. The following is the source code of this condition: (startRow.length == 0 || keys.getSecond()[i].length == 0 || Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && (stopRow.length == 0 || Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0) I think an "=" should be added to the "<". > MultiTableInputFormatBase.getSplits does not build split for startRow of region > -- > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 1.1.2 >Reporter: Nathan >Assignee: Nathan > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, then I found no map was built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think an "=" should be added. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14442) MultiTableInputFormatBase.getSplits function does not build split for startRow of region
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-14442: --- Summary: MultiTableInputFormatBase.getSplits function does not build split for startRow of region (was: MultiTableInputFormatBase.getSplits function does not build split for edge row) > MultiTableInputFormatBase.getSplits function does not build split for startRow > of region > --- > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 1.1.2 >Reporter: Nathan >Assignee: Nathan > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, then I found no map was built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think an "=" should be added to the "<". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14442) MultiTableInputFormatBase.getSplits function does not build split for edge row
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-14442: --- Description: I created a Scan whose startRow and stopRow are the same as a region's startRow, then I found no map was built. The following is the source code of this condition: (startRow.length == 0 || keys.getSecond()[i].length == 0 || Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && (stopRow.length == 0 || Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0) I think an "=" should be added to the "<". was: I created a Scan whose startRow and stopRow are the same as a region's startRow, then I found that no map was built. The following is the source code of this condition: (startRow.length == 0 || keys.getSecond()[i].length == 0 || Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && (stopRow.length == 0 || Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0) I think an "=" should be added to the "<". > MultiTableInputFormatBase.getSplits function does not build split for edge row > - > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 1.1.2 >Reporter: Nathan >Assignee: Nathan > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, then I found no map was built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think an "=" should be added to the "<". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14442) MultiTableInputFormatBase.getSplits does not build split for startRow of region
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-14442: --- Summary: MultiTableInputFormatBase.getSplits does not build split for startRow of region (was: MultiTableInputFormatBase.getSplits function does not build split for startRow of region) > MultiTableInputFormatBase.getSplits does not build split for startRow of region > -- > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 1.1.2 >Reporter: Nathan >Assignee: Nathan > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, then I found no map was built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think an "=" should be added to the "<". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
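For reference, the reporter's one-character fix can be sketched in plain Java, with java.util.Arrays.compareUnsigned standing in for HBase's Bytes.compareTo (both compare bytes as unsigned values). Exactly which strict comparison gets the "=" is an assumption here; relaxing the stopRow test is what makes the startRow == stopRow == region-start case produce a split:

```java
import java.util.Arrays;

public class SplitBoundary {
    // Original inclusion test from MultiTableInputFormatBase.getSplits: the
    // scan overlaps region i iff it starts before the region's end key and
    // stops strictly after the region's start key.
    static boolean overlapsOriginal(byte[] startRow, byte[] stopRow,
                                    byte[] regionStart, byte[] regionEnd) {
        return (startRow.length == 0 || regionEnd.length == 0
                || Arrays.compareUnsigned(startRow, regionEnd) < 0)
            && (stopRow.length == 0
                || Arrays.compareUnsigned(stopRow, regionStart) > 0);
    }

    // With the added "=", a scan whose stopRow equals the region's start key
    // (the startRow == stopRow == regionStart case from the report) still
    // yields a split for that region.
    static boolean overlapsFixed(byte[] startRow, byte[] stopRow,
                                 byte[] regionStart, byte[] regionEnd) {
        return (startRow.length == 0 || regionEnd.length == 0
                || Arrays.compareUnsigned(startRow, regionEnd) < 0)
            && (stopRow.length == 0
                || Arrays.compareUnsigned(stopRow, regionStart) >= 0);
    }
}
```

Whether relaxing ">" to ">=" also over-matches ordinary scans (a scan's stopRow is normally exclusive) is the kind of question the patch review would still need to settle.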
[jira] [Commented] (HBASE-14334) Move Memcached block cache into its own optional module.
[ https://issues.apache.org/jira/browse/HBASE-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747016#comment-14747016 ] Elliott Clark commented on HBASE-14334: --- ping? > Move Memcached block cache into its own optional module. > -- > > Key: HBASE-14334 > URL: https://issues.apache.org/jira/browse/HBASE-14334 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.2.0 > > Attachments: HBASE-14334.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14433) Set down the client executor core thread count from 256 to number of processors
[ https://issues.apache.org/jira/browse/HBASE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747103#comment-14747103 ] Hadoop QA commented on HBASE-14433: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756161/14433v3.txt against master branch at commit fe2c4f630d3b5f3346c9ee9f95c256186c9e6907. ATTACHMENT ID: 12756161 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15613//testReport/ Release Findbugs (version 2.0.3) warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15613//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15613//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15613//console This message is automatically generated. > Set down the client executor core thread count from 256 to number of > processors > --- > > Key: HBASE-14433 > URL: https://issues.apache.org/jira/browse/HBASE-14433 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: stack >Assignee: stack > Fix For: 2.0.0 > > Attachments: 14433 (1).txt, 14433.txt, 14433v2.txt, 14433v3.txt, > 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt > > > HBASE-10449 upped our core count from 0 to 256 (max is 256). Looking in a > recent test-run core dump, I see up to 256 threads per client and all are > idle. At a minimum it makes test thread dumps hard to read. Trying to > learn more about why we went with a core of 256 over in HBASE-10449. Meantime > will try setting down the configs for tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
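The sizing change being tested above can be illustrated with a plain java.util.concurrent pool. This is a sketch of the idea only, not HBase's actual executor construction: core threads come from the processor count instead of a fixed 256, the ceiling stays at 256, and idle core threads are allowed to time out so a quiet client holds no threads at all.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ClientPool {
    // Core pool = number of processors (was a fixed 256); max stays at 256.
    // A SynchronousQueue hands tasks straight to threads, so the pool grows
    // toward max only under real demand (the cached-thread-pool pattern).
    static ThreadPoolExecutor newClientExecutor() {
        int core = Runtime.getRuntime().availableProcessors();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            core, 256, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());
        pool.allowCoreThreadTimeOut(true); // even core threads die when idle
        return pool;
    }
}
```

With allowCoreThreadTimeOut enabled, an idle client eventually drops to zero threads, which is exactly what makes test thread dumps readable again.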
[jira] [Updated] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails
[ https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Singhal updated HBASE-14280: -- Attachment: (was: HBASE-14280_v3.patch) > Bulk Upload from HA cluster to remote HA hbase cluster fails > > > Key: HBASE-14280 > URL: https://issues.apache.org/jira/browse/HBASE-14280 > Project: HBase > Issue Type: Bug > Components: hadoop2, regionserver >Affects Versions: 0.98.4 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Minor > Labels: easyfix, patch > Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch, > HBASE-14280_v3.patch > > > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): > java.io.IOException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.IllegalArgumentException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > 
at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:372) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:451) > at > org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:750) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4894) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4799) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3377) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29996) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078) > ... 4 more > at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1498) > at > org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684) > at > org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.bulkLoadHFile(ClientProtos.java:29276) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.bulkLoadHFile(ProtobufUtil.java:1548) > ... 11 more -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails
[ https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Singhal updated HBASE-14280: -- Attachment: HBASE-14280_v3.patch > Bulk Upload from HA cluster to remote HA hbase cluster fails > > > Key: HBASE-14280 > URL: https://issues.apache.org/jira/browse/HBASE-14280 > Project: HBase > Issue Type: Bug > Components: hadoop2, regionserver >Affects Versions: 0.98.4 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Minor > Labels: easyfix, patch > Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch, > HBASE-14280_v3.patch > > > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): > java.io.IOException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.IllegalArgumentException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:372) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:451) > at > org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:750) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4894) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4799) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3377) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29996) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078) > ... 4 more > at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1498) > at > org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684) > at > org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.bulkLoadHFile(ClientProtos.java:29276) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.bulkLoadHFile(ProtobufUtil.java:1548) > ... 11 more -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails
[ https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Singhal updated HBASE-14280: -- Status: Open (was: Patch Available) > Bulk Upload from HA cluster to remote HA hbase cluster fails > > > Key: HBASE-14280 > URL: https://issues.apache.org/jira/browse/HBASE-14280 > Project: HBase > Issue Type: Bug > Components: hadoop2, regionserver >Affects Versions: 0.98.4 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Minor > Labels: easyfix, patch > Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch, > HBASE-14280_v3.patch > > > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): > java.io.IOException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.IllegalArgumentException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:372) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:451) > at > org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:750) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4894) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4799) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3377) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29996) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078) > ... 4 more > at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1498) > at > org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684) > at > org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.bulkLoadHFile(ClientProtos.java:29276) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.bulkLoadHFile(ProtobufUtil.java:1548) > ... 11 more -- This message was sent by Atlassian JIRA (v6.3.4#6332)
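The failure above is Hadoop's FileSystem.checkPath rejecting a path whose authority (the HDFS nameservice) differs from the filesystem it was handed. The check essentially reduces to a scheme/authority comparison, mirrored here with java.net.URI since the Hadoop classes themselves are not assumed available:

```java
import java.net.URI;

public class FsCheck {
    // A path "belongs" to a filesystem when scheme and authority match.
    // hdfs://ha-aggregation-nameservice1/... handed to a filesystem rooted at
    // hdfs://ha-hbase-nameservice1 fails exactly this test.
    static boolean sameFs(URI path, URI fsUri) {
        return fsUri.getScheme().equalsIgnoreCase(path.getScheme())
            && fsUri.getAuthority() != null
            && fsUri.getAuthority().equalsIgnoreCase(path.getAuthority());
    }
}
```

One plausible remedy (an assumption, not a summary of the attached patches) is to resolve the bulk-load source path against its own filesystem rather than the region server's default one before committing the store files.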
[jira] [Updated] (HBASE-6721) RegionServer Group based Assignment
[ https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francis Liu updated HBASE-6721: --- Attachment: HBASE-6721_14.patch Addressed comments on RB. > RegionServer Group based Assignment > --- > > Key: HBASE-6721 > URL: https://issues.apache.org/jira/browse/HBASE-6721 > Project: HBase > Issue Type: New Feature >Reporter: Francis Liu >Assignee: Francis Liu > Labels: hbase-6721 > Attachments: 6721-master-webUI.patch, HBASE-6721 > GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, > HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, > HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, > HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, > HBASE-6721_8.patch, HBASE-6721_9.patch, HBASE-6721_9.patch, > HBASE-6721_94.patch, HBASE-6721_94.patch, HBASE-6721_94_2.patch, > HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, HBASE-6721_94_4.patch, > HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, HBASE-6721_94_7.patch, > HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, > HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, > HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, > HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, > immediateAssignments Sequence Diagram.svg, randomAssignment Sequence > Diagram.svg, retainAssignment Sequence Diagram.svg, roundRobinAssignment > Sequence Diagram.svg > > > In multi-tenant deployments of HBase, it is likely that a RegionServer will > be serving out regions from a number of different tables owned by various > client applications. Being able to group a subset of running RegionServers > and assign specific tables to it provides a client application with a level > of isolation and resource allocation. > The proposal essentially is to have an AssignmentManager which is aware of > RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per-group basis as well. > This is essentially a simplification of the approach taken in HBASE-4120. See > attached document. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14442) MultiTableInputFormatBase.getSplits function does not build split for edge row
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-14442: --- Priority: Major (was: Minor) > MultiTableInputFormatBase.getSplits function does not build split for edge row > - > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 1.1.2 >Reporter: Nathan >Assignee: Nathan > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, then I found that no map was built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think an "=" should be added to the "<". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer
[ https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747145#comment-14747145 ] Hadoop QA commented on HBASE-12751: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756170/12751.v38.txt against master branch at commit d2e338181800ae3cef55ddca491901b65259dc7f. ATTACHMENT ID: 12756170 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 99 new or modified tests. {color:red}-1 Anti-pattern{color}. The patch appears to have anti-pattern where BYTES_COMPARATOR was omitted: -getRegionInfo(), -1, new TreeMap());. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15616//testReport/ Release Findbugs (version 2.0.3)warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15616//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15616//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15616//console This message is automatically generated. > Allow RowLock to be reader writer > - > > Key: HBASE-12751 > URL: https://issues.apache.org/jira/browse/HBASE-12751 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0, 1.3.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.3.0 > > Attachments: 12751.rebased.v25.txt, 12751.rebased.v26.txt, > 12751.rebased.v26.txt, 12751.rebased.v27.txt, 12751.rebased.v29.txt, > 12751.rebased.v31.txt, 12751.rebased.v32.txt, 12751.rebased.v32.txt, > 12751.rebased.v33.txt, 12751.rebased.v34.txt, 12751.rebased.v35.txt, > 12751.rebased.v35.txt, 12751.rebased.v35.txt, 12751.v37.txt, 12751.v38.txt, > 12751v22.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, > 12751v36.txt, HBASE-12751-v1.patch, HBASE-12751-v10.patch, > HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, > HBASE-12751-v13.patch, HBASE-12751-v14.patch, HBASE-12751-v15.patch, > HBASE-12751-v16.patch, HBASE-12751-v17.patch, HBASE-12751-v18.patch, > HBASE-12751-v19 (1).patch, HBASE-12751-v19.patch, HBASE-12751-v2.patch, > HBASE-12751-v20.patch, HBASE-12751-v20.patch, HBASE-12751-v21.patch, > HBASE-12751-v3.patch, HBASE-12751-v4.patch, HBASE-12751-v5.patch, > HBASE-12751-v6.patch, HBASE-12751-v7.patch, HBASE-12751-v8.patch, > HBASE-12751-v9.patch, HBASE-12751.patch > > > Right now every write operation grabs a row lock. This is to prevent values > from changing during a read modify write operation (increment or check and > put). 
However it limits parallelism in several different scenarios. > If there are several puts to the same row but different columns or stores > then this is very limiting. > If there are puts to the same column then mvcc number should ensure a > consistent ordering. So locking is not needed. > However locking for check and put or increment is still needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
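The trade-off described above — shared access for plain puts, exclusive access for read-modify-write operations — can be sketched with a per-row read/write lock. This is a minimal illustration of the idea only, not the HBASE-12751 patch itself; `RowLockManager` and its method names are made up for the sketch:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Per-row read/write locks: plain puts to different cells of the same row can
// proceed concurrently under the shared (read) lock, while increment and
// checkAndPut take the exclusive (write) lock.
class RowLockManager {
    private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks =
        new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String row) {
        return locks.computeIfAbsent(row, r -> new ReentrantReadWriteLock());
    }

    // Plain put: shared lock, so concurrent puts to the same row don't serialize.
    void sharedLock(String row)   { lockFor(row).readLock().lock(); }
    void sharedUnlock(String row) { lockFor(row).readLock().unlock(); }

    // Read-modify-write (increment, checkAndPut): exclusive lock.
    void exclusiveLock(String row)      { lockFor(row).writeLock().lock(); }
    boolean tryExclusiveLock(String row) { return lockFor(row).writeLock().tryLock(); }
    void exclusiveUnlock(String row)    { lockFor(row).writeLock().unlock(); }
}
```

A real implementation would additionally need to evict unused locks and support acquisition timeouts.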
[jira] [Updated] (HBASE-11378) TableMapReduceUtil overwrites user-supplied options for multiple tables/scanners job
[ https://issues.apache.org/jira/browse/HBASE-11378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-11378: --- Assignee: Jimmy Xiang (was: Nathan) > TableMapReduceUtil overwrites user-supplied options for multiple > tables/scanners job > --- > > Key: HBASE-11378 > URL: https://issues.apache.org/jira/browse/HBASE-11378 > Project: HBase > Issue Type: Bug >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.99.0, 0.98.4 > > Attachments: hbase-11378.patch > > > In TableMapReduceUtil#initTableMapperJob, we have > HBaseConfiguration.addHbaseResources(job.getConfiguration()); > It should use merge instead. Otherwise, user-supplied options are overwritten. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
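The overwrite described in the report is purely a question of load order. A toy sketch with `java.util.Properties` standing in for Hadoop's `Configuration` machinery (the `ConfMerge` class and its method names are hypothetical, for illustration only):

```java
import java.util.Properties;

// addHbaseResources(jobConf) effectively loads the hbase-default/hbase-site
// resources *last*, so the defaults clobber user-supplied job options. The fix
// is to merge the other way: start from the defaults and lay the user conf on top.
class ConfMerge {
    static Properties addResourcesLast(Properties userConf, Properties defaults) {
        Properties out = new Properties();
        out.putAll(userConf);
        out.putAll(defaults);   // bug: defaults overwrite the user's values
        return out;
    }

    static Properties mergeUserOverDefaults(Properties userConf, Properties defaults) {
        Properties out = new Properties();
        out.putAll(defaults);
        out.putAll(userConf);   // fix: user-supplied values win
        return out;
    }
}
```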
[jira] [Assigned] (HBASE-11378) TableMapReduceUtil overwrites user-supplied options for multiple tables/scanners job
[ https://issues.apache.org/jira/browse/HBASE-11378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan reassigned HBASE-11378: -- Assignee: Nathan (was: Jimmy Xiang) > TableMapReduceUtil overwrites user-supplied options for multiple > tables/scanners job > --- > > Key: HBASE-11378 > URL: https://issues.apache.org/jira/browse/HBASE-11378 > Project: HBase > Issue Type: Bug >Reporter: Jimmy Xiang >Assignee: Nathan > Fix For: 0.99.0, 0.98.4 > > Attachments: hbase-11378.patch > > > In TableMapReduceUtil#initTableMapperJob, we have > HBaseConfiguration.addHbaseResources(job.getConfiguration()); > It should use merge instead. Otherwise, user-supplied options are overwritten. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-14442) MultiTableInputFormatBase.getSplits function does not build split for edge row
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan reassigned HBASE-14442: -- Assignee: Nathan > MultiTableInputFormatBase.getSplits function does not build split for edge row > - > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 1.1.2 >Reporter: Nathan >Assignee: Nathan >Priority: Minor > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, and found that no map was built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think an "=" should be added. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12298) Support BB usage in PrefixTree
[ https://issues.apache.org/jira/browse/HBASE-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-12298: --- Attachment: HBASE-12298_7.patch Updated patch. Changed the implementation of the new get() API in SBB and MBB and renamed the param ordering. > Support BB usage in PrefixTree > -- > > Key: HBASE-12298 > URL: https://issues.apache.org/jira/browse/HBASE-12298 > Project: HBase > Issue Type: Sub-task > Components: regionserver, Scanners >Reporter: Anoop Sam John >Assignee: ramkrishna.s.vasudevan > Attachments: HBASE-12298.patch, HBASE-12298_1.patch, > HBASE-12298_2.patch, HBASE-12298_3.patch, HBASE-12298_4 (1).patch, > HBASE-12298_4 (1).patch, HBASE-12298_4 (1).patch, HBASE-12298_4 (1).patch, > HBASE-12298_4 (1).patch, HBASE-12298_4.patch, HBASE-12298_4.patch, > HBASE-12298_4.patch, HBASE-12298_4.patch, HBASE-12298_5.patch, > HBASE-12298_6.patch, HBASE-12298_7.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12298) Support BB usage in PrefixTree
[ https://issues.apache.org/jira/browse/HBASE-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-12298: --- Status: Open (was: Patch Available) > Support BB usage in PrefixTree > -- > > Key: HBASE-12298 > URL: https://issues.apache.org/jira/browse/HBASE-12298 > Project: HBase > Issue Type: Sub-task > Components: regionserver, Scanners >Reporter: Anoop Sam John >Assignee: ramkrishna.s.vasudevan > Attachments: HBASE-12298.patch, HBASE-12298_1.patch, > HBASE-12298_2.patch, HBASE-12298_3.patch, HBASE-12298_4 (1).patch, > HBASE-12298_4 (1).patch, HBASE-12298_4 (1).patch, HBASE-12298_4 (1).patch, > HBASE-12298_4 (1).patch, HBASE-12298_4.patch, HBASE-12298_4.patch, > HBASE-12298_4.patch, HBASE-12298_4.patch, HBASE-12298_5.patch, > HBASE-12298_6.patch, HBASE-12298_7.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12298) Support BB usage in PrefixTree
[ https://issues.apache.org/jira/browse/HBASE-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-12298: --- Status: Patch Available (was: Open) > Support BB usage in PrefixTree > -- > > Key: HBASE-12298 > URL: https://issues.apache.org/jira/browse/HBASE-12298 > Project: HBase > Issue Type: Sub-task > Components: regionserver, Scanners >Reporter: Anoop Sam John >Assignee: ramkrishna.s.vasudevan > Attachments: HBASE-12298.patch, HBASE-12298_1.patch, > HBASE-12298_2.patch, HBASE-12298_3.patch, HBASE-12298_4 (1).patch, > HBASE-12298_4 (1).patch, HBASE-12298_4 (1).patch, HBASE-12298_4 (1).patch, > HBASE-12298_4 (1).patch, HBASE-12298_4.patch, HBASE-12298_4.patch, > HBASE-12298_4.patch, HBASE-12298_4.patch, HBASE-12298_5.patch, > HBASE-12298_6.patch, HBASE-12298_7.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14442) MultiTableInputFormatBase.getSplits does not build split for a scan whose startRow=stopRow=(startRow of a region)
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan updated HBASE-14442: --- Summary: MultiTableInputFormatBase.getSplits does not build split for a scan whose startRow=stopRow=(startRow of a region) (was: MultiTableInputFormatBase.getSplits does not build split for startRow=stopRow=startrow of a region) > MultiTableInputFormatBase.getSplits does not build split for a scan whose > startRow=stopRow=(startRow of a region) > > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 1.1.2 >Reporter: Nathan >Assignee: Nathan > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, and found that no map was built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think an "=" should be added. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
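Assuming `keys.getFirst()[i]` and `keys.getSecond()[i]` are the region's start and stop keys, the proposed fix can be sketched as below. The "=" is shown on the stop-side comparison, since that is the test that rejects a scan whose stopRow equals the region's startRow; `SplitCheck` is a hypothetical stand-in, with `Arrays.compareUnsigned` in place of `Bytes.compareTo`:

```java
import java.util.Arrays;

class SplitCheck {
    // Unsigned lexicographic byte[] compare, standing in for Bytes.compareTo.
    static int compare(byte[] a, byte[] b) {
        return Arrays.compareUnsigned(a, b);
    }

    // Should a split be built for this region? Empty byte[] means "unbounded".
    static boolean includeRegion(byte[] startRow, byte[] stopRow,
                                 byte[] regionStart, byte[] regionStop) {
        boolean startOk = startRow.length == 0 || regionStop.length == 0
            || compare(startRow, regionStop) < 0;
        boolean stopOk = stopRow.length == 0
            || compare(stopRow, regionStart) >= 0; // ">=" instead of ">": proposed fix
        return startOk && stopOk;
    }
}
```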
[jira] [Resolved] (HBASE-8810) Bring in code constants in line with default xml's
[ https://issues.apache.org/jira/browse/HBASE-8810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark resolved HBASE-8810. -- Resolution: Not A Problem Fixed most of these in other issues. > Bring in code constants in line with default xml's > -- > > Key: HBASE-8810 > URL: https://issues.apache.org/jira/browse/HBASE-8810 > Project: HBase > Issue Type: Bug > Components: Usability >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: 8810.txt, 8810v2.txt, HBaseDefaultXMLConstants.java, > hbase-default_to_java_constants.xsl > > > After the defaults were changed in the xml some constants were left the same. > DEFAULT_HBASE_CLIENT_PAUSE for example. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747231#comment-14747231 ] Maddineni Sukumar commented on HBASE-13770: --- Verified the patch on my local machine and it's working fine (apart from some whitespace warnings). Can someone please tell me what to do to resolve this pre-commit build job issue? > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-v1.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14400) Fix HBase RPC protection documentation
[ https://issues.apache.org/jira/browse/HBASE-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747238#comment-14747238 ] Hudson commented on HBASE-14400: FAILURE: Integrated in HBase-0.98 #1123 (See [https://builds.apache.org/job/HBase-0.98/1123/]) HBASE-14400 Fix HBase RPC protection documentation (apurtell: rev 83f0b70c541a96e2a2bd4b22c17b983d2e35bd1e) * hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestHBaseSaslRpcClient.java * hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java * src/main/asciidoc/_chapters/security.adoc * hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java > Fix HBase RPC protection documentation > -- > > Key: HBASE-14400 > URL: https://issues.apache.org/jira/browse/HBASE-14400 > Project: HBase > Issue Type: Bug > Components: encryption, rpc, security >Reporter: Apekshit Sharma >Assignee: Apekshit Sharma >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3 > > Attachments: HBASE-14400-branch-0.98.patch, > HBASE-14400-branch-1.0.patch, HBASE-14400-branch-1.1.patch, > HBASE-14400-branch-1.2.patch, HBASE-14400-master-v2.patch, > HBASE-14400-master.patch > > > HBase configuration 'hbase.rpc.protection' can be set to 'authentication', > 'integrity' or 'privacy'. > "authentication means authentication only and no integrity or privacy; > integrity implies > authentication and integrity are enabled; and privacy implies all of > authentication, integrity and privacy are enabled." > However hbase ref guide incorrectly suggests in some places to set the value > to 'auth-conf' instead of 'privacy'. Setting value to 'auth-conf' doesn't > provide rpc encryption which is what user wants. > This jira will fix: > - documentation: change 'auth-conf' references to 'privacy' > - SaslUtil to support both set of values (privacy/integrity/authentication > and auth-conf/auth-int/auth) to be backward compatible with what was being > suggested till now. 
> - change 'hbase.thrift.security.qop' to be consistent with other similar > configurations by using the same set of values (privacy/integrity/authentication). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747250#comment-14747250 ] Ashish Singhi commented on HBASE-13770: --- [~sukuna...@gmail.com], did you follow the procedure mentioned [here|http://hbase.apache.org/book.html#submitting.patches]? > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-v1.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14437) ArithmeticException in ReplicationInterClusterEndpoint
[ https://issues.apache.org/jira/browse/HBASE-14437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-14437: --- Attachment: HBASE-14437.patch Simple patch that does not calculate 'n' when there are no active sinks. > ArithmeticException in ReplicationInterClusterEndpoint > -- > > Key: HBASE-14437 > URL: https://issues.apache.org/jira/browse/HBASE-14437 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Minor > Attachments: HBASE-14437.patch > > > {code} > 2015-09-15 21:49:36,923 WARN > [ReplicationExecutor-0.replicationSource,1-stobdtserver1,16041,1442333166156.replicationSource.stobdtserver1%2C16041%2C1442333166156.default,1-stobdtserver1,16041,1442333166156] > regionserver.ReplicationSource: > org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint > threw unknown exception:java.lang.ArithmeticException: / by zero > at > org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:178) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.shipEdits(ReplicationSource.java:906) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:616) > {code} > This happened in a two-node setup, with one node acting as the master cluster > and the other as the peer. The peer cluster went down and this warning log > message started appearing in the main cluster's RS logs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
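The patch summary above — skip computing 'n' when there are no active sinks — can be sketched as follows. The real computation in `HBaseInterClusterReplicationEndpoint.replicate` differs in detail; `shardCount` below is a hypothetical simplification of that shape:

```java
class ReplicationShards {
    // 'n' is the number of parallel shipments: bounded by the thread pool,
    // the batch size, and the number of live sink servers. Dividing the edit
    // list by n blows up with "/ by zero" when the peer cluster is down and
    // numSinks == 0, so guard that case first.
    static int shardCount(int maxThreads, int numEntries, int numSinks) {
        if (numSinks == 0) {
            return 0; // peer is down: skip the division, caller retries later
        }
        return Math.min(Math.min(maxThreads, numEntries / 100 + 1), numSinks);
    }
}
```

A caller would treat a result of 0 as "nothing shippable yet" and log a warning instead of throwing.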
[jira] [Updated] (HBASE-14437) ArithmeticException in ReplicationInterClusterEndpoint
[ https://issues.apache.org/jira/browse/HBASE-14437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-14437: --- Status: Patch Available (was: Open) > ArithmeticException in ReplicationInterClusterEndpoint > -- > > Key: HBASE-14437 > URL: https://issues.apache.org/jira/browse/HBASE-14437 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Minor > Attachments: HBASE-14437.patch > > > {code} > 2015-09-15 21:49:36,923 WARN > [ReplicationExecutor-0.replicationSource,1-stobdtserver1,16041,1442333166156.replicationSource.stobdtserver1%2C16041%2C1442333166156.default,1-stobdtserver1,16041,1442333166156] > regionserver.ReplicationSource: > org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint > threw unknown exception:java.lang.ArithmeticException: / by zero > at > org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:178) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.shipEdits(ReplicationSource.java:906) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:616) > {code} > This happened in a two-node setup, with one node acting as the master cluster > and the other as the peer. The peer cluster went down and this warning log > message started appearing in the main cluster's RS logs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747302#comment-14747302 ] Maddineni Sukumar commented on HBASE-13770: --- [~ashish singhi] Thanks for the quick response. Created a new patch file using the make_patch.sh script specified in the "create patch" section. > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-v1.patch, HBASE-13770-v2.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747308#comment-14747308 ] Hadoop QA commented on HBASE-13770: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756226/HBASE-13770-v2.patch against master branch at commit d2e338181800ae3cef55ddca491901b65259dc7f. ATTACHMENT ID: 12756226 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15622//console This message is automatically generated. > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-v1.patch, HBASE-13770-v2.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14768833#comment-14768833 ] Maddineni Sukumar commented on HBASE-13770: --- I have created it on the 0.98 branch. > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-v1.patch, HBASE-13770-v2.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14400) Fix HBase RPC protection documentation
[ https://issues.apache.org/jira/browse/HBASE-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747309#comment-14747309 ] Hudson commented on HBASE-14400: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1076 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1076/]) HBASE-14400 Fix HBase RPC protection documentation (apurtell: rev 83f0b70c541a96e2a2bd4b22c17b983d2e35bd1e) * hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestHBaseSaslRpcClient.java * hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java * hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java * src/main/asciidoc/_chapters/security.adoc > Fix HBase RPC protection documentation > -- > > Key: HBASE-14400 > URL: https://issues.apache.org/jira/browse/HBASE-14400 > Project: HBase > Issue Type: Bug > Components: encryption, rpc, security >Reporter: Apekshit Sharma >Assignee: Apekshit Sharma >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3 > > Attachments: HBASE-14400-branch-0.98.patch, > HBASE-14400-branch-1.0.patch, HBASE-14400-branch-1.1.patch, > HBASE-14400-branch-1.2.patch, HBASE-14400-master-v2.patch, > HBASE-14400-master.patch > > > HBase configuration 'hbase.rpc.protection' can be set to 'authentication', > 'integrity' or 'privacy'. > "authentication means authentication only and no integrity or privacy; > integrity implies > authentication and integrity are enabled; and privacy implies all of > authentication, integrity and privacy are enabled." > However hbase ref guide incorrectly suggests in some places to set the value > to 'auth-conf' instead of 'privacy'. Setting value to 'auth-conf' doesn't > provide rpc encryption which is what user wants. 
> This jira will fix: > - documentation: change 'auth-conf' references to 'privacy' > - SaslUtil to support both sets of values (privacy/integrity/authentication > and auth-conf/auth-int/auth) to be backward compatible with what was being > suggested till now. > - change 'hbase.thrift.security.qop' to be consistent with other similar > configurations by using the same set of values (privacy/integrity/authentication). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-10449) Wrong execution pool configuration in HConnectionManager
[ https://issues.apache.org/jira/browse/HBASE-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14768844#comment-14768844 ] Nicolas Liochon commented on HBASE-10449: - What's happening for the expire is: - we have a 60s timeout with a max of 256 threads. - let's imagine we have 1 query per second. We will still have 60 threads, because each new request will create a new thread until we reach coreSize. As the timeout is 60s, the oldest threads will expire after 60s. I haven't double-checked, but I believe that the threads are needed because of the old i/o pattern. So we do need a max in the x00 range (it has been like this since 0.90 at least. In theory, it's good for a small cluster (100 nodes), but not as good if the cluster is composed of thousands of nodes). I did actually spend some time on this a year ago, in HBASE-11590. @stack, what do you think of the approach? I can finish the work I started there. But I will need a review. There are also some ideas/hacks in http://stackoverflow.com/questions/19528304/how-to-get-the-threadpoolexecutor-to-increase-threads-to-max-before-queueing/19528305#19528305 I haven't reviewed them yet. > Wrong execution pool configuration in HConnectionManager > > > Key: HBASE-10449 > URL: https://issues.apache.org/jira/browse/HBASE-10449 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 >Reporter: Nicolas Liochon >Assignee: Nicolas Liochon >Priority: Critical > Fix For: 0.98.0, 0.96.2, 0.99.0 > > Attachments: HBASE-10449.v1.patch > > > There is a confusion in the configuration of the pool. The attached patch > fixes this. This may change client performance, as we were using a > single thread. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
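One of the hacks referenced in the StackOverflow link above makes a `ThreadPoolExecutor` grow to `maximumPoolSize` *before* queueing, by having the work queue refuse offers while the pool can still add a thread, and re-queueing in the rejection handler once the pool is saturated. A sketch of that idea (not HBase code; `EagerThreadPool` is a made-up name):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class EagerThreadPool {
    static ThreadPoolExecutor create(int core, int max, int queueCap) {
        // Queue that always claims to be full on offer(): execute() then tries
        // to add a worker thread instead of queueing, up to maximumPoolSize.
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(queueCap) {
            @Override
            public boolean offer(Runnable r) {
                return false;
            }
        };
        return new ThreadPoolExecutor(
            core, max, 60, TimeUnit.SECONDS, queue,
            (r, executor) -> {
                try {
                    // Pool is saturated at maxPoolSize: now actually queue.
                    // put() bypasses our offer() override, so this succeeds.
                    executor.getQueue().put(r);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RejectedExecutionException(e);
                }
            });
    }
}
```

The trade-off: queueing only starts once all `max` threads exist, which is the opposite of the default `ThreadPoolExecutor` behavior (queue first, grow only when the queue is full).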
[jira] [Commented] (HBASE-14437) ArithmeticException in ReplicationInterClusterEndpoint
[ https://issues.apache.org/jira/browse/HBASE-14437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747303#comment-14747303 ] Ted Yu commented on HBASE-14437: Lgtm > ArithmeticException in ReplicationInterClusterEndpoint > -- > > Key: HBASE-14437 > URL: https://issues.apache.org/jira/browse/HBASE-14437 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Minor > Attachments: HBASE-14437.patch > > > {code} > 2015-09-15 21:49:36,923 WARN > [ReplicationExecutor-0.replicationSource,1-stobdtserver1,16041,1442333166156.replicationSource.stobdtserver1%2C16041%2C1442333166156.default,1-stobdtserver1,16041,1442333166156] > regionserver.ReplicationSource: > org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint > threw unknown exception:java.lang.ArithmeticException: / by zero > at > org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:178) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.shipEdits(ReplicationSource.java:906) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:616) > {code} > This happened in a two-node setup, with one node acting as the master cluster > and the other as the peer. The peer cluster went down and this warning log > message started appearing in the main cluster's RS logs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14278) Fix NPE that is showing up since HBASE-14274 went in
[ https://issues.apache.org/jira/browse/HBASE-14278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747242#comment-14747242 ] Hadoop QA commented on HBASE-14278: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756184/HBASE-14278-v5.patch against master branch at commit d2e338181800ae3cef55ddca491901b65259dc7f. ATTACHMENT ID: 12756184 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15617//testReport/ Release Findbugs (version 2.0.3)warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15617//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15617//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15617//console This message is automatically generated. > Fix NPE that is showing up since HBASE-14274 went in > > > Key: HBASE-14278 > URL: https://issues.apache.org/jira/browse/HBASE-14278 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 2.0.0, 1.2.0, 1.3.0 >Reporter: stack >Assignee: Elliott Clark > Fix For: 2.0.0, 1.2.0, 1.3.0 > > Attachments: HBASE-14278-v1.patch, HBASE-14278-v2.patch, > HBASE-14278-v3.patch, HBASE-14278-v4.patch, HBASE-14278-v5.patch, > HBASE-14278.patch > > > Saw this in TestDistributedLogSplitting after HBASE-14274 was applied. 
> {code} > 119113 2015-08-20 15:31:10,704 WARN [HBase-Metrics2-1] > impl.MetricsConfig(124): Cannot locate configuration: tried > hadoop-metrics2-hbase.properties,hadoop-metrics2.properties > 119114 2015-08-20 15:31:10,710 ERROR [HBase-Metrics2-1] > lib.MethodMetric$2(118): Error invoking method getBlocksTotal > 119115 java.lang.reflect.InvocationTargetException > 119116 at sun.reflect.GeneratedMethodAccessor72.invoke(Unknown Source) > 119117 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 119118 at java.lang.reflect.Method.invoke(Method.java:606) > 119119 at > org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111) > 119120 at > org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144) > 119121 at > org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387) > 119122 at > org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79) > 119123 at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:195) > 119124 at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172) > 119125 at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151) > 119126 at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333) > 119127 at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319) > 119128 at > com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) > 119129 at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57) > 119130 at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:221) > 119131 at >
[jira] [Updated] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maddineni Sukumar updated HBASE-13770: -- Attachment: HBASE-13770-v2.patch Created patch file using make_patch.sh file. > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-v1.patch, HBASE-13770-v2.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails
[ https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14768427#comment-14768427 ] Hadoop QA commented on HBASE-14280: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756210/HBASE-14280_v3.patch against master branch at commit d2e338181800ae3cef55ddca491901b65259dc7f. ATTACHMENT ID: 12756210 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:red}-1 checkstyle{color}. The applied patch generated 1837 checkstyle errors (more than the master's current 1835 errors). {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 3 zombie test(s): at org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:287) at org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:261) at org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:194) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15620//testReport/ Release Findbugs (version 2.0.3)warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15620//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15620//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15620//console This message is automatically generated. > Bulk Upload from HA cluster to remote HA hbase cluster fails > > > Key: HBASE-14280 > URL: https://issues.apache.org/jira/browse/HBASE-14280 > Project: HBase > Issue Type: Bug > Components: hadoop2, regionserver >Affects Versions: 0.98.4 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Minor > Labels: easyfix, patch > Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch, > HBASE-14280_v3.patch > > > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): > java.io.IOException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.IllegalArgumentException: Wrong FS: > 
hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132) > at >
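The "Wrong FS" failure above boils down to FileSystem.checkPath rejecting a path whose URI authority differs from the destination filesystem's: the bulk-load file lives on the aggregation nameservice while HBase's DistributedFileSystem is bound to the hbase nameservice. A minimal, self-contained sketch of that comparison (the method name sameFileSystem is illustrative, not the actual Hadoop code):

```java
import java.net.URI;

class WrongFsCheck {
    // Approximates the core test inside FileSystem.checkPath(): the path must
    // share the filesystem's scheme and authority, otherwise "Wrong FS" is raised.
    static boolean sameFileSystem(URI fsUri, URI pathUri) {
        return fsUri.getScheme().equalsIgnoreCase(pathUri.getScheme())
            && fsUri.getAuthority().equalsIgnoreCase(pathUri.getAuthority());
    }

    public static void main(String[] args) {
        URI hbaseFs = URI.create("hdfs://ha-hbase-nameservice1");
        URI bulkLoadPath = URI.create(
            "hdfs://ha-aggregation-nameservice1/hbase_upload/somefile");
        // Same scheme (hdfs) but different authorities, so the check fails.
        System.out.println(sameFileSystem(hbaseFs, bulkLoadPath)); // prints "false"
    }
}
```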
[jira] [Commented] (HBASE-14400) Fix HBase RPC protection documentation
[ https://issues.apache.org/jira/browse/HBASE-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747253#comment-14747253 ] Hudson commented on HBASE-14400: FAILURE: Integrated in HBase-1.3-IT #159 (See [https://builds.apache.org/job/HBase-1.3-IT/159/]) HBASE-14400 Fix HBase RPC protection documentation (apurtell: rev 1517deee67fb9cd920faa146237f41049fc2ef60) * hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java * hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslClientHandler.java * hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestHBaseSaslRpcClient.java * src/main/asciidoc/_chapters/security.adoc * hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java > Fix HBase RPC protection documentation > -- > > Key: HBASE-14400 > URL: https://issues.apache.org/jira/browse/HBASE-14400 > Project: HBase > Issue Type: Bug > Components: encryption, rpc, security >Reporter: Apekshit Sharma >Assignee: Apekshit Sharma >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3 > > Attachments: HBASE-14400-branch-0.98.patch, > HBASE-14400-branch-1.0.patch, HBASE-14400-branch-1.1.patch, > HBASE-14400-branch-1.2.patch, HBASE-14400-master-v2.patch, > HBASE-14400-master.patch > > > HBase configuration 'hbase.rpc.protection' can be set to 'authentication', > 'integrity' or 'privacy'. > "authentication means authentication only and no integrity or privacy; > integrity implies > authentication and integrity are enabled; and privacy implies all of > authentication, integrity and privacy are enabled." > However hbase ref guide incorrectly suggests in some places to set the value > to 'auth-conf' instead of 'privacy'. Setting value to 'auth-conf' doesn't > provide rpc encryption which is what user wants. 
> This jira will fix: > - documentation: change 'auth-conf' references to 'privacy' > - SaslUtil to support both set of values (privacy/integrity/authentication > and auth-conf/auth-int/auth) to be backward compatible with what was being > suggested till now. > - change 'hbase.thrift.security.qop' to be consistent with other similar > configurations by using same set of values (privacy/integrity/authentication). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
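The backward-compatible value handling described above can be sketched as a simple normalization table. This is a hypothetical helper, not the actual SaslUtil change: both the HBase-level names (privacy/integrity/authentication) and the raw SASL QOP strings (auth-conf/auth-int/auth) map to the SASL wire values.

```java
import java.util.Locale;

class QopNormalizer {
    // Accept both value sets for hbase.rpc.protection and normalize to the
    // underlying SASL QOP string, as the patch's SaslUtil change intends.
    static String toSaslQop(String configured) {
        switch (configured.trim().toLowerCase(Locale.ROOT)) {
            case "authentication": case "auth":      return "auth";      // auth only
            case "integrity":      case "auth-int":  return "auth-int";  // + integrity
            case "privacy":        case "auth-conf": return "auth-conf"; // + encryption
            default:
                throw new IllegalArgumentException("Unsupported QOP value: " + configured);
        }
    }
}
```

Configuring 'privacy' then yields the same SASL negotiation as the previously documented 'auth-conf', which is why both spellings must be accepted for compatibility.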
[jira] [Commented] (HBASE-12298) Support BB usage in PrefixTree
[ https://issues.apache.org/jira/browse/HBASE-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14768851#comment-14768851 ] Hadoop QA commented on HBASE-12298: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756195/HBASE-12298_7.patch against master branch at commit d2e338181800ae3cef55ddca491901b65259dc7f. ATTACHMENT ID: 12756195 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 32 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15619//testReport/ Release Findbugs (version 2.0.3)warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15619//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15619//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15619//console This message is automatically generated. > Support BB usage in PrefixTree > -- > > Key: HBASE-12298 > URL: https://issues.apache.org/jira/browse/HBASE-12298 > Project: HBase > Issue Type: Sub-task > Components: regionserver, Scanners >Reporter: Anoop Sam John >Assignee: ramkrishna.s.vasudevan > Attachments: HBASE-12298.patch, HBASE-12298_1.patch, > HBASE-12298_2.patch, HBASE-12298_3.patch, HBASE-12298_4 (1).patch, > HBASE-12298_4 (1).patch, HBASE-12298_4 (1).patch, HBASE-12298_4 (1).patch, > HBASE-12298_4 (1).patch, HBASE-12298_4.patch, HBASE-12298_4.patch, > HBASE-12298_4.patch, HBASE-12298_4.patch, HBASE-12298_5.patch, > HBASE-12298_6.patch, HBASE-12298_7.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14221) Reduce the number of time row comparison is done in a Scan
[ https://issues.apache.org/jira/browse/HBASE-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747288#comment-14747288 ] ramkrishna.s.vasudevan commented on HBASE-14221: Ping for reviews. I would like to extend this idea to multiple CFs as well, but I thought I would take that up once this approach is agreed upon. I already tried working on it but ran into some issues with the way fake keys were created. Worth trying though. > Reduce the number of time row comparison is done in a Scan > -- > > Key: HBASE-14221 > URL: https://issues.apache.org/jira/browse/HBASE-14221 > Project: HBase > Issue Type: Sub-task > Components: Scanners >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: HBASE-14221.patch, HBASE-14221_1.patch, > HBASE-14221_1.patch, withmatchingRowspatch.png, withoutmatchingRowspatch.png > > > When we tried to do some profiling with the PE tool we found this. > Currently we do row comparisons in 3 places in a simple Scan case. > 1) ScanQueryMatcher > {code} >int ret = this.rowComparator.compareRows(curCell, cell); > if (!this.isReversed) { > if (ret <= -1) { > return MatchCode.DONE; > } else if (ret >= 1) { > // could optimize this, if necessary? > // Could also be called SEEK_TO_CURRENT_ROW, but this > // should be rare/never happens. > return MatchCode.SEEK_NEXT_ROW; > } > } else { > if (ret <= -1) { > return MatchCode.SEEK_NEXT_ROW; > } else if (ret >= 1) { > return MatchCode.DONE; > } > } > {code} > 2) In StoreScanner next() while starting to scan the row > {code} > if (!scannerContext.hasAnyLimit(LimitScope.BETWEEN_CELLS) || > matcher.curCell == null || > isNewRow || !CellUtil.matchingRow(peeked, matcher.curCell)) { > this.countPerRow = 0; > matcher.setToNewRow(peeked); > } > {code} > Particularly to see if we are in a new row. 
> 3) In HRegion > {code} > scannerContext.setKeepProgress(true); > heap.next(results, scannerContext); > scannerContext.setKeepProgress(tmpKeepProgress); > nextKv = heap.peek(); > moreCellsInRow = moreCellsInRow(nextKv, currentRowCell); > {code} > Here again there are cases where we need to be careful for a MultiCF case. I was trying to solve this for the MultiCF case but there are a lot of cases to handle. But at least for a single CF case I think these comparisons can be reduced. > So for a single CF case in the SQM we are able to find if we have crossed a row using the code pasted above in SQM. That comparison is definitely needed. > Now in case of a single CF the HRegion is going to have only one element in the heap and so the 3rd comparison can surely be avoided if the StoreScanner.next() was over due to MatchCode.DONE caused by SQM. > Coming to the 2nd compareRows that we do in StoreScanner.next() - even that can be avoided if we know that the previous next() call was over due to a new row. Doing all this, I found that compareRows, which was 19% in the profiler, got reduced to 13%. Initially we can solve the single CF case, which can then be extended to MultiCF cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
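The saving described above can be illustrated with a toy model (not HBase code): a naive scan re-compares the row in two layers for every cell, while carrying the first layer's verdict forward halves the comparisons without changing the result.

```java
import java.util.Arrays;
import java.util.List;

class RowCompareDemo {
    static int compares;

    static boolean matchingRow(String a, String b) {
        compares++; // count every row comparison, as a profiler would
        return a.equals(b);
    }

    // Models the current code: SQM compares the row, then StoreScanner.next()
    // repeats the same comparison to decide whether to call setToNewRow().
    static int countRowsDoubleCompare(List<String> cellRows) {
        compares = 0;
        int rows = 0;
        String cur = null;
        for (String r : cellRows) {
            boolean sqmNewRow = (cur == null) || !matchingRow(r, cur); // layer 1 (SQM)
            boolean ssNewRow = (cur == null) || !matchingRow(r, cur);  // layer 2 repeats it
            if (sqmNewRow && ssNewRow) { cur = r; rows++; }
        }
        return rows;
    }

    // Models the proposal: compare once in SQM and let the caller reuse the verdict.
    static int countRowsCachedVerdict(List<String> cellRows) {
        compares = 0;
        int rows = 0;
        String cur = null;
        for (String r : cellRows) {
            boolean newRow = (cur == null) || !matchingRow(r, cur); // single comparison
            if (newRow) { cur = r; rows++; }
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String> cells = Arrays.asList("row1", "row1", "row1", "row2", "row2");
        System.out.println(countRowsDoubleCompare(cells) + " rows, " + compares + " compares");
        System.out.println(countRowsCachedVerdict(cells) + " rows, " + compares + " compares");
    }
}
```

Both variants report the same row count; only the number of matchingRow calls differs, which mirrors the 19% to 13% compareRows drop reported in the comment.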
[jira] [Commented] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747322#comment-14747322 ] Ashish Singhi commented on HBASE-13770: --- It looks like you are not creating the patch against the latest master branch code? > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-v1.patch, HBASE-13770-v2.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-14443) Add request parameter to the TooSlow/TooLarge warn message of RpcServer
Jianwei Cui created HBASE-14443: --- Summary: Add request parameter to the TooSlow/TooLarge warn message of RpcServer Key: HBASE-14443 URL: https://issues.apache.org/jira/browse/HBASE-14443 Project: HBase Issue Type: Improvement Components: rpc Reporter: Jianwei Cui Priority: Minor Fix For: 1.2.1 The RpcServer will log a warn message for a TooSlow or TooLarge request as: {code} logResponse(new Object[]{param}, md.getName(), md.getName() + "(" + param.getClass().getName() + ")", (tooLarge ? "TooLarge" : "TooSlow"), status.getClient(), startTime, processingTime, qTime, responseSize); {code} RpcServer#logResponse will create the warn message as: {code} if (params.length == 2 && server instanceof HRegionServer && params[0] instanceof byte[] && params[1] instanceof Operation) { ... responseInfo.putAll(((Operation) params[1]).toMap()); ... } else if (params.length == 1 && server instanceof HRegionServer && params[0] instanceof Operation) { ... responseInfo.putAll(((Operation) params[0]).toMap()); ... } else { ... } {code} Because the parameter is always a protobuf message, not an instance of Operation, the request parameter will not be added to the warn message. The parameter is helpful for finding the problem; for example, knowing the startRow/endRow is useful for a TooSlow scan. To improve the warn message, we can transform the protobuf request message into the corresponding Operation subclass object via ProtobufUtil, so that it can be added to the warn message. Suggestions and discussion are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
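The proposed improvement can be sketched as follows. ScanParams and toResponseInfo are hypothetical stand-ins, not HBase classes; the real fix would decode the protobuf request (e.g. via ProtobufUtil) into an Operation before building the map.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class SlowRpcWarning {
    // Stand-in for a decoded scan request; the real code would convert the
    // protobuf message into a client-side Operation first.
    static class ScanParams {
        final String startRow, stopRow;
        ScanParams(String startRow, String stopRow) {
            this.startRow = startRow;
            this.stopRow = stopRow;
        }
    }

    // Build the warn-message fields, including the request details that the
    // current logResponse() drops because the param is a raw protobuf message.
    static Map<String, Object> toResponseInfo(Object param, boolean tooLarge) {
        Map<String, Object> info = new LinkedHashMap<>();
        info.put("tag", tooLarge ? "TooLarge" : "TooSlow");
        if (param instanceof ScanParams) {
            ScanParams s = (ScanParams) param;
            info.put("startRow", s.startRow); // the detail the current warning lacks
            info.put("stopRow", s.stopRow);
        } else {
            info.put("param", String.valueOf(param)); // fallback, as today
        }
        return info;
    }
}
```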
[jira] [Created] (HBASE-14444) "Home" link at home page of Master/RegionServer/Thrift is redirecting to jsp file
Pankaj Kumar created HBASE-14444: --- Summary: "Home" link at home page of Master/RegionServer/Thrift is redirecting to jsp file Key: HBASE-14444 URL: https://issues.apache.org/jira/browse/HBASE-14444 Project: HBase Issue Type: Bug Components: UI Reporter: Pankaj Kumar Assignee: Pankaj Kumar Priority: Minor On the home page of the Master/RegionServer/Thrift UI, the "Home" link redirects to a jsp file. The Home link works fine from other pages. We need to keep it the same as the "HBase Logo" link. In MasterStatusTmpl.jamon, "/master-status" should be configured instead of "/" {code} <a href="/">Home</a> {code} In RSStatusTmpl.jamon, "/rs-status" should be configured instead of "/" {code} <a href="/">Home</a> {code} In thrift.jsp, "/thrift.jsp" should be configured instead of "/" {code} <a href="/">Home</a> {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14768874#comment-14768874 ] Ashish Singhi commented on HBASE-13770: --- Then you need to include the branch name in the patch so that Hadoop QA will apply the patch against that branch. For eg: HBASE-13770-0.98.patch > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-v1.patch, HBASE-13770-v2.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-14444) "Home" link at home page of Master/RegionServer/Thrift is redirecting to jsp file
[ https://issues.apache.org/jira/browse/HBASE-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pankaj Kumar resolved HBASE-14444. -- Resolution: Not A Problem Sorry, this happened due to some local changes in my environment. > "Home" link at home page of Master/RegionServer/Thrift is redirecting to jsp > file > - > > Key: HBASE-14444 > URL: https://issues.apache.org/jira/browse/HBASE-14444 > Project: HBase > Issue Type: Bug > Components: UI >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Minor > > On the home page of the Master/RegionServer/Thrift UI, the "Home" link redirects to a > jsp file. The Home link works fine from other pages. > We need to keep it the same as the "HBase Logo" link. > In MasterStatusTmpl.jamon, "/master-status" should be configured instead of > "/" > {code} > <a href="/">Home</a> > {code} > In RSStatusTmpl.jamon, "/rs-status" should be configured instead of "/" > {code} > <a href="/">Home</a> > {code} > In thrift.jsp, "/thrift.jsp" should be configured instead of "/" > {code} > <a href="/">Home</a> > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14437) ArithmeticException in ReplicationInterClusterEndpoint
[ https://issues.apache.org/jira/browse/HBASE-14437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14768955#comment-14768955 ] Hadoop QA commented on HBASE-14437: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756220/HBASE-14437.patch against master branch at commit d2e338181800ae3cef55ddca491901b65259dc7f. ATTACHMENT ID: 12756220 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15621//testReport/ Release Findbugs (version 2.0.3)warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15621//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15621//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15621//console This message is automatically generated. > ArithmeticException in ReplicationInterClusterEndpoint > -- > > Key: HBASE-14437 > URL: https://issues.apache.org/jira/browse/HBASE-14437 > Project: HBase > Issue Type: Bug >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Minor > Attachments: HBASE-14437.patch > > > {code} > 2015-09-15 21:49:36,923 WARN > [ReplicationExecutor-0.replicationSource,1-stobdtserver1,16041,1442333166156.replicationSource.stobdtserver1%2C16041%2C1442333166156.default,1-stobdtserver1,16041,1442333166156] > regionserver.ReplicationSource: > org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint > threw unknown exception:java.lang.ArithmeticException: / by zero > at > org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.replicate(HBaseInterClusterReplicationEndpoint.java:178) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.shipEdits(ReplicationSource.java:906) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:616) > {code} > This happened on a two-node cluster set up with one node acting as master and > the other as the peer. The peer cluster went down and this warning log message started > appearing in the main cluster RS logs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
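A plausible shape for the fix: the endpoint divides the batch of edits among the available replication sinks, and when the whole peer cluster is down the sink count is zero, so the division throws. The method below is an illustrative sketch with a guarded divisor, not the actual HBaseInterClusterReplicationEndpoint code.

```java
class ReplicationBatching {
    // Sketch: splitting edits across replication sinks must tolerate zero sinks
    // (e.g. the entire peer cluster is down) instead of dividing by zero.
    static int batchesPerSink(int numEdits, int numSinks) {
        if (numSinks == 0) {
            return 0; // no sinks: caller should back off and refresh the sink list
        }
        return (numEdits + numSinks - 1) / numSinks; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(batchesPerSink(10, 3)); // prints 4
        System.out.println(batchesPerSink(10, 0)); // prints 0 instead of throwing
    }
}
```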
[jira] [Updated] (HBASE-14411) Fix unit test failures when using multiwal as default WAL provider
[ https://issues.apache.org/jira/browse/HBASE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-14411: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 1.3.0 Status: Resolved (was: Patch Available) Thanks for the patch, Yu. > Fix unit test failures when using multiwal as default WAL provider > -- > > Key: HBASE-14411 > URL: https://issues.apache.org/jira/browse/HBASE-14411 > Project: HBase > Issue Type: Bug >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-14411.branch-1.patch, HBASE-14411.patch, > HBASE-14411_v2.patch > > > If we set hbase.wal.provider to multiwal in > hbase-server/src/test/resources/hbase-site.xml which allows us to use > BoundedRegionGroupingProvider in UT, we will observe below failures in > current code base: > {noformat} > Failed tests: > TestHLogRecordReader>TestWALRecordReader.testPartialRead:164 expected:<1> > but was:<2> > TestHLogRecordReader>TestWALRecordReader.testWALRecordReader:216 > expected:<2> but was:<3> > TestWALRecordReader.testPartialRead:164 expected:<1> but was:<2> > TestWALRecordReader.testWALRecordReader:216 expected:<2> but was:<3> > TestDistributedLogSplitting.testRecoveredEdits:276 edits dir should have > more than a single file in it. 
instead has 1 > TestAtomicOperation.testMultiRowMutationMultiThreads:499 expected:<0> but > was:<1> > TestHRegionServerBulkLoad.testAtomicBulkLoad:307 > Expected: is > but: was > TestLogRolling.testCompactionRecordDoesntBlockRolling:611 Should have WAL; > one table is not flushed expected:<1> but was:<0> > TestLogRolling.testLogRollOnDatanodeDeath:359 null > TestLogRolling.testLogRollOnPipelineRestart:472 Missing datanode should've > triggered a log roll > TestReplicationSourceManager.testLogRoll:237 expected:<6> but was:<7> > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestWALSplit.testCorruptedLogFilesSkipErrorsFalseDoesNotTouchLogs:594 if > skip.errors is false all files should remain in place expected:<11> but > was:<12> > TestWALSplit.testLogsGetArchivedAfterSplit:649 wrong number of files in the > archive log expected:<11> but was:<12> > TestWALSplit.testMovedWALDuringRecovery:810->retryOverHdfsProblem:793 > expected:<11> but was:<12> > TestWALSplit.testRetryOpenDuringRecovery:838->retryOverHdfsProblem:793 > expected:<11> but was:<12> > > TestWALSplitCompressed>TestWALSplit.testCorruptedLogFilesSkipErrorsFalseDoesNotTouchLogs:594 > if skip.errors is false all files should remain in place expected:<11> but > was:<12> > TestWALSplitCompressed>TestWALSplit.testLogsGetArchivedAfterSplit:649 wrong > number of files in the archive log expected:<11> but was:<12> > > TestWALSplitCompressed>TestWALSplit.testMovedWALDuringRecovery:810->TestWALSplit.retryOverHdfsProblem:793 > expected:<11> but was:<12> > > TestWALSplitCompressed>TestWALSplit.testRetryOpenDuringRecovery:838->TestWALSplit.retryOverHdfsProblem:793 > expected:<11> but 
was:<12> > {noformat} > While patch for HBASE-14306 could resolve failures of TestHLogRecordReader, > TestReplicationSourceManager and TestReplicationWALReaderManager, this JIRA > will focus on resolving the others -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maddineni Sukumar updated HBASE-13770: -- Attachment: HBASE-13770-0.98.patch Trying my luck one more time for the "Patch cannot applied" build issue :) > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-0.98.patch, HBASE-13770-v1.patch, > HBASE-13770-v2.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14280) Bulk Upload from HA cluster to remote HA hbase cluster fails
[ https://issues.apache.org/jira/browse/HBASE-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14768974#comment-14768974 ] Ted Yu commented on HBASE-14280: Were checkstyle warnings related to the patch ? > Bulk Upload from HA cluster to remote HA hbase cluster fails > > > Key: HBASE-14280 > URL: https://issues.apache.org/jira/browse/HBASE-14280 > Project: HBase > Issue Type: Bug > Components: hadoop2, regionserver >Affects Versions: 0.98.4 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Minor > Labels: easyfix, patch > Attachments: HBASE-14280_v1.0.patch, HBASE-14280_v2.patch, > HBASE-14280_v3.patch > > > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): > java.io.IOException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2113) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.IllegalArgumentException: Wrong FS: > hdfs://ha-aggregation-nameservice1/hbase_upload/82c89692-6e78-46ef-bbea-c9e825318bfe/A/131358d641c69d6c34b803c187b0, > expected: hdfs://ha-hbase-nameservice1 > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136) > at > org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132) > at > 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1423) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:372) > at > org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:451) > at > org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:750) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4894) > at > org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:4799) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3377) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29996) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078) > ... 4 more > at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1498) > at > org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684) > at > org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.bulkLoadHFile(ClientProtos.java:29276) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.bulkLoadHFile(ProtobufUtil.java:1548) > ... 11 more -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14431) AsyncRpcClient#removeConnection() never removes connection from connections pool if server fails
[ https://issues.apache.org/jira/browse/HBASE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Samir Ahmic updated HBASE-14431: Attachment: HBASE-14431.patch Here is a patch fixing this issue. I have noticed that there is a roughly 50s pause in the client between detecting that the session has been reset (killing the RS) and removing the connection to this server from the connections pool. I will probably open a new ticket addressing this once I dig up more info on why the pause is so long. > AsyncRpcClient#removeConnection() never removes connection from connections > pool if server fails > > > Key: HBASE-14431 > URL: https://issues.apache.org/jira/browse/HBASE-14431 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 2.0.0, 1.0.2, 1.1.2 >Reporter: Samir Ahmic >Assignee: Samir Ahmic >Priority: Critical > Attachments: HBASE-14431.patch > > > I was playing with the master branch in distributed mode (3 rs + master + > backup_master) and noticed strange behavior when I was testing this sequence > of events on a single rs: /kill/start/run_balancer while a client was writing > data to the cluster (LoadTestTool). > I noticed that LTT fails with the following: > {code} > 2015-09-09 11:05:58,364 INFO [main] client.AsyncProcess: #2, waiting for > some tasks to finish. 
Expected max=0, tasksInProgress=35 > Exception in thread "main" > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: BindException: 1 time, > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228) > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208) > at > org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1697) > at > org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:211) > {code} > After some digging and adding some more logging in code i have notice that > following condition in {code}AsyncRpcClient.removeConnection(AsyncRpcChannel > connection) {code} is never true: > {code} > if (connectionInPool == connection) { > {code} > causing that {code}AsyncRpcChannel{code} connection is never removed from > {code}connections{code} pool in case rs fails. > After changing this condition to: > {code} > if (connectionInPool.address.equals(connection.address)) { > {code} > issue was resolved and client was removing failed server from connections > pool. > I will attach patch after running some more tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
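The root cause described above reduces to an identity-versus-value comparison. Below is a minimal, self-contained sketch (hypothetical names; the real pool maps addresses to AsyncRpcChannel objects): after a region server restart, the pool holds a *new* channel object for the same address, so the `==` check can never fire, while comparing addresses still matches.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PoolRemoval {
    // Stand-in for AsyncRpcChannel: only the address matters for this sketch.
    static final class Channel {
        final String address;
        Channel(String address) { this.address = address; }
    }

    // Buggy variant: removes only if the exact same object is pooled.
    static boolean removeByIdentity(Map<String, Channel> pool, Channel failed) {
        Channel pooled = pool.get(failed.address);
        if (pooled == failed) {            // never true after a reconnect
            pool.remove(failed.address);
            return true;
        }
        return false;
    }

    // Fixed variant, in the spirit of the patch: remove whenever addresses match.
    static boolean removeByAddress(Map<String, Channel> pool, Channel failed) {
        Channel pooled = pool.get(failed.address);
        if (pooled != null && pooled.address.equals(failed.address)) {
            pool.remove(failed.address);
            return true;
        }
        return false;
    }
}
```

With a fresh `Channel` object for a pooled address, `removeByIdentity` leaves the stale entry in place while `removeByAddress` evicts it, which is the behavior change the patch proposes.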
[jira] [Commented] (HBASE-14431) AsyncRpcClient#removeConnection() never removes connection from connections pool if server fails
[ https://issues.apache.org/jira/browse/HBASE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790514#comment-14790514 ] Ted Yu commented on HBASE-14431: lgtm nit: connection.hashCode() is computed twice. You can save the return value in a local variable. > AsyncRpcClient#removeConnection() never removes connection from connections > pool if server fails > > > Key: HBASE-14431 > URL: https://issues.apache.org/jira/browse/HBASE-14431 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 2.0.0, 1.0.2, 1.1.2 >Reporter: Samir Ahmic >Assignee: Samir Ahmic >Priority: Critical > Attachments: HBASE-14431.patch > > > I was playing with master branch in distributed mode (3 rs + master + > backup_master) and notice strange behavior when i was testing this sequence > of events on single rs: /kill/start/run_balancer while client was writing > data to cluster (LoadTestTool). > I have notice that LTT fails with following: > {code} > 2015-09-09 11:05:58,364 INFO [main] client.AsyncProcess: #2, waiting for > some tasks to finish. 
Expected max=0, tasksInProgress=35 > Exception in thread "main" > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: BindException: 1 time, > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228) > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208) > at > org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1697) > at > org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:211) > {code} > After some digging and adding some more logging in code i have notice that > following condition in {code}AsyncRpcClient.removeConnection(AsyncRpcChannel > connection) {code} is never true: > {code} > if (connectionInPool == connection) { > {code} > causing that {code}AsyncRpcChannel{code} connection is never removed from > {code}connections{code} pool in case rs fails. > After changing this condition to: > {code} > if (connectionInPool.address.equals(connection.address)) { > {code} > issue was resolved and client was removing failed server from connections > pool. > I will attach patch after running some more tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14431) AsyncRpcClient#removeConnection() never removes connection from connections pool if server fails
[ https://issues.apache.org/jira/browse/HBASE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Samir Ahmic updated HBASE-14431: Status: Patch Available (was: Open) > AsyncRpcClient#removeConnection() never removes connection from connections > pool if server fails > > > Key: HBASE-14431 > URL: https://issues.apache.org/jira/browse/HBASE-14431 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 1.1.2, 1.0.2, 2.0.0 >Reporter: Samir Ahmic >Assignee: Samir Ahmic >Priority: Critical > Attachments: HBASE-14431.patch > > > I was playing with master branch in distributed mode (3 rs + master + > backup_master) and notice strange behavior when i was testing this sequence > of events on single rs: /kill/start/run_balancer while client was writing > data to cluster (LoadTestTool). > I have notice that LTT fails with following: > {code} > 2015-09-09 11:05:58,364 INFO [main] client.AsyncProcess: #2, waiting for > some tasks to finish. Expected max=0, tasksInProgress=35 > Exception in thread "main" > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: BindException: 1 time, > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228) > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208) > at > org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1697) > at > org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:211) > {code} > After some digging and adding some more logging in code i have notice that > following condition in {code}AsyncRpcClient.removeConnection(AsyncRpcChannel > connection) {code} is never true: > {code} > if (connectionInPool == connection) { > {code} > causing that {code}AsyncRpcChannel{code} connection is never removed from > {code}connections{code} pool in case rs fails. 
> After changing this condition to: > {code} > if (connectionInPool.address.equals(connection.address)) { > {code} > issue was resolved and client was removing failed server from connections > pool. > I will attach patch after running some more tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14443) Add request parameter to the TooSlow/TooLarge warn message of RpcServer
[ https://issues.apache.org/jira/browse/HBASE-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790528#comment-14790528 ] stack commented on HBASE-14443: --- Anything to make this stuff more useful is welcome (+1 on transform) > Add request parameter to the TooSlow/TooLarge warn message of RpcServer > --- > > Key: HBASE-14443 > URL: https://issues.apache.org/jira/browse/HBASE-14443 > Project: HBase > Issue Type: Improvement > Components: rpc >Reporter: Jianwei Cui >Priority: Minor > Fix For: 1.2.1 > > > The RpcServer will log a warn message for a TooSlow or TooLarge request as: > {code} > logResponse(new Object[]{param}, > md.getName(), md.getName() + "(" + param.getClass().getName() + > ")", > (tooLarge ? "TooLarge" : "TooSlow"), > status.getClient(), startTime, processingTime, qTime, > responseSize); > {code} > The RpcServer#logResponse will create the warn message as: > {code} > if (params.length == 2 && server instanceof HRegionServer && > params[0] instanceof byte[] && > params[1] instanceof Operation) { > ... > responseInfo.putAll(((Operation) params[1]).toMap()); > ... > } else if (params.length == 1 && server instanceof HRegionServer && > params[0] instanceof Operation) { > ... > responseInfo.putAll(((Operation) params[0]).toMap()); > ... > } else { > ... > } > {code} > Because the parameter is always a protobuf message, not an instance of > Operation, the request parameter will not be added to the warn message. The > parameter is helpful for tracking down the problem; for example, knowing the > startRow/endRow is useful for a TooSlow scan. To improve the warn message, we > can transform the protobuf request message into the corresponding Operation > subclass object via ProtobufUtil, so that it can be added to the warn message. > Suggestions and discussion are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
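The dispatch problem quoted above can be sketched with stand-in types (hypothetical; the real `Operation` and `ProtobufUtil` live in HBase): a protobuf request carries the useful detail but does not implement `Operation`, so only an explicit transform makes that detail reach the warn message.

```java
import java.util.HashMap;
import java.util.Map;

public class SlowLogSketch {
    // Stand-in for o.a.h.hbase.client.Operation.
    interface Operation { Map<String, Object> toMap(); }

    // Stand-in for a generated protobuf request: has details, is not an Operation.
    static final class ScanRequest {
        final byte[] startRow = {1}, stopRow = {9};
    }

    // Hypothetical adapter in the spirit of the proposed ProtobufUtil transform.
    static Operation toOperation(ScanRequest req) {
        return () -> {
            Map<String, Object> m = new HashMap<>();
            m.put("startRow", req.startRow.length);
            m.put("stopRow", req.stopRow.length);
            return m;
        };
    }

    // Simplified shape of the instanceof dispatch in logResponse.
    static Map<String, Object> logResponse(Object param) {
        Map<String, Object> info = new HashMap<>();
        if (param instanceof Operation) {
            info.putAll(((Operation) param).toMap()); // only reached after the transform
        } else {
            info.put("param", param.getClass().getSimpleName()); // what happens today
        }
        return info;
    }
}
```

Passing the raw request hits the uninformative `else` branch; wrapping it first surfaces the scan boundaries, which is the improvement the ticket proposes.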
[jira] [Updated] (HBASE-14433) Set down the client executor core thread count from 256 in tests
[ https://issues.apache.org/jira/browse/HBASE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-14433: -- Attachment: 14433v4.reapply.txt Here is what I reapplied under the rubric of this issue. It just changes the config for tests. I applied to 1.2+. > Set down the client executor core thread count from 256 in tests > > > Key: HBASE-14433 > URL: https://issues.apache.org/jira/browse/HBASE-14433 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: stack >Assignee: stack > Fix For: 2.0.0, 1.2.0, 1.3.0 > > Attachments: 14433 (1).txt, 14433.txt, 14433v2.txt, 14433v3.txt, > 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, > 14433v4.reapply.txt > > > HBASE-10449 upped our core count from 0 to 256 (max is 256). Looking in a > recent test run core dump, I see up to 256 threads per client and all are > idle. At a minimum it makes it hard reading test thread dumps. Trying to > learn more about why we went a core of 256 over in HBASE-10449. Meantime will > try setting down configs for test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14433) Set down the client executor core thread count from 256 in tests
[ https://issues.apache.org/jira/browse/HBASE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-14433: -- Fix Version/s: 1.3.0 1.2.0 > Set down the client executor core thread count from 256 in tests > > > Key: HBASE-14433 > URL: https://issues.apache.org/jira/browse/HBASE-14433 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: stack >Assignee: stack > Fix For: 2.0.0, 1.2.0, 1.3.0 > > Attachments: 14433 (1).txt, 14433.txt, 14433v2.txt, 14433v3.txt, > 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, > 14433v4.reapply.txt > > > HBASE-10449 upped our core count from 0 to 256 (max is 256). Looking in a > recent test run core dump, I see up to 256 threads per client and all are > idle. At a minimum it makes it hard reading test thread dumps. Trying to > learn more about why we went a core of 256 over in HBASE-10449. Meantime will > try setting down configs for test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14433) Set down the client executor core thread count from 256 to number of processors
[ https://issues.apache.org/jira/browse/HBASE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790588#comment-14790588 ] stack commented on HBASE-14433: --- Ok. Reverting the patch I applied last night because discussion ongoing over in HBASE-10449. I'm instead going to just set limits for tests only. > Set down the client executor core thread count from 256 to number of > processors > --- > > Key: HBASE-14433 > URL: https://issues.apache.org/jira/browse/HBASE-14433 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: stack >Assignee: stack > Fix For: 2.0.0 > > Attachments: 14433 (1).txt, 14433.txt, 14433v2.txt, 14433v3.txt, > 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt > > > HBASE-10449 upped our core count from 0 to 256 (max is 256). Looking in a > recent test run core dump, I see up to 256 threads per client and all are > idle. At a minimum it makes it hard reading test thread dumps. Trying to > learn more about why we went a core of 256 over in HBASE-10449. Meantime will > try setting down configs for test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-10449) Wrong execution pool configuration in HConnectionManager
[ https://issues.apache.org/jira/browse/HBASE-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790587#comment-14790587 ] stack commented on HBASE-10449: --- Thanks [~nkeywal] bq. We should not see 256 threads, because they should expire already Maybe they spin up inside the keepalive time of 60 seconds. bq. We will still have 60 threads, because each new request will create a new thread until we reach coreSize Well, I was thinking that we'd go to core size -- say # of cores -- and then, at one request a second, we'd just stay at core size because there would be a free thread when the request-per-second came in (assuming a request took a good deal less than a second). Let me look at HBASE-11590. What I saw was each client with hundreds -- up to 256 on one -- of threads, all in WAITING, as follows: {code} "hconnection-0x3065a6a9-shared--pool13-t247" daemon prio=10 tid=0x7f31c1ab2000 nid=0x7718 waiting on condition [0x7f2f9ecec000] java.lang.Thread.State: TIMED_WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x0007f841b388> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) {code} ... usually in TestReplicasClient. Here is an example: https://builds.apache.org/view/H-L/view/HBase/job/PreCommit-HBASE-Build/15581/consoleText See the zombies at the end. I also have second thoughts on HBASE-14433. I am going to change it so we set config for tests only. 
We need to do more work before we can set the core threads down from max, is what I am thinking. Thanks [~nkeywal], I'll look at HBASE-11590. Didn't we have a mock server somewhere such that we could stand up a client with no friction and watch it in operation? I thought we'd made such a beast > Wrong execution pool configuration in HConnectionManager > > > Key: HBASE-10449 > URL: https://issues.apache.org/jira/browse/HBASE-10449 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 >Reporter: Nicolas Liochon >Assignee: Nicolas Liochon >Priority: Critical > Fix For: 0.98.0, 0.96.2, 0.99.0 > > Attachments: HBASE-10449.v1.patch > > > There is a confusion in the configuration of the pool. The attached patch > fixes this. This may change the client performances, as we were using a > single thread. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
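The executor behaviour being debated is observable with a plain `ThreadPoolExecutor` (the parameters 4/256/3s mirror the numbers in this thread; this is an illustration, not HBase's actual pool wiring): with an unbounded work queue, a new thread is created per task only until `corePoolSize` is reached, so a core of 256 lets a burst park up to 256 idle threads for the keepalive period.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Submit `tasks` no-op tasks and report how many worker threads were created.
    static int poolSizeAfter(int core, int tasks) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            core, 256, 3, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> { });       // below core: always spawns a thread
        }
        int size = pool.getPoolSize();     // threads created so far
        pool.shutdown();
        return size;
    }
}
```

With core 4, a burst of 20 tasks creates only 4 threads and queues the rest; with core 256 the same burst creates 20 threads that then sit idle, which matches the parked `pool13-t247`-style threads in the dump above. `allowCoreThreadTimeOut(true)` is the usual escape hatch if core threads should also expire.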
[jira] [Commented] (HBASE-14278) Fix NPE that is showing up since HBASE-14274 went in
[ https://issues.apache.org/jira/browse/HBASE-14278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790710#comment-14790710 ] stack commented on HBASE-14278: --- kalashnikov:hbase.git.commit stack$ python dev-support/findHangingTests.py https://builds.apache.org/job/PreCommit-HBASE-Build/15617/consoleText Fetching the console output from the URL Printing hanging tests Printing Failing tests Failing test : org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat Failing test : org.apache.hadoop.hbase.client.TestReplicaWithCluster I see TestReplicaWithCluster is showing up as a hang. I'll take a look. The other failure looks unrelated. I'll look at that too. +1 on patch. This emission is ugly, currently spewing all over test runs. Thanks [~eclark] On commit, shove e.getMessage on the end of this log just so we can be sure it's that old faithful, the NPE: 76 } catch (Exception e) { 77// Ignored. If this errors out it means that someone is double 78// closing the region source and the region is already nulled out. 79LOG.info("Error trying to remove " + toRemove + " from " + this.getClass().getSimpleName()); 80 } > Fix NPE that is showing up since HBASE-14274 went in > > > Key: HBASE-14278 > URL: https://issues.apache.org/jira/browse/HBASE-14278 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 2.0.0, 1.2.0, 1.3.0 >Reporter: stack >Assignee: Elliott Clark > Fix For: 2.0.0, 1.2.0, 1.3.0 > > Attachments: HBASE-14278-v1.patch, HBASE-14278-v2.patch, > HBASE-14278-v3.patch, HBASE-14278-v4.patch, HBASE-14278-v5.patch, > HBASE-14278.patch > > > Saw this in TestDistributedLogSplitting after HBASE-14274 was applied. 
> {code} > 119113 2015-08-20 15:31:10,704 WARN [HBase-Metrics2-1] > impl.MetricsConfig(124): Cannot locate configuration: tried > hadoop-metrics2-hbase.properties,hadoop-metrics2.properties > 119114 2015-08-20 15:31:10,710 ERROR [HBase-Metrics2-1] > lib.MethodMetric$2(118): Error invoking method getBlocksTotal > 119115 java.lang.reflect.InvocationTargetException > 119116 › at sun.reflect.GeneratedMethodAccessor72.invoke(Unknown Source) > 119117 › at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 119118 › at java.lang.reflect.Method.invoke(Method.java:606) > 119119 › at > org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111) > 119120 › at > org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144) > 119121 › at > org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387) > 119122 › at > org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79) > 119123 › at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:195) > 119124 › at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172) > 119125 › at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151) > 119126 › at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333) > 119127 › at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319) > 119128 › at > com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) > 119129 › at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57) > 119130 › at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:221) > 119131 › at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:96) > 119132 › at > 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:245) > 119133 › at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$1.postStart(MetricsSystemImpl.java:229) > 119134 › at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source) > 119135 › at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 119136 › at java.lang.reflect.Method.invoke(Method.java:606) > 119137 › at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$3.invoke(MetricsSystemImpl.java:290) > 119138 › at com.sun.proxy.$Proxy13.postStart(Unknown Source) > 119139 › at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:185) > 119140 › at > org.apache.hadoop.metrics2.impl.JmxCacheBuster$JmxCacheBusterRunnable.run(JmxCacheBuster.java:81) > 119141 › at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > 119142 › at
[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer
[ https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790723#comment-14790723 ] stack commented on HBASE-12751: --- kalashnikov:hbase.git stack$ python ./dev-support/findHangingTests.py https://builds.apache.org/job/PreCommit-HBASE-Build/15616//consoleText Fetching the console output from the URL Printing hanging tests Hanging test : org.apache.hadoop.hbase.TestIOFencing Hanging test : org.apache.hadoop.hbase.master.TestDistributedLogSplitting Hanging test : org.apache.hadoop.hbase.master.procedure.TestServerCrashProcedure Hanging test : org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDistributedLogReplay Printing Failing tests Failing test : org.apache.hadoop.hbase.client.TestReplicasClient Let me look into these. > Allow RowLock to be reader writer > - > > Key: HBASE-12751 > URL: https://issues.apache.org/jira/browse/HBASE-12751 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0, 1.3.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.3.0 > > Attachments: 12751.rebased.v25.txt, 12751.rebased.v26.txt, > 12751.rebased.v26.txt, 12751.rebased.v27.txt, 12751.rebased.v29.txt, > 12751.rebased.v31.txt, 12751.rebased.v32.txt, 12751.rebased.v32.txt, > 12751.rebased.v33.txt, 12751.rebased.v34.txt, 12751.rebased.v35.txt, > 12751.rebased.v35.txt, 12751.rebased.v35.txt, 12751.v37.txt, 12751.v38.txt, > 12751v22.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, 12751v23.txt, > 12751v36.txt, HBASE-12751-v1.patch, HBASE-12751-v10.patch, > HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, > HBASE-12751-v13.patch, HBASE-12751-v14.patch, HBASE-12751-v15.patch, > HBASE-12751-v16.patch, HBASE-12751-v17.patch, HBASE-12751-v18.patch, > HBASE-12751-v19 (1).patch, HBASE-12751-v19.patch, HBASE-12751-v2.patch, > HBASE-12751-v20.patch, HBASE-12751-v20.patch, HBASE-12751-v21.patch, > HBASE-12751-v3.patch, HBASE-12751-v4.patch, 
HBASE-12751-v5.patch, > HBASE-12751-v6.patch, HBASE-12751-v7.patch, HBASE-12751-v8.patch, > HBASE-12751-v9.patch, HBASE-12751.patch > > > Right now every write operation grabs a row lock. This is to prevent values > from changing during a read modify write operation (increment or check and > put). However it limits parallelism in several different scenarios. > If there are several puts to the same row but different columns or stores > then this is very limiting. > If there are puts to the same column then mvcc number should ensure a > consistent ordering. So locking is not needed. > However locking for check and put or increment is still needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14411) Fix unit test failures when using multiwal as default WAL provider
[ https://issues.apache.org/jira/browse/HBASE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790724#comment-14790724 ] Hudson commented on HBASE-14411: SUCCESS: Integrated in HBase-1.3-IT #160 (See [https://builds.apache.org/job/HBase-1.3-IT/160/]) HBASE-14411 Fix unit test failures when using multiwal as default WAL provider (Yu Li) (tedyu: rev 0452ba09b53fb450c913811b77d74b6035b40ce3) * hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DefaultWALProvider.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java * hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplit.java * hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java > Fix unit test failures when using multiwal as default WAL provider > -- > > Key: HBASE-14411 > URL: https://issues.apache.org/jira/browse/HBASE-14411 > Project: HBase > Issue Type: Bug >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-14411.branch-1.patch, HBASE-14411.patch, > HBASE-14411_v2.patch > > > If we set hbase.wal.provider to multiwal in > hbase-server/src/test/resources/hbase-site.xml which allows us to use > BoundedRegionGroupingProvider in UT, we will observe below failures in > current code base: > {noformat} > Failed tests: > TestHLogRecordReader>TestWALRecordReader.testPartialRead:164 expected:<1> > but was:<2> > TestHLogRecordReader>TestWALRecordReader.testWALRecordReader:216 > expected:<2> but was:<3> > TestWALRecordReader.testPartialRead:164 expected:<1> but was:<2> > TestWALRecordReader.testWALRecordReader:216 expected:<2> but was:<3> > TestDistributedLogSplitting.testRecoveredEdits:276 edits dir should have > more than a single file in it. 
instead has 1 > TestAtomicOperation.testMultiRowMutationMultiThreads:499 expected:<0> but > was:<1> > TestHRegionServerBulkLoad.testAtomicBulkLoad:307 > Expected: is > but: was > TestLogRolling.testCompactionRecordDoesntBlockRolling:611 Should have WAL; > one table is not flushed expected:<1> but was:<0> > TestLogRolling.testLogRollOnDatanodeDeath:359 null > TestLogRolling.testLogRollOnPipelineRestart:472 Missing datanode should've > triggered a log roll > TestReplicationSourceManager.testLogRoll:237 expected:<6> but was:<7> > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestWALSplit.testCorruptedLogFilesSkipErrorsFalseDoesNotTouchLogs:594 if > skip.errors is false all files should remain in place expected:<11> but > was:<12> > TestWALSplit.testLogsGetArchivedAfterSplit:649 wrong number of files in the > archive log expected:<11> but was:<12> > TestWALSplit.testMovedWALDuringRecovery:810->retryOverHdfsProblem:793 > expected:<11> but was:<12> > TestWALSplit.testRetryOpenDuringRecovery:838->retryOverHdfsProblem:793 > expected:<11> but was:<12> > > TestWALSplitCompressed>TestWALSplit.testCorruptedLogFilesSkipErrorsFalseDoesNotTouchLogs:594 > if skip.errors is false all files should remain in place expected:<11> but > was:<12> > TestWALSplitCompressed>TestWALSplit.testLogsGetArchivedAfterSplit:649 wrong > number of files in the archive log expected:<11> but was:<12> > > TestWALSplitCompressed>TestWALSplit.testMovedWALDuringRecovery:810->TestWALSplit.retryOverHdfsProblem:793 > expected:<11> but was:<12> > > TestWALSplitCompressed>TestWALSplit.testRetryOpenDuringRecovery:838->TestWALSplit.retryOverHdfsProblem:793 > expected:<11> but 
was:<12> > {noformat} > While patch for HBASE-14306 could resolve failures of TestHLogRecordReader, > TestReplicationSourceManager and TestReplicationWALReaderManager, this JIRA > will focus on resolving the others -- This message was sent by Atlassian JIRA (v6.3.4#6332)
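The reproduction step described in the report — making multiwal the default WAL provider for the test suite — would amount to a property like the following in hbase-server/src/test/resources/hbase-site.xml (a sketch; only the property name and value come from the quoted description):

```xml
<!-- Sketch: run unit tests with multiwal as the default WAL provider -->
<property>
  <name>hbase.wal.provider</name>
  <value>multiwal</value>
</property>
```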
[jira] [Commented] (HBASE-14334) Move Memcached block cache in to it's own optional module.
[ https://issues.apache.org/jira/browse/HBASE-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790729#comment-14790729 ] stack commented on HBASE-14334: --- +1 On commit, add more to this: HBase external block cache. The above is all the doc I'd see this module getting so say something about when it'd be used and how to enable it. Replicate as the release note on this issue. > Move Memcached block cache in to it's own optional module. > -- > > Key: HBASE-14334 > URL: https://issues.apache.org/jira/browse/HBASE-14334 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.2.0 > > Attachments: HBASE-14334.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13770) Programmatic JAAS configuration option for secure zookeeper may be broken
[ https://issues.apache.org/jira/browse/HBASE-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790641#comment-14790641 ] Hadoop QA commented on HBASE-13770: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756251/HBASE-13770-0.98.patch against 0.98 branch at commit d2e338181800ae3cef55ddca491901b65259dc7f. ATTACHMENT ID: 12756251 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 23 warning messages. {color:red}-1 checkstyle{color}. The applied patch generated 3873 checkstyle errors (more than the master's current 3869 errors). {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: + public static final String ZK_CLIENT_KERBEROS_PRINCIPLE = "hbase.zookeeper.client.kerberos.principal"; + public static final String ZK_SERVER_KERBEROS_PRINCIPLE = "hbase.zookeeper.server.kerberos.principal"; {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15623//testReport/ Release Findbugs (version 2.0.3)warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15623//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15623//artifact/patchprocess/checkstyle-aggregate.html Javadoc warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15623//artifact/patchprocess/patchJavadocWarnings.txt Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15623//console This message is automatically generated. > Programmatic JAAS configuration option for secure zookeeper may be broken > - > > Key: HBASE-13770 > URL: https://issues.apache.org/jira/browse/HBASE-13770 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13, 1.2.0 >Reporter: Andrew Purtell >Assignee: Maddineni Sukumar > Fix For: 0.98.13 > > Attachments: HBASE-13770-0.98.patch, HBASE-13770-v1.patch, > HBASE-13770-v2.patch > > > While verifying the patch fix for HBASE-13768 we were unable to successfully > test the programmatic JAAS configuration option for secure ZooKeeper > integration. Unclear if that was due to a bug or incorrect test configuration. > Update the security section of the online book with clear instructions for > setting up the programmatic JAAS configuration option for secure ZooKeeper > integration. > Verify it works. > Fix as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
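For readers unfamiliar with the feature under test, a programmatic JAAS configuration for the secure ZooKeeper client looks roughly like the sketch below: instead of pointing the JVM at a jaas.conf file, a `javax.security.auth.login.Configuration` subclass answers for the "Client" login context that the ZooKeeper client looks up. This is a generic illustration, not the wiring in the attached patch; the principal and keytab values are placeholders.

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public class ZkJaasSketch extends Configuration {
    private final String principal;
    private final String keytab;

    public ZkJaasSketch(String principal, String keytab) {
        this.principal = principal;
        this.keytab = keytab;
    }

    @Override
    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        if (!"Client".equals(name)) {
            return null;   // only answer for the ZooKeeper client context
        }
        Map<String, String> opts = new HashMap<>();
        opts.put("useKeyTab", "true");
        opts.put("keyTab", keytab);
        opts.put("principal", principal);
        opts.put("storeKey", "true");
        return new AppConfigurationEntry[] {
            new AppConfigurationEntry(
                "com.sun.security.auth.module.Krb5LoginModule",
                AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                opts)
        };
    }
}
```

It would typically be installed with `Configuration.setConfiguration(new ZkJaasSketch(principal, keytab))` before the ZooKeeper client connects; verifying that this path actually works is exactly what this issue is about.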
[jira] [Updated] (HBASE-14433) Set down the client executor core thread count from 256 in tests
[ https://issues.apache.org/jira/browse/HBASE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-14433: -- Release Note: Tests run with client executors that have core thread count of 4 and a keepalive of 3 seconds. They used to default to 256 core threads and 60 seconds for keepalive. (was: Change the client executor core thread count to be number of processors instead of 256: i.e. the equivalent of the maximum threads allowed on client. The config to set it back to 256 or any other value is "hbase.hconnection.threads.core". Also set it so core is set to default 4 threads in client core in tests (and keepalive is downed from a minute to 3 seconds).) Summary: Set down the client executor core thread count from 256 in tests (was: Set down the client executor core thread count from 256 to number of processors) > Set down the client executor core thread count from 256 in tests > > > Key: HBASE-14433 > URL: https://issues.apache.org/jira/browse/HBASE-14433 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: stack >Assignee: stack > Fix For: 2.0.0 > > Attachments: 14433 (1).txt, 14433.txt, 14433v2.txt, 14433v3.txt, > 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt > > > HBASE-10449 upped our core count from 0 to 256 (max is 256). Looking in a > recent test run core dump, I see up to 256 threads per client and all are > idle. At a minimum it makes it hard reading test thread dumps. Trying to > learn more about why we went a core of 256 over in HBASE-10449. Meantime will > try setting down configs for test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14221) Reduce the number of times row comparison is done in a Scan
[ https://issues.apache.org/jira/browse/HBASE-14221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790692#comment-14790692 ] stack commented on HBASE-14221: --- bq. But at least for a single CF case I think these comparisons can be reduced. How does this extend to the MultiCF case? So, about 10% difference for this added complexity? @larsh You are probably interested in this. Why the need for two flags? Why is an isSingleColumnFamily test not enough? When would we have a single store heap scanner while the joined heap has more than one? // Indicates if the storeHeap is formed of only one StoreScanner boolean singleStoreScannerHeap = false; // Indicates if the joinedHeap is formed of only one StoreScanner. boolean singleStoreScannerJoinedHeap = false; Why add a flag here? boolean moreValues = populateResult(results, this.joinedHeap, scannerContext, joinedContinuationRow, singleStoreScannerJoinedHeap); Why not just have the flag be in the scanner context? > Reduce the number of times row comparison is done in a Scan > -- > > Key: HBASE-14221 > URL: https://issues.apache.org/jira/browse/HBASE-14221 > Project: HBase > Issue Type: Sub-task > Components: Scanners >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: HBASE-14221.patch, HBASE-14221_1.patch, > HBASE-14221_1.patch, withmatchingRowspatch.png, withoutmatchingRowspatch.png > > > When we tried to do some profiling with the PE tool we found this. > Currently we do row comparisons in 3 places in a simple Scan case. > 1) ScanQueryMatcher > {code} > int ret = this.rowComparator.compareRows(curCell, cell); > if (!this.isReversed) { > if (ret <= -1) { > return MatchCode.DONE; > } else if (ret >= 1) { > // could optimize this, if necessary? > // Could also be called SEEK_TO_CURRENT_ROW, but this > // should be rare/never happens. 
> return MatchCode.SEEK_NEXT_ROW; > } > } else { > if (ret <= -1) { > return MatchCode.SEEK_NEXT_ROW; > } else if (ret >= 1) { > return MatchCode.DONE; > } > } > {code} > 2) In StoreScanner next() while starting to scan the row > {code} > if (!scannerContext.hasAnyLimit(LimitScope.BETWEEN_CELLS) || > matcher.curCell == null || > isNewRow || !CellUtil.matchingRow(peeked, matcher.curCell)) { > this.countPerRow = 0; > matcher.setToNewRow(peeked); > } > {code} > Particularly to see if we are in a new row. > 3) In HRegion > {code} > scannerContext.setKeepProgress(true); > heap.next(results, scannerContext); > scannerContext.setKeepProgress(tmpKeepProgress); > nextKv = heap.peek(); > moreCellsInRow = moreCellsInRow(nextKv, currentRowCell); > {code} > Here again there are cases where we need to be careful in the MultiCF case. I was > trying to solve this for the MultiCF case but there are a lot of cases to > solve. But at least for a single CF case I think these comparisons can be > reduced. > So for a single CF case, in the SQM we are able to find if we have crossed a > row using the code pasted above. That comparison is definitely needed. > Now in the case of a single CF the HRegion is going to have only one element in > the heap, so the 3rd comparison can surely be avoided if > StoreScanner.next() finished due to MatchCode.DONE from the SQM. > Coming to the 2nd compareRows that we do in StoreScanner.next() - even that > can be avoided if we know that the previous next() call finished due to a new > row. Doing all this I found that compareRows, which was 19% in the profiler, > got reduced to 13%. Initially we can solve the single CF case, which can then > be extended to MultiCF cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
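The redundancy described in the issue can be sketched outside HBase. The following is a toy model, not HBase code (all names are hypothetical): the "naive" path repeats the row comparison at two layers for every cell, while the single-CF optimization reuses the matcher's verdict, halving comparator calls exactly as the description argues.

```java
public class SingleCfScanSketch {
    static int comparisons = 0;

    static boolean sameRow(String a, String b) {
        comparisons++; // count comparator invocations, as a profiler would
        return a.equals(b);
    }

    // Naive: the matcher-level check and the region-level "more cells in
    // this row?" check each compare the row keys independently.
    static int countRowsNaive(String[] cellRows) {
        comparisons = 0;
        int rows = cellRows.length == 0 ? 0 : 1;
        for (int i = 1; i < cellRows.length; i++) {
            boolean matcherSaysNewRow = !sameRow(cellRows[i - 1], cellRows[i]);
            boolean regionSaysNewRow  = !sameRow(cellRows[i - 1], cellRows[i]); // redundant
            if (matcherSaysNewRow && regionSaysNewRow) rows++;
        }
        return rows;
    }

    // Single-CF optimization: there is only one store scanner in the heap,
    // so the matcher's verdict can be propagated instead of re-compared.
    static int countRowsOptimized(String[] cellRows) {
        comparisons = 0;
        int rows = cellRows.length == 0 ? 0 : 1;
        for (int i = 1; i < cellRows.length; i++) {
            if (!sameRow(cellRows[i - 1], cellRows[i])) rows++; // one comparison, reused
        }
        return rows;
    }
}
```

Both paths count the same number of rows; only the comparator-call count differs, which mirrors the 19% → 13% profiler observation in spirit.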
[jira] [Commented] (HBASE-10449) Wrong execution pool configuration in HConnectionManager
[ https://issues.apache.org/jira/browse/HBASE-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790660#comment-14790660 ] Nicolas Liochon commented on HBASE-10449: - > I was thinking that we'd go to core size – say # of cores – and then if one > request a second, we'd just stay at core size because there would be a free > thread when the request-per-second came in (assuming request took a good deal > < a second). I expect that if we have more than coreSize calls in timeout (256 vs 60 seconds in our case) then we always have coreSize threads. > Didn't we have a mock server somewhere such that we could standup a client > with no friction and watch it in operation? I thought we'd make such a > beast Yep, you built one, we used it when we looked at the perf issues in the client (the protobuf nightmare if you remember ;:-)). > Wrong execution pool configuration in HConnectionManager > > > Key: HBASE-10449 > URL: https://issues.apache.org/jira/browse/HBASE-10449 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 >Reporter: Nicolas Liochon >Assignee: Nicolas Liochon >Priority: Critical > Fix For: 0.98.0, 0.96.2, 0.99.0 > > Attachments: HBASE-10449.v1.patch > > > There is a confusion in the configuration of the pool. The attached patch > fixes this. This may change the client performances, as we were using a > single thread. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
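The behavior being debated here is standard java.util.concurrent semantics: keepAliveTime only reclaims threads *above* corePoolSize, so a core size of 256 keeps up to 256 idle threads alive indefinitely unless core-thread timeout is explicitly enabled. A minimal sketch (values illustrative, not HBase's actual defaults):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreThreadDemo {
    static ThreadPoolExecutor buildClientPool() {
        // core=4, max=256, keepAlive=3s. Note: with an unbounded queue the
        // pool never actually grows past the core size, because new threads
        // above core are only created when the queue rejects an offer.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 256, 3, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        // Without this call, the 4 core threads would never time out,
        // no matter how long they sit idle.
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = buildClientPool();
        System.out.println(pool.getCorePoolSize());
        System.out.println(pool.allowsCoreThreadTimeOut());
        pool.shutdown();
    }
}
```

The unbounded-queue interaction in the comment above is, incidentally, the kind of pool misconfiguration HBASE-10449 itself was about.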
[jira] [Commented] (HBASE-14411) Fix unit test failures when using multiwal as default WAL provider
[ https://issues.apache.org/jira/browse/HBASE-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790684#comment-14790684 ] Hudson commented on HBASE-14411: FAILURE: Integrated in HBase-1.3 #178 (See [https://builds.apache.org/job/HBase-1.3/178/]) HBASE-14411 Fix unit test failures when using multiwal as default WAL provider (Yu Li) (tedyu: rev 0452ba09b53fb450c913811b77d74b6035b40ce3) * hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java * hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplit.java * hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DefaultWALProvider.java * hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java > Fix unit test failures when using multiwal as default WAL provider > -- > > Key: HBASE-14411 > URL: https://issues.apache.org/jira/browse/HBASE-14411 > Project: HBase > Issue Type: Bug >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-14411.branch-1.patch, HBASE-14411.patch, > HBASE-14411_v2.patch > > > If we set hbase.wal.provider to multiwal in > hbase-server/src/test/resources/hbase-site.xml which allows us to use > BoundedRegionGroupingProvider in UT, we will observe below failures in > current code base: > {noformat} > Failed tests: > TestHLogRecordReader>TestWALRecordReader.testPartialRead:164 expected:<1> > but was:<2> > TestHLogRecordReader>TestWALRecordReader.testWALRecordReader:216 > expected:<2> but was:<3> > TestWALRecordReader.testPartialRead:164 expected:<1> but was:<2> > TestWALRecordReader.testWALRecordReader:216 expected:<2> but was:<3> > TestDistributedLogSplitting.testRecoveredEdits:276 edits dir should have > more than a single file in it. 
instead has 1 > TestAtomicOperation.testMultiRowMutationMultiThreads:499 expected:<0> but > was:<1> > TestHRegionServerBulkLoad.testAtomicBulkLoad:307 > Expected: is > but: was > TestLogRolling.testCompactionRecordDoesntBlockRolling:611 Should have WAL; > one table is not flushed expected:<1> but was:<0> > TestLogRolling.testLogRollOnDatanodeDeath:359 null > TestLogRolling.testLogRollOnPipelineRestart:472 Missing datanode should've > triggered a log roll > TestReplicationSourceManager.testLogRoll:237 expected:<6> but was:<7> > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestReplicationWALReaderManager.test:155 null > TestWALSplit.testCorruptedLogFilesSkipErrorsFalseDoesNotTouchLogs:594 if > skip.errors is false all files should remain in place expected:<11> but > was:<12> > TestWALSplit.testLogsGetArchivedAfterSplit:649 wrong number of files in the > archive log expected:<11> but was:<12> > TestWALSplit.testMovedWALDuringRecovery:810->retryOverHdfsProblem:793 > expected:<11> but was:<12> > TestWALSplit.testRetryOpenDuringRecovery:838->retryOverHdfsProblem:793 > expected:<11> but was:<12> > > TestWALSplitCompressed>TestWALSplit.testCorruptedLogFilesSkipErrorsFalseDoesNotTouchLogs:594 > if skip.errors is false all files should remain in place expected:<11> but > was:<12> > TestWALSplitCompressed>TestWALSplit.testLogsGetArchivedAfterSplit:649 wrong > number of files in the archive log expected:<11> but was:<12> > > TestWALSplitCompressed>TestWALSplit.testMovedWALDuringRecovery:810->TestWALSplit.retryOverHdfsProblem:793 > expected:<11> but was:<12> > > TestWALSplitCompressed>TestWALSplit.testRetryOpenDuringRecovery:838->TestWALSplit.retryOverHdfsProblem:793 > expected:<11> but 
was:<12> > {noformat} > While patch for HBASE-14306 could resolve failures of TestHLogRecordReader, > TestReplicationSourceManager and TestReplicationWALReaderManager, this JIRA > will focus on resolving the others -- This message was sent by Atlassian JIRA (v6.3.4#6332)
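Reproducing these failures locally amounts to the one-property change the description mentions; a sketch of the test-resource override (path taken from the description above):

```xml
<!-- hbase-server/src/test/resources/hbase-site.xml (illustrative):
     run the unit tests against the multiwal provider. -->
<property>
  <name>hbase.wal.provider</name>
  <value>multiwal</value>
</property>
```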
[jira] [Commented] (HBASE-10449) Wrong execution pool configuration in HConnectionManager
[ https://issues.apache.org/jira/browse/HBASE-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790698#comment-14790698 ] stack commented on HBASE-10449: --- bq. I expect that if we have more than coreSize calls in timeout (256 vs 60 seconds in our case) then we always have coreSize threads. Say again. I'm not following [~nkeywal] Thanks. bq. ...the protobuf nightmare if you remember Yes. Smile. Need to revive it for here and for doing client timeouts > Wrong execution pool configuration in HConnectionManager > > > Key: HBASE-10449 > URL: https://issues.apache.org/jira/browse/HBASE-10449 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 >Reporter: Nicolas Liochon >Assignee: Nicolas Liochon >Priority: Critical > Fix For: 0.98.0, 0.96.2, 0.99.0 > > Attachments: HBASE-10449.v1.patch > > > There is a confusion in the configuration of the pool. The attached patch > fixes this. This may change the client performances, as we were using a > single thread. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-14445) ExportSnapshot does not honor -chuser, -chgroup, -chmod options
Ted Yu created HBASE-14445: -- Summary: ExportSnapshot does not honor -chuser, -chgroup, -chmod options Key: HBASE-14445 URL: https://issues.apache.org/jira/browse/HBASE-14445 Project: HBase Issue Type: Bug Affects Versions: 0.98.4 Reporter: Ted Yu Create a snapshot of an existing HBase table, export the snapshot using the -chuser, -chgroup, -chmod options. Look in hdfs filesystem for export. The files do not have the correct ownership, group, permissions Thanks to Ian Roberts who first reported the issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14334) Move Memcached block cache into its own optional module.
[ https://issues.apache.org/jira/browse/HBASE-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-14334: -- Attachment: HBASE-14334-v1.patch Patch with a better description. > Move Memcached block cache in to it's own optional module. > -- > > Key: HBASE-14334 > URL: https://issues.apache.org/jira/browse/HBASE-14334 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.2.0 > > Attachments: HBASE-14334-v1.patch, HBASE-14334.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14352) Replication is terribly slow with WAL compression
[ https://issues.apache.org/jira/browse/HBASE-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791052#comment-14791052 ] Lars Hofhansl commented on HBASE-14352: --- I took a look at the code some weeks back. The problem immediately jumps out... At the source we constantly reset the read position into the current WAL. With compression that means we have to start from the point where the compression dictionary is written. That is very expensive. We have to do that in order to be sure we'll see the edits in the current block being written, so I don't immediately see a way out of it. Perhaps we simply tail until we reach the end of a file; in that case we'll try one more time with a reset, and only declare the WAL done when that succeeds. > Replication is terribly slow with WAL compression > - > > Key: HBASE-14352 > URL: https://issues.apache.org/jira/browse/HBASE-14352 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.13 >Reporter: Abhishek Singh Chouhan > Attachments: age_of_last_shipped.png, size_of_log_queue.png > > > For the same load, replication with WAL compression enabled is almost 6x > slower than with compression turned off. Age of last shipped operation is > also correspondingly much higher when compression is turned on. > By observing Size of log queue we can see that it is taking too much time for > the queue to clear up. > Attaching corresponding graphs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
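Why a position reset is expensive with WAL compression can be shown with a minimal stand-in for a streaming dictionary codec (this is not HBase's actual Dictionary implementation; names and encoding are hypothetical): each token is either a literal that extends the dictionary or a back-reference to an earlier entry, so decoding entry N requires replaying entries 0..N-1 from the dictionary's start.

```java
import java.util.ArrayList;
import java.util.List;

public class DictReplaySketch {
    // Decode a stream where "#k" is a back-reference to the k-th earlier
    // entry and anything else is a literal. Every decoded value is appended
    // to the dictionary, so the decoder's state depends on the whole prefix.
    public static List<String> decode(List<String> stream) {
        List<String> dict = new ArrayList<>();
        List<String> out = new ArrayList<>();
        for (String tok : stream) {
            String value = tok.startsWith("#")
                ? dict.get(Integer.parseInt(tok.substring(1))) // back-reference
                : tok;                                          // literal
            dict.add(value);
            out.add(value);
        }
        return out;
    }
}
```

Decoding only a tail such as `["#0", "#1"]` is impossible without the earlier literals, which is exactly why a tailing reader that resets its position must re-read from where the dictionary began.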
[jira] [Commented] (HBASE-14431) AsyncRpcClient#removeConnection() never removes connection from connections pool if server fails
[ https://issues.apache.org/jira/browse/HBASE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790813#comment-14790813 ] Hadoop QA commented on HBASE-14431: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756275/HBASE-14431.patch against master branch at commit d2e338181800ae3cef55ddca491901b65259dc7f. ATTACHMENT ID: 12756275 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestFastFail Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15624//testReport/ Release Findbugs (version 2.0.3)warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15624//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15624//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15624//console This message is automatically generated. > AsyncRpcClient#removeConnection() never removes connection from connections > pool if server fails > > > Key: HBASE-14431 > URL: https://issues.apache.org/jira/browse/HBASE-14431 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 2.0.0, 1.0.2, 1.1.2 >Reporter: Samir Ahmic >Assignee: Samir Ahmic >Priority: Critical > Attachments: HBASE-14431.patch > > > I was playing with master branch in distributed mode (3 rs + master + > backup_master) and notice strange behavior when i was testing this sequence > of events on single rs: /kill/start/run_balancer while client was writing > data to cluster (LoadTestTool). > I have notice that LTT fails with following: > {code} > 2015-09-09 11:05:58,364 INFO [main] client.AsyncProcess: #2, waiting for > some tasks to finish. 
Expected max=0, tasksInProgress=35 > Exception in thread "main" > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: BindException: 1 time, > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228) > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208) > at > org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1697) > at > org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:211) > {code} > After some digging and adding more logging, I noticed that the following > condition in {code}AsyncRpcClient.removeConnection(AsyncRpcChannel > connection){code} is never true: > {code} > if (connectionInPool == connection) { > {code} > causing the {code}AsyncRpcChannel{code} connection never to be removed from > the {code}connections{code} pool when the rs fails. > After changing this condition to: > {code} > if (connectionInPool.address.equals(connection.address)) { > {code} > the issue was resolved and the client removed the failed server from the > connections pool. > I will attach a patch after running some more tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
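The failure mode reported here can be reduced to a small sketch (simplified, not the real AsyncRpcClient): the pool holds one channel per server address, but after a server restart a reconnect can put a *new* channel object in the pool while the failing code path still holds the old object. An identity check (`==`) then never matches, so the dead entry is never removed; comparing by address does match.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PoolRemovalSketch {
    static class Channel {
        final String address;
        Channel(String address) { this.address = address; }
    }

    // Buggy variant: reference equality misses a replaced channel object.
    static boolean removeByIdentity(ConcurrentMap<String, Channel> pool, Channel failed) {
        Channel inPool = pool.get(failed.address);
        if (inPool == failed) {
            pool.remove(failed.address);
            return true;
        }
        return false;
    }

    // Fixed variant: match on the server address instead of the instance.
    static boolean removeByAddress(ConcurrentMap<String, Channel> pool, Channel failed) {
        Channel inPool = pool.get(failed.address);
        if (inPool != null && inPool.address.equals(failed.address)) {
            pool.remove(failed.address);
            return true;
        }
        return false;
    }
}
```

With a stale `failed` reference, `removeByIdentity` leaves the dead entry in the pool while `removeByAddress` clears it, matching the before/after behavior described in the issue.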
[jira] [Commented] (HBASE-14352) Replication is terribly slow with WAL compression
[ https://issues.apache.org/jira/browse/HBASE-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790911#comment-14790911 ] Abhishek Singh Chouhan commented on HBASE-14352: Yep...both of them had compression enabled. > Replication is terribly slow with WAL compression > - > > Key: HBASE-14352 > URL: https://issues.apache.org/jira/browse/HBASE-14352 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.13 >Reporter: Abhishek Singh Chouhan > Attachments: age_of_last_shipped.png, size_of_log_queue.png > > > For the same load, replication with WAL compression enabled is almost 6x > slower than with compression turned off. Age of last shipped operation is > also correspondingly much higher when compression is turned on. > By observing Size of log queue we can see that it is taking too much time for > the queue to clear up. > Attaching corresponding graphs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14433) Set down the client executor core thread count from 256 in tests
[ https://issues.apache.org/jira/browse/HBASE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791038#comment-14791038 ] Hudson commented on HBASE-14433: FAILURE: Integrated in HBase-TRUNK #6814 (See [https://builds.apache.org/job/HBase-TRUNK/6814/]) HBASE-14433 Set down the client executor core thread count from 256 in tests: REAPPLY AGAIN (WAS MISSING JIRA) (stack: rev bd26386dc7205c9b30b8488bc094bd380ec09adb) * hbase-server/src/test/resources/hbase-site.xml * hbase-client/src/test/resources/hbase-site.xml > Set down the client executor core thread count from 256 in tests > > > Key: HBASE-14433 > URL: https://issues.apache.org/jira/browse/HBASE-14433 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: stack >Assignee: stack > Fix For: 2.0.0, 1.2.0, 1.3.0 > > Attachments: 14433 (1).txt, 14433.txt, 14433v2.txt, 14433v3.txt, > 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, 14433v3.txt, > 14433v4.reapply.txt > > > HBASE-10449 upped our core count from 0 to 256 (max is 256). Looking in a > recent test run core dump, I see up to 256 threads per client and all are > idle. At a minimum it makes it hard reading test thread dumps. Trying to > learn more about why we went a core of 256 over in HBASE-10449. Meantime will > try setting down configs for test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14334) Move Memcached block cache into its own optional module.
[ https://issues.apache.org/jira/browse/HBASE-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790979#comment-14790979 ] Elliott Clark commented on HBASE-14334: --- bq.The above is all the doc I'd see this module getting so say something about when it'd be used and how to enable it. I'm still hoping to provide better. You know how that goes though. > Move Memcached block cache in to it's own optional module. > -- > > Key: HBASE-14334 > URL: https://issues.apache.org/jira/browse/HBASE-14334 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.2.0 > > Attachments: HBASE-14334-v1.patch, HBASE-14334.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14442) MultiTableInputFormatBase.getSplits does not build a split for a scan whose startRow=stopRow=(startRow of a region)
[ https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790833#comment-14790833 ] Nick Dimiduk commented on HBASE-14442: -- Hi Nathan, can you provide a unit test that demonstrates this bug? See https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableInputFormat.java for existing tests. > MultiTableInputFormatBase.getSplits does not build a split for a scan whose > startRow=stopRow=(startRow of a region) > > > Key: HBASE-14442 > URL: https://issues.apache.org/jira/browse/HBASE-14442 > Project: HBase > Issue Type: Bug > Components: mapreduce >Affects Versions: 1.1.2 >Reporter: Nathan >Assignee: Nathan > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > I created a Scan whose startRow and stopRow are the same as a region's > startRow, then found that no map was built. > The following is the source code of this condition: > (startRow.length == 0 || keys.getSecond()[i].length == 0 || > Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && > (stopRow.length == 0 || Bytes.compareTo(stopRow, > keys.getFirst()[i]) > 0) > I think a "=" should be added. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
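The boundary check from the issue can be modeled with plain string comparisons (names simplified; `keys.getFirst()` is the region start key, `keys.getSecond()` the region end key). A scan with startRow == stopRow == the region's start key targets that single row, but the original strict `>` comparison against the region start key excludes the region; the reporter's proposed `>=` includes it.

```java
public class SplitBoundarySketch {
    static int cmp(String a, String b) { return a.compareTo(b); }

    // Original condition: the region is included only if stopRow is empty
    // or strictly greater than the region's start key.
    static boolean includesOriginal(String startRow, String stopRow,
                                    String regionStart, String regionEnd) {
        return (startRow.isEmpty() || regionEnd.isEmpty() || cmp(startRow, regionEnd) < 0)
            && (stopRow.isEmpty() || cmp(stopRow, regionStart) > 0);
    }

    // Reporter's proposed fix: allow stopRow == regionStart ("=" added).
    static boolean includesFixed(String startRow, String stopRow,
                                 String regionStart, String regionEnd) {
        return (startRow.isEmpty() || regionEnd.isEmpty() || cmp(startRow, regionEnd) < 0)
            && (stopRow.isEmpty() || cmp(stopRow, regionStart) >= 0);
    }
}
```

For a region ["b", "d") and a scan with startRow = stopRow = "b", the original check rejects the region while the fixed check accepts it; ordinary overlapping scans behave the same under both.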
[jira] [Resolved] (HBASE-14445) ExportSnapshot does not honor -chuser, -chgroup, -chmod options
[ https://issues.apache.org/jira/browse/HBASE-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HBASE-14445. Resolution: Duplicate > ExportSnapshot does not honor -chuser, -chgroup, -chmod options > --- > > Key: HBASE-14445 > URL: https://issues.apache.org/jira/browse/HBASE-14445 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.4 >Reporter: Ted Yu > > Create a snapshot of an existing HBase table, export the snapshot using the > -chuser, -chgroup, -chmod options. > Look in hdfs filesystem for export. The files do not have the correct > ownership, group, permissions > Thanks to Ian Roberts who first reported the issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14445) ExportSnapshot does not honor -chuser, -chgroup, -chmod options
[ https://issues.apache.org/jira/browse/HBASE-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790963#comment-14790963 ] Matteo Bertozzi commented on HBASE-14445: - isn't this the same as HBASE-13250? > ExportSnapshot does not honor -chuser, -chgroup, -chmod options > --- > > Key: HBASE-14445 > URL: https://issues.apache.org/jira/browse/HBASE-14445 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.4 >Reporter: Ted Yu > > Create a snapshot of an existing HBase table, export the snapshot using the > -chuser, -chgroup, -chmod options. > Look in hdfs filesystem for export. The files do not have the correct > ownership, group, permissions > Thanks to Ian Roberts who first reported the issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14431) AsyncRpcClient#removeConnection() never removes connection from connections pool if server fails
[ https://issues.apache.org/jira/browse/HBASE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791053#comment-14791053 ] Samir Ahmic commented on HBASE-14431: - This is interesting. I have run TestFastFail several times on two different machines and test never fails. I was using java 1.7.0_80 and 1.7.0_71 - > AsyncRpcClient#removeConnection() never removes connection from connections > pool if server fails > > > Key: HBASE-14431 > URL: https://issues.apache.org/jira/browse/HBASE-14431 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 2.0.0, 1.0.2, 1.1.2 >Reporter: Samir Ahmic >Assignee: Samir Ahmic >Priority: Critical > Attachments: HBASE-14431.patch > > > I was playing with master branch in distributed mode (3 rs + master + > backup_master) and notice strange behavior when i was testing this sequence > of events on single rs: /kill/start/run_balancer while client was writing > data to cluster (LoadTestTool). > I have notice that LTT fails with following: > {code} > 2015-09-09 11:05:58,364 INFO [main] client.AsyncProcess: #2, waiting for > some tasks to finish. 
Expected max=0, tasksInProgress=35 > Exception in thread "main" > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: BindException: 1 time, > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228) > at > org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208) > at > org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1697) > at > org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:211) > {code} > After some digging and adding more logging, I noticed that the following > condition in {code}AsyncRpcClient.removeConnection(AsyncRpcChannel > connection){code} is never true: > {code} > if (connectionInPool == connection) { > {code} > causing the {code}AsyncRpcChannel{code} connection never to be removed from > the {code}connections{code} pool when the rs fails. > After changing this condition to: > {code} > if (connectionInPool.address.equals(connection.address)) { > {code} > the issue was resolved and the client removed the failed server from the > connections pool. > I will attach a patch after running some more tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14352) Replication is terribly slow with WAL compression
[ https://issues.apache.org/jira/browse/HBASE-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791070#comment-14791070 ] Andrew Purtell commented on HBASE-14352: When I've tested wal compression I've found the hit to write performance (increased latency leading to a lower aggregate write ceiling cluster-wide) to outweigh space savings and any gains from that. Is this the general experience? Maybe the answer is to deprecate WAL compression? > Replication is terribly slow with WAL compression > - > > Key: HBASE-14352 > URL: https://issues.apache.org/jira/browse/HBASE-14352 > Project: HBase > Issue Type: Bug >Affects Versions: 0.98.13 >Reporter: Abhishek Singh Chouhan > Attachments: age_of_last_shipped.png, size_of_log_queue.png > > > For the same load, replication with WAL compression enabled is almost 6x > slower than with compression turned off. Age of last shipped operation is > also correspondingly much higher when compression is turned on. > By observing Size of log queue we can see that it is taking too much time for > the queue to clear up. > Attaching corresponding graphs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13250) chown of ExportSnapshot does not cover all path and files
[ https://issues.apache.org/jira/browse/HBASE-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-13250: --- Resolution: Fixed Status: Resolved (was: Patch Available) Thanks for the patch, Liangliang. > chown of ExportSnapshot does not cover all path and files > - > > Key: HBASE-13250 > URL: https://issues.apache.org/jira/browse/HBASE-13250 > Project: HBase > Issue Type: Bug >Reporter: He Liangliang >Assignee: He Liangliang >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3 > > Attachments: HBASE-13250-V0.patch > > > The chuser/chgroup function only covers the leaf hfile. The ownership of > hfile parent paths and snapshot reference files are not changed as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13250) chown of ExportSnapshot does not cover all path and files
[ https://issues.apache.org/jira/browse/HBASE-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-13250: --- Hadoop Flags: Reviewed Fix Version/s: 1.1.3 1.0.3 0.98.15 1.3.0 1.2.0 2.0.0 > chown of ExportSnapshot does not cover all path and files > - > > Key: HBASE-13250 > URL: https://issues.apache.org/jira/browse/HBASE-13250 > Project: HBase > Issue Type: Bug >Reporter: He Liangliang >Assignee: He Liangliang >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3 > > Attachments: HBASE-13250-V0.patch > > > The chuser/chgroup function only covers the leaf hfile. The ownership of > hfile parent paths and snapshot reference files are not changed as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13250) chown of ExportSnapshot does not cover all path and files
[ https://issues.apache.org/jira/browse/HBASE-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791120#comment-14791120 ] Hudson commented on HBASE-13250: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1077 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1077/]) HBASE-13250 chown of ExportSnapshot does not cover all path and files (He Liangliang) (tedyu: rev bcd986e47b8d633c996c8a2040c2a40b32cb5c59) * hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java > chown of ExportSnapshot does not cover all path and files > - > > Key: HBASE-13250 > URL: https://issues.apache.org/jira/browse/HBASE-13250 > Project: HBase > Issue Type: Bug >Reporter: He Liangliang >Assignee: He Liangliang >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3 > > Attachments: HBASE-13250-V0.patch > > > The chuser/chgroup function only covers the leaf hfile. The ownership of > hfile parent paths and snapshot reference files are not changed as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13250) chown of ExportSnapshot does not cover all path and files
[ https://issues.apache.org/jira/browse/HBASE-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791204#comment-14791204 ] Hudson commented on HBASE-13250: FAILURE: Integrated in HBase-1.1 #665 (See [https://builds.apache.org/job/HBase-1.1/665/]) HBASE-13250 chown of ExportSnapshot does not cover all path and files (He Liangliang) (tedyu: rev a1f45c1c43dfda4b044f948d4de5089662aa306b) * hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java > chown of ExportSnapshot does not cover all path and files > - > > Key: HBASE-13250 > URL: https://issues.apache.org/jira/browse/HBASE-13250 > Project: HBase > Issue Type: Bug >Reporter: He Liangliang >Assignee: He Liangliang >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3 > > Attachments: HBASE-13250-V0.patch > > > The chuser/chgroup function only covers the leaf hfile. The ownership of > hfile parent paths and snapshot reference files are not changed as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14334) Move Memcached block cache in to it's own optional module.
[ https://issues.apache.org/jira/browse/HBASE-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791241#comment-14791241 ] Hadoop QA commented on HBASE-14334: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12756333/HBASE-14334-v1.patch against master branch at commit bd26386dc7205c9b30b8488bc094bd380ec09adb. ATTACHMENT ID: 12756333 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 5 new or modified tests. {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0 2.7.1) {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 protoc{color}. The applied patch does not increase the total number of protoc compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100: + xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + ${project.build.directory}/test-classes/mrapp-generated-classpath + ${project.build.directory}/test-classes/mrapp-generated-classpath {color:green}+1 site{color}. The mvn post-site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestReplicationShell Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15626//testReport/ Release Findbugs (version 2.0.3) warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15626//artifact/patchprocess/newFindbugsWarnings.html Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15626//artifact/patchprocess/checkstyle-aggregate.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15626//console This message is automatically generated. > Move Memcached block cache in to it's own optional module. > -- > > Key: HBASE-14334 > URL: https://issues.apache.org/jira/browse/HBASE-14334 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Fix For: 2.0.0, 1.2.0 > > Attachments: HBASE-14334-v1.patch, HBASE-14334.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
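The -1 lineLengths result in the QA report above comes from a precommit check that flags patch lines longer than 100 characters. A rough sketch of what that kind of check does, as a hypothetical standalone helper (the real precommit check is a shell script, not this code):

```java
import java.util.ArrayList;
import java.util.List;

public class LineLengthCheck {
    static final int LIMIT = 100; // matches the 100-character limit cited in the report

    // Return 1-based line numbers of lines exceeding LIMIT characters.
    public static List<Integer> longLines(String[] lines) {
        List<Integer> bad = new ArrayList<>();
        for (int i = 0; i < lines.length; i++) {
            if (lines[i].length() > LIMIT) {
                bad.add(i + 1);
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        String[] sample = { "short line", "x".repeat(120) };
        System.out.println(longLines(sample)); // only the 120-character line is flagged
    }
}
```

The offending lines in this particular run are maven POM fragments, which is why the patch's pom.xml edits tripped the check.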
[jira] [Commented] (HBASE-13250) chown of ExportSnapshot does not cover all path and files
[ https://issues.apache.org/jira/browse/HBASE-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791198#comment-14791198 ] Hudson commented on HBASE-13250: SUCCESS: Integrated in HBase-1.0 #1053 (See [https://builds.apache.org/job/HBase-1.0/1053/]) HBASE-13250 chown of ExportSnapshot does not cover all path and files (He Liangliang) (tedyu: rev e12b771560b94ee7843225af36f0857e6571a10a) * hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java > chown of ExportSnapshot does not cover all path and files > - > > Key: HBASE-13250 > URL: https://issues.apache.org/jira/browse/HBASE-13250 > Project: HBase > Issue Type: Bug >Reporter: He Liangliang >Assignee: He Liangliang >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3 > > Attachments: HBASE-13250-V0.patch > > > The chuser/chgroup function only covers the leaf hfile. The ownership of > hfile parent paths and snapshot reference files are not changed as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HBASE-13250) chown of ExportSnapshot does not cover all path and files
[ https://issues.apache.org/jira/browse/HBASE-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reopened HBASE-13250: Reverted from 0.98 due to compilation error against hadoop-1 profile > chown of ExportSnapshot does not cover all path and files > - > > Key: HBASE-13250 > URL: https://issues.apache.org/jira/browse/HBASE-13250 > Project: HBase > Issue Type: Bug >Reporter: He Liangliang >Assignee: He Liangliang >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3 > > Attachments: HBASE-13250-V0.patch > > > The chuser/chgroup function only covers the leaf hfile. The ownership of > hfile parent paths and snapshot reference files are not changed as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13250) chown of ExportSnapshot does not cover all path and files
[ https://issues.apache.org/jira/browse/HBASE-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-13250: --- Fix Version/s: (was: 0.98.15) > chown of ExportSnapshot does not cover all path and files > - > > Key: HBASE-13250 > URL: https://issues.apache.org/jira/browse/HBASE-13250 > Project: HBase > Issue Type: Bug >Reporter: He Liangliang >Assignee: He Liangliang >Priority: Critical > Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3 > > Attachments: HBASE-13250-V0.patch > > > The chuser/chgroup function only covers the leaf hfile. The ownership of > hfile parent paths and snapshot reference files are not changed as expected. -- This message was sent by Atlassian JIRA (v6.3.4#6332)