[jira] [Commented] (HBASE-9906) Restore snapshot fails to restore the meta edits sporadically
[ https://issues.apache.org/jira/browse/HBASE-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817104#comment-13817104 ] Hudson commented on HBASE-9906: --- SUCCESS: Integrated in HBase-0.94 #1196 (See [https://builds.apache.org/job/HBase-0.94/1196/]) HBASE-9906 Restore snapshot fails to restore the meta edits sporadically (enis: rev 1539910) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/catalog/MetaEditor.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/snapshot/RestoreSnapshotHandler.java Restore snapshot fails to restore the meta edits sporadically --- Key: HBASE-9906 URL: https://issues.apache.org/jira/browse/HBASE-9906 Project: HBase Issue Type: Bug Components: snapshots Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: hbase-9906-0.94_v1.patch, hbase-9906_v1.patch After snapshot restore, we see failures to find the table in meta:
{code}
disable 'tablefour'
restore_snapshot 'snapshot_tablefour'
enable 'tablefour'
ERROR: Table tablefour does not exist.
{code}
This is quite subtle. From the looks of it, we successfully restore the snapshot, do the meta updates, and return the status to the client. The client then tries to perform an operation on the table (such as enable table, or a scan in the test outputs), which fails because the meta entry for the region is gone (in the single-region case, the table is reported missing). Subsequent attempts to create the table also fail because the table directories are there, but the meta entries are not. To restore the meta entries, we do a delete followed by a put for the same region:
{code}
2013-11-04 10:39:51,582 INFO org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper: region to restore: 76d0e2b7ec3291afcaa82e18a56ccc30
2013-11-04 10:39:51,582 INFO org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper: region to remove: fa41edf43fe3ee131db4a34b848ff432
...
2013-11-04 10:39:52,102 INFO org.apache.hadoop.hbase.catalog.MetaEditor: Deleted [{ENCODED = fa41edf43fe3ee131db4a34b848ff432, NAME = 'tablethree_mod,,1383559723345.fa41edf43fe3ee131db4a34b848ff432.', STARTKEY = '', ENDKEY = ''}, {ENCODED = 76d0e2b7ec3291afcaa82e18a56ccc30, NAME = 'tablethree_mod,,1383561123097.76d0e2b7ec3291afcaa82e18a56ccc30.', STARTKE
2013-11-04 10:39:52,111 INFO org.apache.hadoop.hbase.catalog.MetaEditor: Added 1
{code}
The root cause of this sporadic failure is that the delete and the subsequent put carry the same timestamp when they execute within the same millisecond. At equal timestamps the delete masks the put, even though the put was applied later. See: HBASE-9905, HBASE-8770. Credit goes to [~huned] for reporting this bug. -- This message was sent by Atlassian JIRA (v6.1#6144)
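The masking behavior described above can be illustrated with a tiny model. This is a sketch of the cell-version visibility rule the report relies on (a delete tombstone masks every put at or below its timestamp), not HBase code; the class and method names are illustrative:

```java
import java.util.Map;
import java.util.TreeMap;

// Toy model of HBase cell-version visibility: a delete tombstone with
// timestamp T masks every put with timestamp <= T, regardless of the
// order in which the two operations were issued.
final class CellVersions {
    private final TreeMap<Long, String> puts = new TreeMap<>();
    private long deleteTs = Long.MIN_VALUE;

    void put(long ts, String value) { puts.put(ts, value); }

    void delete(long ts) { deleteTs = Math.max(deleteTs, ts); }

    // Returns the newest visible value, or null if everything is masked.
    String get() {
        Map.Entry<Long, String> newest = puts.lastEntry();
        return (newest != null && newest.getKey() > deleteTs) ? newest.getValue() : null;
    }
}
```

With this model, delete(1000) followed by put(1000, ...) leaves get() returning null, which is exactly the "row disappears" symptom; a put at timestamp 1001 is visible again, which is why ensuring the put's timestamp exceeds the delete's resolves the race.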
[jira] [Commented] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817103#comment-13817103 ] Hadoop QA commented on HBASE-9808: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612753/HBASE-9808-v3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.wal.TestLogRolling Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7788//console This message is automatically generated. 
org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation Key: HBASE-9808 URL: https://issues.apache.org/jira/browse/HBASE-9808 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Gustavo Anatoly Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, HBASE-9808-v3.patch, HBASE-9808.patch Here is list of JIRAs whose fixes might have gone into rest.PerformanceEvaluation : {code} r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line HBASE-9663 PerformanceEvaluation does not properly honor specified table name parameter r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line HBASE-9662 PerformanceEvaluation input do not handle tags properties r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines HBASE-8496 - Implement tags and the internals of how a tag should look like (Ram) r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line HBASE-9558 PerformanceEvaluation is in hbase-server, and creates a dependency to MiniDFSCluster r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line HBASE-9521 clean clearBufferOnFail behavior and deprecate it
[jira] [Commented] (HBASE-9920) Lower OK_FINDBUGS_WARNINGS in test-patch.properties
[ https://issues.apache.org/jira/browse/HBASE-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817111#comment-13817111 ] Hadoop QA commented on HBASE-9920: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612766/9920.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7789//console This message is automatically generated. Lower OK_FINDBUGS_WARNINGS in test-patch.properties --- Key: HBASE-9920 URL: https://issues.apache.org/jira/browse/HBASE-9920 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Attachments: 9920.txt HBASE-9903 removed generated classes from findbugs checking. OK_FINDBUGS_WARNINGS in test-patch.properties should be lowered. 
According to https://builds.apache.org/job/PreCommit-HBASE-Build/7776/artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html , there were: 3 warnings for org.apache.hadoop.hbase.generated classes and 19 warnings for org.apache.hadoop.hbase.tmpl classes.
[jira] [Commented] (HBASE-9921) stripe compaction - findbugs and javadoc issues, some improvements
[ https://issues.apache.org/jira/browse/HBASE-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817156#comment-13817156 ] Hadoop QA commented on HBASE-9921: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612763/HBASE-9921.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to cause Findbugs (version 1.3.9) to fail. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:red}-1 core tests{color}. The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.wal.TestLogRolling Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7790//testReport/ Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7790//console This message is automatically generated. stripe compaction - findbugs and javadoc issues, some improvements -- Key: HBASE-9921 URL: https://issues.apache.org/jira/browse/HBASE-9921 Project: HBase Issue Type: Task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor Attachments: HBASE-9921.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9895) 0.96 Import utility can't import an exported file from 0.94
[ https://issues.apache.org/jira/browse/HBASE-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817162#comment-13817162 ] Hadoop QA commented on HBASE-9895: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612762/hbase-9895.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7791//console This message is automatically generated. 0.96 Import utility can't import an exported file from 0.94 --- Key: HBASE-9895 URL: https://issues.apache.org/jira/browse/HBASE-9895 Project: HBase Issue Type: Bug Components: mapreduce Affects Versions: 0.96.0 Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Attachments: hbase-9895.patch Basically we PBed org.apache.hadoop.hbase.client.Result so a 0.96 cluster cannot import 0.94 exported files. 
This issue is annoying because a user can't import old archive files after an upgrade, or archives from others who are still on 0.94. The ideal fix is to catch the deserialization error and then fall back to the 0.94 format when importing.
[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time
[ https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kashif J S updated HBASE-9902: -- Fix Version/s: 0.94.14 Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time Key: HBASE-9902 URL: https://issues.apache.org/jira/browse/HBASE-9902 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9902.patch When the region server's time is ahead of the master's time and the difference is more than the hbase.master.maxclockskew value, region server startup does not fail with ClockOutOfSyncException. This causes some abnormal behavior, as detected by our tests. ServerManager.java#checkClockSkew:
{code}
long skew = System.currentTimeMillis() - serverCurrentTime;
if (skew > maxSkew) {
  String message = "Server " + serverName + " has been " +
    "rejected; Reported time is too far out of sync with master. " +
    "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
  LOG.warn(message);
  throw new ClockOutOfSyncException(message);
}
{code}
The skew computed above is negative when the master's time is less than the region server's time, so the if (skew > maxSkew) check fails to detect the skew in this case. Please note: this was tested on HBase 0.94.11, and trunk currently has the same logic. The fix would be to make the skew a positive value first, as below:
{code}
long skew = System.currentTimeMillis() - serverCurrentTime;
skew = (skew < 0 ? -skew : skew);
if (skew > maxSkew) {
{code}
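The proposed fix can be sketched as a standalone check. The names follow the snippet in the description (serverCurrentTime, maxSkew, the latter assumed to come from hbase.master.maxclockskew); ClockSkewCheck and outOfSync are illustrative names, not the actual patch:

```java
// Sketch of the corrected clock-skew check: take the absolute difference
// so a region server that is *ahead* of the master is rejected just like
// one that is behind.
final class ClockSkewCheck {
    static boolean outOfSync(long masterTimeMs, long serverCurrentTimeMs, long maxSkewMs) {
        long skew = masterTimeMs - serverCurrentTimeMs;
        skew = (skew < 0 ? -skew : skew); // make the skew positive first
        return skew > maxSkewMs;
    }
}
```

Without the absolute-value step, a server 60 seconds ahead of the master yields skew = -60000, which never exceeds maxSkew, so the server is accepted; with it, both directions of drift are caught.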
[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time
[ https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kashif J S updated HBASE-9902: -- Status: Open (was: Patch Available) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time Key: HBASE-9902 URL: https://issues.apache.org/jira/browse/HBASE-9902 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1 Attachments: HBASE-9902.patch
[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time
[ https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kashif J S updated HBASE-9902: -- Attachment: HBASE-9902_v2.patch JUnit test case modified to use a different host:port for each regionserver under test. Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time Key: HBASE-9902 URL: https://issues.apache.org/jira/browse/HBASE-9902 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9902.patch, HBASE-9902_v2.patch
[jira] [Updated] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time
[ https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kashif J S updated HBASE-9902: -- Status: Patch Available (was: Open) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time Key: HBASE-9902 URL: https://issues.apache.org/jira/browse/HBASE-9902 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9902.patch, HBASE-9902_v2.patch
[jira] [Commented] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs
[ https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817172#comment-13817172 ] Hadoop QA commented on HBASE-9117: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612789/HBASE-9117.00.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 90 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.client.TestHBaseAdminNoCluster.testMasterMonitorCollableRetries(TestHBaseAdminNoCluster.java:80) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7793//console This message is automatically generated. 
Remove HTablePool and all HConnection pooling related APIs -- Key: HBASE-9117 URL: https://issues.apache.org/jira/browse/HBASE-9117 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.98.0 Attachments: HBASE-9117.00.patch The recommended way is now: # Create an HConnection: HConnectionManager.createConnection(...) # Create a light HTable: HConnection.getTable(...) # table.close() # connection.close() All other API and pooling will be removed. -- This message was sent by Atlassian JIRA (v6.1#6144)
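The recommended lifecycle above can be sketched with the HBase client types stubbed as minimal interfaces so the flow is self-contained; in real code these are HConnectionManager.createConnection(conf), HConnection.getTable(name), and the corresponding close() calls, and the stub names here are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins for the HBase client types, recording events so the
// create-connection / get-table / close-table / close-connection
// ordering from the issue text can be demonstrated.
final class ConnectionLifecycleSketch {
    static final List<String> events = new ArrayList<>();

    interface Table extends AutoCloseable { @Override void close(); }

    interface Connection extends AutoCloseable {
        Table getTable(String name);
        @Override void close();
    }

    // Plays the role of HConnectionManager.createConnection(...)
    static Connection createConnection() {
        events.add("createConnection");
        return new Connection() {
            public Table getTable(String name) {
                events.add("getTable:" + name);
                return () -> events.add("table.close");
            }
            public void close() { events.add("connection.close"); }
        };
    }

    static void run() {
        // try-with-resources closes the light table first, then the connection,
        // matching steps 3 and 4 of the recommended usage.
        try (Connection conn = createConnection();
             Table table = conn.getTable("mytable")) {
            // ... use the table ...
        }
    }
}
```

The key design point is that the connection is the heavyweight, shared resource and tables are cheap views onto it, so callers own both lifetimes explicitly instead of relying on HTablePool.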
[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException
[ https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] 刘泓 updated HBASE-9913: -- Description: java.lang.NullPointerException at java.io.File.init(File.java:222) at java.util.zip.ZipFile.init(ZipFile.java:75) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87) at com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163) at com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32) at javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121) at 
weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) My project is deployed under WebLogic 11, and when I run an HBase mapreduce job it throws a NullPointerException. I found that the method TableMapReduceUtil.findContainingJar() returns null, so I debugged it: url.getProtocol() returns "zip", but the file is a jar file, so the condition if ("jar".equals(url.getProtocol())) is never satisfied. I added a condition to also accept the "zip" type.
[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException
[ https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] 刘泓 updated HBASE-9913: -- Status: Patch Available (was: Reopened) weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException Key: HBASE-9913 URL: https://issues.apache.org/jira/browse/HBASE-9913 Project: HBase Issue Type: Bug Components: hadoop2, mapreduce Affects Versions: 0.94.10 Environment: weblogic windows Reporter: 刘泓 Attachments: TableMapReduceUtil.class, TableMapReduceUtil.java java.lang.NullPointerException at java.io.File.init(File.java:222) at java.util.zip.ZipFile.init(ZipFile.java:75) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87) at com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163) at com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32) at javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292) at 
weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121) at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) My project is deployed under WebLogic 11, and when I run an HBase MapReduce job it throws a NullPointerException. I found that TableMapReduceUtil.findContainingJar() returns null, so I debugged it: url.getProtocol() returns "zip", but the file is in fact a jar, so the condition if ("jar".equals(url.getProtocol())) never matches. I added a condition to also handle the "zip" protocol. -- This message was sent by Atlassian JIRA (v6.1#6144)
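The fix the reporter describes can be sketched as follows. This is a hedged illustration, not the attached TableMapReduceUtil.java patch: the helper name jarPathFromUrl and the exact path handling are assumptions, and the point is only that a URL whose protocol comes back as "zip" (as WebLogic reports for jars packaged inside an application) is treated like a "jar" URL instead of falling through and returning null.

```java
public class FindJarSketch {
    // Hypothetical helper, not the real HBase method: given the protocol and
    // path of a URL returned by ClassLoader.getResources(), extract the path
    // of the containing jar. The original code accepted only "jar"; under
    // WebLogic the protocol is reported as "zip", so that case is added.
    public static String jarPathFromUrl(String protocol, String path) {
        if (!"jar".equals(protocol) && !"zip".equals(protocol)) {
            return null; // not an archive URL; this is where the NPE's null came from
        }
        if (path.startsWith("file:")) {
            path = path.substring("file:".length());
        }
        // Strip the "!/inner/Entry.class" suffix that archive URLs carry.
        return path.replaceAll("!.*$", "");
    }

    public static void main(String[] args) {
        System.out.println(jarPathFromUrl("zip",
            "file:/app/lib/hbase.jar!/org/apache/hadoop/hbase/HConstants.class"));
        // prints /app/lib/hbase.jar
    }
}
```

With only the "jar" check, the "zip" case above would return null and the caller would later hit the NullPointerException in the ZipFile constructor shown in the stack trace.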
[jira] [Commented] (HBASE-9917) Fix it so Default Connection Pool does not spin up max threads even when not needed
[ https://issues.apache.org/jira/browse/HBASE-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817201#comment-13817201 ] Hadoop QA commented on HBASE-9917: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612779/9917.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7792//console This message is automatically generated. 
Fix it so Default Connection Pool does not spin up max threads even when not needed --- Key: HBASE-9917 URL: https://issues.apache.org/jira/browse/HBASE-9917 Project: HBase Issue Type: Sub-task Components: Client Reporter: stack Assignee: stack Fix For: 0.98.0, 0.96.1 Attachments: 9917.txt, pool.txt Testing, I noticed that if we use the HConnection executor service, as opposed to the executor service that is created when you create an HTable without passing in a connection (i.e. HConnectionManager.createConnection(config).getTable(tableName) vs HTable(config, tableName)), then we will spin up the max 256 threads and they will just hang around even though they are not being used. We are encouraging HConnection#getTable over new HTable, so this is worth fixing. -- This message was sent by Atlassian JIRA (v6.1#6144)
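One standard way to get the behavior the issue asks for, with threads created on demand and reclaimed when idle rather than 256 workers hanging around, is java.util.concurrent's allowCoreThreadTimeOut. This is a hedged sketch of the general technique, not necessarily what the attached 9917.txt does; the class name and the 60-second timeout are assumptions.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSketch {
    // Sketch: a pool bounded at maxThreads whose idle workers time out, so
    // threads are only created as tasks arrive and die after 60s of idleness
    // instead of all 256 staying alive forever once created.
    static ThreadPoolExecutor newBoundedPool(int maxThreads) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            maxThreads, maxThreads,            // core == max; see timeout below
            60L, TimeUnit.SECONDS,             // idle threads die after 60s
            new LinkedBlockingQueue<Runnable>(),
            Executors.defaultThreadFactory());
        // Without this, core threads never time out, which is the "spin up
        // max threads and they just hang out" behavior described above.
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newBoundedPool(256);
        pool.submit(() -> {});                  // creates exactly one worker
        System.out.println(pool.getPoolSize()); // prints 1, not 256
        pool.shutdown();
    }
}
```

The design point is that the 256 is a ceiling, not a working-set size: an unused connection pays for zero threads, and a lightly used one pays for only as many as it actually needs.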
[jira] [Commented] (HBASE-9915) Performance: isSeeked() in EncodedScannerV2 always returns false
[ https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817216#comment-13817216 ] Hudson commented on HBASE-9915: --- SUCCESS: Integrated in hbase-0.96 #184 (See [https://builds.apache.org/job/hbase-0.96/184/]) HBASE-9915 Performance: isSeeked() in EncodedScannerV2 always returns false (larsh: rev 1539934) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java Performance: isSeeked() in EncodedScannerV2 always returns false Key: HBASE-9915 URL: https://issues.apache.org/jira/browse/HBASE-9915 Project: HBase Issue Type: Bug Components: Scanners Reporter: Lars Hofhansl Assignee: Lars Hofhansl Labels: performance Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: 9915-0.94.txt, 9915-trunk-v2.txt, 9915-trunk-v2.txt, 9915-trunk.txt, profile.png While debugging why reseek is so slow I found that it is quite broken for encoded scanners. The problem is this: AbstractScannerV2.reseekTo(...) calls isSeeked() to check whether the scanner was seeked or not. If it was, it checks whether the KV we want to seek to is in the current block; if not, it always consults the index blocks again. isSeeked() checks the blockBuffer member, which is not used by EncodedScannerV2 and thus always returns false, which in turn causes an index lookup for each reseek. -- This message was sent by Atlassian JIRA (v6.1#6144)
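The bug pattern described above, a base-class isSeeked() consulting state the encoded subclass never touches, can be illustrated with a stripped-down sketch. All class and member names here are simplified stand-ins for the HFileReaderV2 internals, and the override shown is the shape of the fix, not the committed patch:

```java
// Simplified stand-ins for HFileReaderV2 internals; not the actual patch.
abstract class AbstractScannerSketch {
    java.nio.ByteBuffer blockBuffer; // only the unencoded scanner sets this

    // Base-class check: for the encoded subclass this buffer is always null,
    // so before the fix every reseek looked "unseeked".
    boolean isSeeked() { return blockBuffer != null; }

    int reseekTo(byte[] key) {
        if (isSeeked()) {
            return 0; // cheap path: key may be within the current block
        }
        return seekViaIndex(key); // expensive path: consult the index blocks
    }

    abstract int seekViaIndex(byte[] key);
}

class EncodedScannerSketch extends AbstractScannerSketch {
    private Object decoderState; // stand-in for the encoded seeker's state

    // The fix: report seeked-ness from state this subclass actually maintains.
    @Override
    boolean isSeeked() { return decoderState != null; }

    @Override
    int seekViaIndex(byte[] key) { decoderState = new Object(); return 1; }
}

public class IsSeekedDemo {
    public static void main(String[] args) {
        EncodedScannerSketch s = new EncodedScannerSketch();
        s.reseekTo(new byte[0]);     // first seek goes through the index
        s.reseekTo(new byte[0]);     // with the override, no second index lookup
    }
}
```

Without the override, the second reseekTo would fall into seekViaIndex again, which is the per-reseek index lookup the profile in the attachment pointed at.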
[jira] [Commented] (HBASE-9915) Performance: isSeeked() in EncodedScannerV2 always returns false
[ https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817230#comment-13817230 ] Hudson commented on HBASE-9915: --- FAILURE: Integrated in HBase-TRUNK #4674 (See [https://builds.apache.org/job/HBase-TRUNK/4674/]) HBASE-9915 Performance: isSeeked() in EncodedScannerV2 always returns false (larsh: rev 1539933) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java Performance: isSeeked() in EncodedScannerV2 always returns false Key: HBASE-9915 URL: https://issues.apache.org/jira/browse/HBASE-9915 Project: HBase Issue Type: Bug Components: Scanners Reporter: Lars Hofhansl Assignee: Lars Hofhansl Labels: performance Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: 9915-0.94.txt, 9915-trunk-v2.txt, 9915-trunk-v2.txt, 9915-trunk.txt, profile.png While debugging why reseek is so slow I found that it is quite broken for encoded scanners. The problem is this: AbstractScannerV2.reseekTo(...) calls isSeeked() to check whether the scanner was seeked or not. If it was, it checks whether the KV we want to seek to is in the current block; if not, it always consults the index blocks again. isSeeked() checks the blockBuffer member, which is not used by EncodedScannerV2 and thus always returns false, which in turn causes an index lookup for each reseek. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9915) Performance: isSeeked() in EncodedScannerV2 always returns false
[ https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817235#comment-13817235 ] Hudson commented on HBASE-9915: --- SUCCESS: Integrated in HBase-0.94-security #331 (See [https://builds.apache.org/job/HBase-0.94-security/331/]) HBASE-9915 Performance: isSeeked() in EncodedScannerV2 always returns false (larsh: rev 1539936) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java Performance: isSeeked() in EncodedScannerV2 always returns false Key: HBASE-9915 URL: https://issues.apache.org/jira/browse/HBASE-9915 Project: HBase Issue Type: Bug Components: Scanners Reporter: Lars Hofhansl Assignee: Lars Hofhansl Labels: performance Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: 9915-0.94.txt, 9915-trunk-v2.txt, 9915-trunk-v2.txt, 9915-trunk.txt, profile.png While debugging why reseek is so slow I found that it is quite broken for encoded scanners. The problem is this: AbstractScannerV2.reseekTo(...) calls isSeeked() to check whether the scanner was seeked or not. If it was, it checks whether the KV we want to seek to is in the current block; if not, it always consults the index blocks again. isSeeked() checks the blockBuffer member, which is not used by EncodedScannerV2 and thus always returns false, which in turn causes an index lookup for each reseek. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9924) avoid filename conflict in region_mover.rb
[ https://issues.apache.org/jira/browse/HBASE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817238#comment-13817238 ] Hadoop QA commented on HBASE-9924: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612792/HBase-9924.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7794//console This message is automatically generated. 
avoid filename conflict in region_mover.rb -- Key: HBASE-9924 URL: https://issues.apache.org/jira/browse/HBASE-9924 Project: HBase Issue Type: Improvement Components: shell Affects Versions: 0.96.0, 0.94.13 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBase-9924.txt When I was working on a shared/common box with a colleague, I found this error while moving regions: NativeException: java.io.FileNotFoundException: /tmp/hh-hadoop-srv-st01.bj (Permission denied) writeFile at /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:283 unloadRegions at /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:354 (root) at /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:480 2013-11-07 15:08:12 Unload host hh-hadoop-srv-st01.bj failed. The root cause is that getFilename in the region mover script currently produces the same output for different users. One possible quick fix is to add the username to the filename. -- This message was sent by Atlassian JIRA (v6.1#6144)
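The quick fix proposed above (including the username in the per-host temp filename so two operators on the same box no longer collide on /tmp/&lt;hostname&gt;) looks roughly like this. region_mover.rb itself is Ruby; for consistency with the rest of this digest the idea is sketched in Java, and getFilename here is a hypothetical stand-in for the script's helper:

```java
public class RegionMoverFileName {
    // Hypothetical stand-in for region_mover.rb's getFilename: previously the
    // path was effectively /tmp/<hostname>, so a second user unloading regions
    // for the same host hit "Permission denied" on the first user's file.
    // Including the invoking user's name makes the path unique per user.
    public static String getFilename(String hostname) {
        String user = System.getProperty("user.name");
        return "/tmp/" + user + "-" + hostname;
    }

    public static void main(String[] args) {
        System.out.println(getFilename("hh-hadoop-srv-st01.bj"));
    }
}
```

Any per-user component (username, or a user-writable temp directory) resolves the conflict; the username is simply the smallest change to the existing naming scheme.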
[jira] [Commented] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time
[ https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817240#comment-13817240 ] Hadoop QA commented on HBASE-9902: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612809/HBASE-9902_v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7795//console This message is automatically generated. Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). 
- Regionserver node time is greater than master node time Key: HBASE-9902 URL: https://issues.apache.org/jira/browse/HBASE-9902 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9902.patch, HBASE-9902_v2.patch When the region server's time is ahead of the master's time and the difference is more than the hbase.master.maxclockskew value, region server startup does not fail with ClockOutOfSyncException. This causes some abnormal behavior, as detected by our tests. ServerManager.java#checkClockSkew: long skew = System.currentTimeMillis() - serverCurrentTime; if (skew > maxSkew) { String message = "Server " + serverName + " has been rejected; Reported time is too far out of sync with master. Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms"; LOG.warn(message); throw new ClockOutOfSyncException(message); } The subtraction above yields a negative value when the master's time is less than the region server's time, so the if (skew > maxSkew) check fails to detect the skew in this case. Please note: this was tested on HBase 0.94.11, and trunk currently has the same logic. The fix would be to make the skew a positive value first, as below: long skew = System.currentTimeMillis() - serverCurrentTime; skew = (skew < 0 ? -skew : skew); if (skew > maxSkew)
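The fix described above amounts to comparing the absolute value of the master/regionserver time difference, so a regionserver running ahead of the master is rejected just like one running behind. A minimal sketch, with the method name and millisecond units assumed:

```java
public class ClockSkewSketch {
    // skew is negative when the regionserver's clock is AHEAD of the master's,
    // which is exactly the case the original (skew > maxSkew) check missed.
    public static boolean isSkewTooLarge(long masterTimeMs, long serverTimeMs,
                                         long maxSkewMs) {
        long skew = masterTimeMs - serverTimeMs;
        skew = (skew < 0 ? -skew : skew); // equivalent to Math.abs(skew)
        return skew > maxSkewMs;
    }

    public static void main(String[] args) {
        // Regionserver 60s ahead of the master, max allowed skew 30s: rejected.
        System.out.println(isSkewTooLarge(1_000_000L, 1_060_000L, 30_000L));
        // prints true
    }
}
```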
[jira] [Updated] (HBASE-9913) weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException
[ https://issues.apache.org/jira/browse/HBASE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-9913: --- Status: Open (was: Patch Available) Hi, The Submit Patch button is used to trigger the automatic compilation when a patch with the correct format, for the trunk version, is uploaded and attached to the ticket. This is not the case here. Therefore, I'm canceling the patch and re-opening the ticket. I think I get what you are trying to fix, and if you want I will help you with the required steps to provide the patch. Just ask me. Please start by reading this: http://hbase.apache.org/book/submitting.patches.html You will need to upload a .diff file for your patch. It should be done against trunk first, but you can start with 0.94, then upload the trunk version and click on Submit Patch. weblogic deployment project implementation under the mapreduce hbase reported a NullPointerException Key: HBASE-9913 URL: https://issues.apache.org/jira/browse/HBASE-9913 Project: HBase Issue Type: Bug Components: hadoop2, mapreduce Affects Versions: 0.94.10 Environment: weblogic windows Reporter: 刘泓 Attachments: TableMapReduceUtil.class, TableMapReduceUtil.java java.lang.NullPointerException at java.io.File.init(File.java:222) at java.util.zip.ZipFile.init(ZipFile.java:75) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.updateMap(TableMapReduceUtil.java:617) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.findOrCreateJar(TableMapReduceUtil.java:597) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:557) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:518) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:144) at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:221) at
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:87) at com.easymap.ezserver6.map.source.hbase.convert.HBaseMapMerge.beginMerge(HBaseMapMerge.java:163) at com.easymap.ezserver6.app.servlet.EzMapToHbaseService.doPost(EzMapToHbaseService.java:32) at javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121) at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) My project is deployed under WebLogic 11, and when I run an HBase MapReduce job it throws a NullPointerException. I found that TableMapReduceUtil.findContainingJar() returns null, so I debugged it: url.getProtocol() returns "zip", but the file is in fact a jar, so the condition if ("jar".equals(url.getProtocol())) never matches. I added a condition to also handle the "zip" protocol. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9915) Performance: isSeeked() in EncodedScannerV2 always returns false
[ https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817255#comment-13817255 ] Hudson commented on HBASE-9915: --- SUCCESS: Integrated in HBase-0.94 #1197 (See [https://builds.apache.org/job/HBase-0.94/1197/]) HBASE-9915 Performance: isSeeked() in EncodedScannerV2 always returns false (larsh: rev 1539936) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java Performance: isSeeked() in EncodedScannerV2 always returns false Key: HBASE-9915 URL: https://issues.apache.org/jira/browse/HBASE-9915 Project: HBase Issue Type: Bug Components: Scanners Reporter: Lars Hofhansl Assignee: Lars Hofhansl Labels: performance Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: 9915-0.94.txt, 9915-trunk-v2.txt, 9915-trunk-v2.txt, 9915-trunk.txt, profile.png While debugging why reseek is so slow I found that it is quite broken for encoded scanners. The problem is this: AbstractScannerV2.reseekTo(...) calls isSeeked() to check whether the scanner was seeked or not. If it was, it checks whether the KV we want to seek to is in the current block; if not, it always consults the index blocks again. isSeeked() checks the blockBuffer member, which is not used by EncodedScannerV2 and thus always returns false, which in turn causes an index lookup for each reseek. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9924) avoid filename conflict in region_mover.rb
[ https://issues.apache.org/jira/browse/HBASE-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817258#comment-13817258 ] Jean-Marc Spaggiari commented on HBASE-9924: Small and straightforward. You might also want to change this: opts.on('-f', '--filename=FILE', 'File to save regions list into unloading, or read from loading; default /tmp/hostname') do |file| to note that the default changed? avoid filename conflict in region_mover.rb -- Key: HBASE-9924 URL: https://issues.apache.org/jira/browse/HBASE-9924 Project: HBase Issue Type: Improvement Components: shell Affects Versions: 0.96.0, 0.94.13 Reporter: Liang Xie Assignee: Liang Xie Attachments: HBase-9924.txt When I was working on a shared/common box with a colleague, I found this error while moving regions: NativeException: java.io.FileNotFoundException: /tmp/hh-hadoop-srv-st01.bj (Permission denied) writeFile at /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:283 unloadRegions at /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:354 (root) at /home/xieliang/infra/hbase/target/hbase-0.94.3-mdh1.0.0-SNAPSHOT/hbase-0.94.3-mdh1.0.0-SNAPSHOT/bin/region_mover.rb:480 2013-11-07 15:08:12 Unload host hh-hadoop-srv-st01.bj failed. The root cause is that getFilename in the region mover script currently produces the same output for different users. One possible quick fix is to add the username to the filename. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time
[ https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817262#comment-13817262 ] rajeshbabu commented on HBASE-9902: --- +1 Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time Key: HBASE-9902 URL: https://issues.apache.org/jira/browse/HBASE-9902 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9902.patch, HBASE-9902_v2.patch When the region server's time is ahead of the master's time and the difference is more than the hbase.master.maxclockskew value, region server startup does not fail with ClockOutOfSyncException. This causes some abnormal behavior, as detected by our tests. ServerManager.java#checkClockSkew: long skew = System.currentTimeMillis() - serverCurrentTime; if (skew > maxSkew) { String message = "Server " + serverName + " has been rejected; Reported time is too far out of sync with master. Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms"; LOG.warn(message); throw new ClockOutOfSyncException(message); } The subtraction above yields a negative value when the master's time is less than the region server's time, so the if (skew > maxSkew) check fails to detect the skew in this case. Please note: this was tested on HBase 0.94.11, and trunk currently has the same logic. The fix would be to make the skew a positive value first, as below: long skew = System.currentTimeMillis() - serverCurrentTime; skew = (skew < 0 ? -skew : skew); if (skew > maxSkew) { -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9920) Lower OK_FINDBUGS_WARNINGS in test-patch.properties
[ https://issues.apache.org/jira/browse/HBASE-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817272#comment-13817272 ] Nicolas Liochon commented on HBASE-9920: +1 Lower OK_FINDBUGS_WARNINGS in test-patch.properties --- Key: HBASE-9920 URL: https://issues.apache.org/jira/browse/HBASE-9920 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Attachments: 9920.txt HBASE-9903 removed generated classes from findbugs checking. OK_FINDBUGS_WARNINGS in test-patch.properties should be lowered. According to https://builds.apache.org/job/PreCommit-HBASE-Build/7776/artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html , there were: 3 warnings for org.apache.hadoop.hbase.generated classes 19 warnings for org.apache.hadoop.hbase.tmpl classes -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Assigned] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox.
[ https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kashif J S reassigned HBASE-9850: - Assignee: Kashif J S Assign for fix Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox. - Key: HBASE-9850 URL: https://issues.apache.org/jira/browse/HBASE-9850 Project: HBase Issue Type: Bug Components: UI Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9850.patch Steps: 1. Create a table with regions. 2. Insert data so that some HFiles are created, but fewer than the minimum compaction file count (say 3 HFiles exist but min compaction files is 10). 3. From the UI, perform a compact operation on the table. The TABLE ACTION REQUEST Accepted page is displayed. 4. The operation fails because the compaction criteria are not met, but the UI keeps resending compaction requests to the server. This happens with IE (history.back() seems to resend the compact/split request); Firefox seems OK in this case. 5. Later, no automatic redirection to the main master page occurs. SOLUTION: table.jsp in the HBase master. The meta tag in the HTML contains a refresh with javascript:history.back(). JavaScript cannot execute inside a meta refresh tag like the one in table.jsp and snapshot.jsp: <meta http-equiv="refresh" content="5,javascript:history.back()" /> This will FAIL. W3Schools also suggests using a refresh in JavaScript rather than the meta tag. If the above meta is replaced as below, the behavior is OK (verified on IE8/Firefox): <script type="text/javascript"> <!-- setTimeout("history.back()",5000); --> </script> Hence table.jsp and snapshot.jsp should be modified as above. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox.
[ https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kashif J S updated HBASE-9850: -- Attachment: HBASE-9850-0.94.14.patch Patch for 0.94 version Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox. - Key: HBASE-9850 URL: https://issues.apache.org/jira/browse/HBASE-9850 Project: HBase Issue Type: Bug Components: UI Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9850-0.94.14.patch, HBASE-9850-trunk.patch, HBASE-9850.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox.
[ https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kashif J S updated HBASE-9850: -- Attachment: HBASE-9850-trunk.patch Patch for trunk Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox. - Key: HBASE-9850 URL: https://issues.apache.org/jira/browse/HBASE-9850 Project: HBase Issue Type: Bug Components: UI Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9850-0.94.14.patch, HBASE-9850-trunk.patch, HBASE-9850.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9920) Lower OK_FINDBUGS_WARNINGS in test-patch.properties
[ https://issues.apache.org/jira/browse/HBASE-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9920: -- Fix Version/s: 0.98.0 Integrated to trunk. Thanks for the review, Nicolas. Lower OK_FINDBUGS_WARNINGS in test-patch.properties --- Key: HBASE-9920 URL: https://issues.apache.org/jira/browse/HBASE-9920 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 Attachments: 9920.txt HBASE-9903 removed generated classes from findbugs checking. OK_FINDBUGS_WARNINGS in test-patch.properties should be lowered. According to https://builds.apache.org/job/PreCommit-HBASE-Build/7776/artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html , there were: 3 warnings for org.apache.hadoop.hbase.generated classes 19 warnings for org.apache.hadoop.hbase.tmpl classes -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817320#comment-13817320 ] Ted Yu commented on HBASE-9808: --- Test failure is not related. Will wait one day in case Jon has more comments. org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation Key: HBASE-9808 URL: https://issues.apache.org/jira/browse/HBASE-9808 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Gustavo Anatoly Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, HBASE-9808-v3.patch, HBASE-9808.patch Here is a list of JIRAs whose fixes might have gone into rest.PerformanceEvaluation : {code} r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line HBASE-9663 PerformanceEvaluation does not properly honor specified table name parameter r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line HBASE-9662 PerformanceEvaluation input do not handle tags properties r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines HBASE-8496 - Implement tags and the internals of how a tag should look like (Ram) r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line HBASE-9558 PerformanceEvaluation is in hbase-server, and creates a dependency to MiniDFSCluster r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line HBASE-9521 clean clearBufferOnFail behavior and deprecate it r1518341 | jdcryans | 2013-08-28 12:46:55 -0700 (Wed, 28 Aug 2013) | 2 lines HBASE-9330 Refactor PE to create HTable the correct way {code} Long term, we may consider consolidating the two PerformanceEvaluation classes so that such maintenance work can be reduced. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9850) Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox
[ https://issues.apache.org/jira/browse/HBASE-9850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817350#comment-13817350 ] Hadoop QA commented on HBASE-9850: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612823/HBASE-9850-trunk.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7797//console This message is automatically generated. Issues with UI for table compact/split operation completion. After split/compaction operation using UI, the page is not automatically redirecting back using IE8/Firefox. 
- Key: HBASE-9850 URL: https://issues.apache.org/jira/browse/HBASE-9850 Project: HBase Issue Type: Bug Components: UI Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9850-0.94.14.patch, HBASE-9850-trunk.patch, HBASE-9850.patch
[jira] [Commented] (HBASE-6461) Killing the HRegionServer and DataNode hosting ROOT can result in a malformed root table.
[ https://issues.apache.org/jira/browse/HBASE-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817363#comment-13817363 ] Jean-Marc Spaggiari commented on HBASE-6461: Has no-one looked at this issue? Killing the HRegionServer and DataNode hosting ROOT can result in a malformed root table. - Key: HBASE-6461 URL: https://issues.apache.org/jira/browse/HBASE-6461 Project: HBase Issue Type: Bug Environment: hadoop-0.20.2-cdh3u3 HBase 0.94.1 RC1 Reporter: Elliott Clark Priority: Critical Spun up a new dfs on hadoop-0.20.2-cdh3u3. Started hbase, started running the loadtest tool, killed the rs and dn holding ROOT with killall -9 java on server sv4r27s44 at about 2012-07-25 22:40:00. After things stabilize, ROOT is in a bad state. Ran hbck and got: Exception in thread "main" org.apache.hadoop.hbase.client.NoServerForRegionException: No server address listed in -ROOT- for region .META.,,1.1028785192 containing row at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1016) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:841) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:810) at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:232) at org.apache.hadoop.hbase.client.HTable.init(HTable.java:172) at org.apache.hadoop.hbase.util.HBaseFsck.connect(HBaseFsck.java:241) at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3236) hbase(main):001:0 scan '-ROOT-' ROW COLUMN+CELL 12/07/25 22:43:18 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing. 
.META.,,1 column=info:regioninfo, timestamp=1343255838525, value={NAME => '.META.,,1', STARTKEY => '', ENDKEY => '', ENCODED => 1028785192,} .META.,,1 column=info:v, timestamp=1343255838525, value=\x00\x00 1 row(s) in 0.5930 seconds Here's the master log: https://gist.github.com/3179194 I tried the same thing with 0.92.1 and I was able to get into a similar situation, so I don't think this is anything new. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9906) Restore snapshot fails to restore the meta edits sporadically
[ https://issues.apache.org/jira/browse/HBASE-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817380#comment-13817380 ] Hudson commented on HBASE-9906: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #116 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/116/]) HBASE-9906 Restore snapshot fails to restore the meta edits sporadically (enis: rev 1539907) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/catalog/MetaEditor.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/RestoreSnapshotHandler.java Restore snapshot fails to restore the meta edits sporadically --- Key: HBASE-9906 URL: https://issues.apache.org/jira/browse/HBASE-9906 Project: HBase Issue Type: Bug Components: snapshots Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: hbase-9906-0.94_v1.patch, hbase-9906_v1.patch After snapshot restore, we see failures to find the table in meta: {code} disable 'tablefour' restore_snapshot 'snapshot_tablefour' enable 'tablefour' ERROR: Table tablefour does not exist. {code} This is quite subtle. From the looks of it, we successfully restore the snapshot, do the meta updates, and return the status to the client. The client then tries to do an operation on the table (like enable table, or a scan in the test outputs), which fails because the meta entry for the region seems to be gone (in the case of a single region, the table will be reported missing). Subsequent attempts to create the table will also fail because the table directories will be there, but not the meta entries. 
For restoring meta entries, we are doing a delete then a put to the same region: {code} 2013-11-04 10:39:51,582 INFO org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper: region to restore: 76d0e2b7ec3291afcaa82e18a56ccc30 2013-11-04 10:39:51,582 INFO org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper: region to remove: fa41edf43fe3ee131db4a34b848ff432 ... 2013-11-04 10:39:52,102 INFO org.apache.hadoop.hbase.catalog.MetaEditor: Deleted [{ENCODED => fa41edf43fe3ee131db4a34b848ff432, NAME => 'tablethree_mod,,1383559723345.fa41edf43fe3ee131db4a34b848ff432.', STARTKEY => '', ENDKEY => ''}, {ENCODED => 76d0e2b7ec3291afcaa82e18a56ccc30, NAME => 'tablethree_mod,,1383561123097.76d0e2b7ec3291afcaa82e18a56ccc30.', STARTKE 2013-11-04 10:39:52,111 INFO org.apache.hadoop.hbase.catalog.MetaEditor: Added 1 {code} The root cause of this sporadic failure is that the delete and the subsequent put will carry the same timestamp if they execute in the same millisecond, and a delete masks a put at the same timestamp, even though the put was applied later. See: HBASE-9905, HBASE-8770 Credit goes to [~huned] for reporting this bug. -- This message was sent by Atlassian JIRA (v6.1#6144)
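The same-millisecond masking described above can be sketched in a few lines. This is a simplified model of HBase's cell-visibility rule (a Delete tombstone at timestamp T masks any Put with timestamp <= T), not the actual MetaEditor code:

```java
// Simplified model (not the real HBase code) of tombstone masking:
// a Delete at timestamp deleteTs hides any Put with putTs <= deleteTs.
public class TombstoneMask {
    // true if a Put written at putTs survives a Delete tombstone at deleteTs
    static boolean putVisible(long putTs, long deleteTs) {
        return putTs > deleteTs;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // delete-then-put in the same millisecond: the restored row vanishes
        System.out.println(putVisible(now, now));      // false
        // fix direction: give the put a strictly larger timestamp
        System.out.println(putVisible(now + 1, now));  // true
    }
}
```

In this model the fix is to ensure the restoring put carries a timestamp strictly greater than the delete's, rather than relying on the two operations landing in different milliseconds.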
[jira] [Commented] (HBASE-9900) Fix unintended byte[].toString in AccessController
[ https://issues.apache.org/jira/browse/HBASE-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817382#comment-13817382 ] Hudson commented on HBASE-9900: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #116 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/116/]) HBASE-9900. Fix unintended byte[].toString in AccessController (apurtell: rev 1539883) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/TableAuthManager.java Fix unintended byte[].toString in AccessController -- Key: HBASE-9900 URL: https://issues.apache.org/jira/browse/HBASE-9900 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.1 Reporter: Andrew Purtell Assignee: Andrew Purtell Fix For: 0.98.0, 0.96.1 Attachments: 9900.patch Found while running FindBugs for another change. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9915) Performance: isSeeked() in EncodedScannerV2 always returns false
[ https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817381#comment-13817381 ] Hudson commented on HBASE-9915: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #116 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/116/]) HBASE-9915 Performance: isSeeked() in EncodedScannerV2 always returns false (larsh: rev 1539934) * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java Performance: isSeeked() in EncodedScannerV2 always returns false Key: HBASE-9915 URL: https://issues.apache.org/jira/browse/HBASE-9915 Project: HBase Issue Type: Bug Components: Scanners Reporter: Lars Hofhansl Assignee: Lars Hofhansl Labels: performance Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: 9915-0.94.txt, 9915-trunk-v2.txt, 9915-trunk-v2.txt, 9915-trunk.txt, profile.png While debugging why reseek is so slow I found that it is quite broken for encoded scanners. The problem is this: AbstractScannerV2.reseekTo(...) calls isSeeked() to check whether the scanner was seeked or not. If it was, it checks whether the KV we want to seek to is in the current block; if not, it always consults the index blocks again. isSeeked() checks the blockBuffer member, which is not used by EncodedScannerV2 and thus always returns false, which in turn causes an index lookup for each reseek. -- This message was sent by Atlassian JIRA (v6.1#6144)
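The inheritance bug can be illustrated with a stripped-down sketch. The names mirror the report, but this is not the actual HFileReaderV2 code: the encoded scanner seeks by setting its own state, yet inherits an isSeeked() that checks a field it never touches.

```java
// Stripped-down illustration of the HBASE-9915 bug (illustrative classes,
// not the real scanner hierarchy).
public class ScannerSketch {
    static class BaseScanner {
        protected Object blockBuffer;               // set by unencoded scanners
        boolean isSeeked() { return blockBuffer != null; }
        void seekTo() { blockBuffer = new Object(); }
    }
    // Buggy: after seekTo(), isSeeked() still reports false, so every reseek
    // falls back to an index-block lookup.
    static class EncodedScannerBuggy extends BaseScanner {
        protected Object seeker;                    // encoded state lives here
        @Override void seekTo() { seeker = new Object(); }
    }
    // Fixed: override isSeeked() to consult the state this scanner maintains.
    static class EncodedScannerFixed extends EncodedScannerBuggy {
        @Override boolean isSeeked() { return seeker != null; }
    }

    public static void main(String[] args) {
        EncodedScannerBuggy buggy = new EncodedScannerBuggy();
        buggy.seekTo();
        System.out.println(buggy.isSeeked());       // false: the reported bug
        EncodedScannerFixed fixed = new EncodedScannerFixed();
        fixed.seekTo();
        System.out.println(fixed.isSeeked());       // true
    }
}
```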
[jira] [Updated] (HBASE-6461) Killing the HRegionServer and DataNode hosting ROOT can result in a malformed root table.
[ https://issues.apache.org/jira/browse/HBASE-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-6461: - Affects Version/s: 0.94.14 Putting against next 0.94 so it gets some attention if only to have it punted again. Killing the HRegionServer and DataNode hosting ROOT can result in a malformed root table. - Key: HBASE-6461 URL: https://issues.apache.org/jira/browse/HBASE-6461 Project: HBase Issue Type: Bug Affects Versions: 0.94.14 Environment: hadoop-0.20.2-cdh3u3 HBase 0.94.1 RC1 Reporter: Elliott Clark Priority: Critical -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9917) Fix it so Default Connection Pool does not spin up max threads even when not needed
[ https://issues.apache.org/jira/browse/HBASE-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817396#comment-13817396 ] stack commented on HBASE-9917: -- Any chance of a review here? Previously we had 256 hard-pegged threads just sitting there idle. Now we have only threads that are doing work, or threads waiting around a while (keep-alive of 10 seconds) to see if more work is about to come (else they die). Fix it so Default Connection Pool does not spin up max threads even when not needed --- Key: HBASE-9917 URL: https://issues.apache.org/jira/browse/HBASE-9917 Project: HBase Issue Type: Sub-task Components: Client Reporter: stack Assignee: stack Fix For: 0.98.0, 0.96.1 Attachments: 9917.txt, pool.txt Testing, I noticed that if we use the HConnection executor service, as opposed to the executor service that is created when you create an HTable without passing in a connection, i.e. HConnectionManager.createConnection(config).getTable(tableName) vs HTable(config, tableName), then we will spin up the max 256 threads and they will just hang around despite not being used. We are encouraging HConnection#getTable over new HTable, so this is worth fixing. -- This message was sent by Atlassian JIRA (v6.1#6144)
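The pooling behavior described in the comment can be sketched with a plain java.util.concurrent.ThreadPoolExecutor. The 256-thread cap and 10-second keep-alive come from the comment above; everything else is an illustrative configuration, not HBase's actual pool code:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative configuration: up to 256 workers, created only on demand,
// each dying after 10 seconds of idleness instead of idling forever.
public class PoolSketch {
    static ThreadPoolExecutor newPool() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            256, 256,                              // at most 256 workers
            10, TimeUnit.SECONDS,                  // idle keep-alive
            new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);         // let even core threads die
        return pool;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        System.out.println(pool.getPoolSize());      // 0: nothing pre-spun
        pool.submit(() -> {});
        System.out.println(pool.getPoolSize() >= 1); // true: created on demand
        pool.shutdown();
    }
}
```

Without allowCoreThreadTimeOut(true), threads up to corePoolSize never time out, which is how a pool ends up with hundreds of permanently idle workers.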
[jira] [Commented] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs
[ https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817405#comment-13817405 ] stack commented on HBASE-9117: -- [~nkeywal] Do the createConnections have a corresponding close? I suppose this is a pretty radical change, making it so that every time we create a scanner, even a short one, we make a new connection; is that more heavyweight? I love the cleanup. Are we going to break clients? I suppose we've deprecated these for a long time now, AND what we have is really messy to make sense of. Remove HTablePool and all HConnection pooling related APIs -- Key: HBASE-9117 URL: https://issues.apache.org/jira/browse/HBASE-9117 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.98.0 Attachments: HBASE-9117.00.patch The recommended way is now: # Create an HConnection: HConnectionManager.createConnection(...) # Create a light HTable: HConnection.getTable(...) # table.close() # connection.close() All other APIs and pooling will be removed. -- This message was sent by Atlassian JIRA (v6.1#6144)
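The recommended lifecycle above (create connection, get a light table, close table, then close connection) maps naturally onto try-with-resources, which closes in reverse order of acquisition. A stand-in sketch, since the real HConnection/HTable classes need a running cluster:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in classes (not the real HBase client API) modeling the recommended
// lifecycle; try-with-resources closes the table before the connection.
public class LifecycleSketch {
    static final List<String> closed = new ArrayList<>();

    static class Connection implements AutoCloseable {
        Table getTable(String name) { return new Table(); }   // light handle
        public void close() { closed.add("connection"); }
    }
    static class Table implements AutoCloseable {
        public void close() { closed.add("table"); }
    }

    public static void main(String[] args) {
        try (Connection conn = new Connection();
             Table table = conn.getTable("t")) {
            // ... use table ...
        } // closes table first, then connection, even on exceptions
        System.out.println(closed); // [table, connection]
    }
}
```

The point of the pattern is that the heavyweight object (the connection) is created once and shared, while table handles are cheap to create and close per use.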
[jira] [Updated] (HBASE-9907) Rig to fake a cluster so can profile client behaviors
[ https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9907: - Attachment: 9907v3.txt See if hadoopqa has any attention for me Rig to fake a cluster so can profile client behaviors - Key: HBASE-9907 URL: https://issues.apache.org/jira/browse/HBASE-9907 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.96.0 Reporter: stack Assignee: stack Fix For: 0.98.0, 0.96.1 Attachments: 9907.txt, 9907.txt, 9907v2.txt, 9907v3.txt, 9907v3.txt Patch carried over from HBASE-9775 parent issue. Adds to the TestClientNoCluster#main a rig that allows faking many clients against a few servers and the opposite. Useful for studying client operation. Includes a few changes to pb makings to try and save on a few creations. Also has an edit of the javadoc on how to create an HConnection and HTable, trying to be more forceful about pointing you in the right direction ([~lhofhansl] -- mind reviewing these javadoc changes?) I have a +1 already on this patch up in the parent issue. Will run by hadoopqa to make sure all is good before commit. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9907) Rig to fake a cluster so can profile client behaviors
[ https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9907: - Status: Open (was: Patch Available) Rig to fake a cluster so can profile client behaviors - Key: HBASE-9907 URL: https://issues.apache.org/jira/browse/HBASE-9907 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.96.0 Reporter: stack Assignee: stack Fix For: 0.98.0, 0.96.1 Attachments: 9907.txt, 9907.txt, 9907v2.txt, 9907v3.txt, 9907v3.txt -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9907) Rig to fake a cluster so can profile client behaviors
[ https://issues.apache.org/jira/browse/HBASE-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9907: - Status: Patch Available (was: Open) Rig to fake a cluster so can profile client behaviors - Key: HBASE-9907 URL: https://issues.apache.org/jira/browse/HBASE-9907 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.96.0 Reporter: stack Assignee: stack Fix For: 0.98.0, 0.96.1 Attachments: 9907.txt, 9907.txt, 9907v2.txt, 9907v3.txt, 9907v3.txt -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9921) stripe compaction - findbugs and javadoc issues, some improvements
[ https://issues.apache.org/jira/browse/HBASE-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817469#comment-13817469 ] Ted Yu commented on HBASE-9921: --- +1 stripe compaction - findbugs and javadoc issues, some improvements -- Key: HBASE-9921 URL: https://issues.apache.org/jira/browse/HBASE-9921 Project: HBase Issue Type: Task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor Attachments: HBASE-9921.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9915) Performance: isSeeked() in EncodedScannerV2 always returns false
[ https://issues.apache.org/jira/browse/HBASE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817484#comment-13817484 ] Hudson commented on HBASE-9915: --- FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #832 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/832/]) HBASE-9915 Performance: isSeeked() in EncodedScannerV2 always returns false (larsh: rev 1539933) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java Performance: isSeeked() in EncodedScannerV2 always returns false Key: HBASE-9915 URL: https://issues.apache.org/jira/browse/HBASE-9915 Project: HBase Issue Type: Bug Components: Scanners Reporter: Lars Hofhansl Assignee: Lars Hofhansl Labels: performance Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: 9915-0.94.txt, 9915-trunk-v2.txt, 9915-trunk-v2.txt, 9915-trunk.txt, profile.png -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9902) Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time
[ https://issues.apache.org/jira/browse/HBASE-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817488#comment-13817488 ] Ted Yu commented on HBASE-9902: --- +1 Patch is needed for 0.94 Region Server is starting normally even if clock skew is more than default 30 seconds(or any configured). - Regionserver node time is greater than master node time Key: HBASE-9902 URL: https://issues.apache.org/jira/browse/HBASE-9902 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.94.11 Reporter: Kashif J S Assignee: Kashif J S Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: HBASE-9902.patch, HBASE-9902_v2.patch When the Region Server's time is ahead of the Master's time and the difference is more than the hbase.master.maxclockskew value, region server startup does not fail with ClockOutOfSyncException. This causes some abnormal behavior, as detected by our tests. ServerManager.java#checkClockSkew: long skew = System.currentTimeMillis() - serverCurrentTime; if (skew > maxSkew) { String message = "Server " + serverName + " has been rejected; Reported time is too far out of sync with master. Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms"; LOG.warn(message); throw new ClockOutOfSyncException(message); } The line above results in a negative value when the Master's time is less than the region server's time, and the if (skew > maxSkew) check then fails to detect the skew. Please note: this was tested on hbase 0.94.11, and trunk currently has the same logic. The fix would be to make the skew positive first, as below: long skew = System.currentTimeMillis() - serverCurrentTime; skew = (skew < 0 ? -skew : skew); if (skew > maxSkew) {. -- This message was sent by Atlassian JIRA (v6.1#6144)
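The proposed fix can be checked with a minimal, self-contained version of the skew test (a sketch of the idea, not the actual ServerManager patch):

```java
// Sketch of the HBASE-9902 fix: take the absolute value of the master/
// regionserver time difference before comparing against maxSkew, so a
// regionserver AHEAD of the master is rejected too.
public class ClockSkewCheck {
    static boolean rejected(long masterTime, long serverTime, long maxSkew) {
        long skew = masterTime - serverTime;
        skew = (skew < 0 ? -skew : skew);   // equivalently Math.abs(skew)
        return skew > maxSkew;
    }

    public static void main(String[] args) {
        // server 40s behind master: caught even by the original code
        System.out.println(rejected(100_000, 60_000, 30_000));  // true
        // server 40s ahead of master: skew is -40000, missed without abs()
        System.out.println(rejected(60_000, 100_000, 30_000));  // true with fix
        // server only 10s off: within the allowed skew
        System.out.println(rejected(100_000, 90_000, 30_000));  // false
    }
}
```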
[jira] [Commented] (HBASE-9900) Fix unintended byte[].toString in AccessController
[ https://issues.apache.org/jira/browse/HBASE-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817485#comment-13817485 ] Hudson commented on HBASE-9900: --- FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #832 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/832/]) HBASE-9900. Fix unintended byte[].toString in AccessController (apurtell: rev 1539882) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/TableAuthManager.java Fix unintended byte[].toString in AccessController -- Key: HBASE-9900 URL: https://issues.apache.org/jira/browse/HBASE-9900 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.1 Reporter: Andrew Purtell Assignee: Andrew Purtell Fix For: 0.98.0, 0.96.1 Attachments: 9900.patch Found while running FindBugs for another change. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-4654) [replication] Add a check to make sure we don't replicate to ourselves
[ https://issues.apache.org/jira/browse/HBASE-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817489#comment-13817489 ] Demai Ni commented on HBASE-4654: - hi, folks, I encountered the same problem last week. email@dev list: http://mail-archives.apache.org/mod_mbox/hbase-dev/201310.mbox/%3CCAOEq2C5g7-8MfUBSdzeTgzNFJU6pkP3cMY_62N18z3pRXe2SMw%40mail.gmail.com%3E My case was (on hbase 0.94.9) that a zoo.cfg was put under ./hbase/conf. I was thinking about opening a jira for the sanity check (exactly the same idea), and was glad to find this jira before opening a dup. Just curious why this jira hasn't been pushed into trunk and 0.94 yet, since there was no strong objection in the comments. Well, I understand that this is a rare case, but a couple of lines of code can save someone (like me) several hours of debugging, which sounds like a good idea. The jira is still unassigned. I can do some testing and upload an up-to-date patch (both trunk and 0.94) if everyone is busy. thanks P.S. per the comments about checking 'addPeer', I tried it (add_peer with zookeeper info of the master cluster), which won't cause the UUID case in replicateSource. So far, the only case I am aware of is the one reported with the incorrect zoo.cfg. Demai [replication] Add a check to make sure we don't replicate to ourselves -- Key: HBASE-4654 URL: https://issues.apache.org/jira/browse/HBASE-4654 Project: HBase Issue Type: Improvement Affects Versions: 0.90.4 Reporter: Jean-Daniel Cryans Fix For: 0.92.3 Attachments: 4654-trunk.txt It's currently possible to add a peer for replication and point it to the local cluster, which I believe could very well happen for those like us that use only one ZK ensemble per DC so that only the root znode changes when you want to set up replication intra-DC. I don't think comparing just the cluster ID would be enough because you would normally use a different one for another cluster and nothing will block you from pointing elsewhere. 
Comparing the ZK ensemble address doesn't work either when you have multiple DNS entries that point at the same place. I think this could be resolved by looking up the master address in the relevant znode as it should be exactly the same thing in the case where you have the same cluster. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9920) Lower OK_FINDBUGS_WARNINGS in test-patch.properties
[ https://issues.apache.org/jira/browse/HBASE-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817482#comment-13817482 ] Hudson commented on HBASE-9920: --- FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #832 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/832/]) HBASE-9920 Lower OK_FINDBUGS_WARNINGS in test-patch.properties (tedyu: rev 1540059) * /hbase/trunk/dev-support/test-patch.properties Lower OK_FINDBUGS_WARNINGS in test-patch.properties --- Key: HBASE-9920 URL: https://issues.apache.org/jira/browse/HBASE-9920 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 Attachments: 9920.txt HBASE-9903 removed generated classes from findbugs checking. OK_FINDBUGS_WARNINGS in test-patch.properties should be lowered. According to https://builds.apache.org/job/PreCommit-HBASE-Build/7776/artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html , there were: 3 warnings for org.apache.hadoop.hbase.generated classes 19 warnings for org.apache.hadoop.hbase.tmpl classes -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9906) Restore snapshot fails to restore the meta edits sporadically
[ https://issues.apache.org/jira/browse/HBASE-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817483#comment-13817483 ] Hudson commented on HBASE-9906: --- FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #832 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/832/]) HBASE-9906 Restore snapshot fails to restore the meta edits sporadically (enis: rev 1539906) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/catalog/MetaEditor.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/RestoreSnapshotHandler.java Restore snapshot fails to restore the meta edits sporadically --- Key: HBASE-9906 URL: https://issues.apache.org/jira/browse/HBASE-9906 Project: HBase Issue Type: Bug Components: snapshots Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.1, 0.94.14 Attachments: hbase-9906-0.94_v1.patch, hbase-9906_v1.patch After snapshot restore, we see failures to find the table in meta: {code} disable 'tablefour' restore_snapshot 'snapshot_tablefour' enable 'tablefour' ERROR: Table tablefour does not exist.' {code} This is quite subtle. From the looks of it, we successfully restore the snapshot, do the meta updates, and return the status to the client. The client then tries to do an operation on the table (like enable table, or scan in the test outputs) which fails because the meta entry for the region seems to be gone (in the case of a single region, the table will be reported missing). Subsequent attempts at creating the table will also fail because the table directories will be there, but not the meta entries. 
For restoring meta entries, we are doing a delete then a put to the same region: {code} 2013-11-04 10:39:51,582 INFO org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper: region to restore: 76d0e2b7ec3291afcaa82e18a56ccc30 2013-11-04 10:39:51,582 INFO org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper: region to remove: fa41edf43fe3ee131db4a34b848ff432 ... 2013-11-04 10:39:52,102 INFO org.apache.hadoop.hbase.catalog.MetaEditor: Deleted [{ENCODED => fa41edf43fe3ee131db4a34b848ff432, NAME => 'tablethree_mod,,1383559723345.fa41edf43fe3ee131db4a34b848ff432.', STARTKEY => '', ENDKEY => ''}, {ENCODED => 76d0e2b7ec3291afcaa82e18a56ccc30, NAME => 'tablethree_mod,,1383561123097.76d0e2b7ec3291afcaa82e18a56ccc30.', STARTKE 2013-11-04 10:39:52,111 INFO org.apache.hadoop.hbase.catalog.MetaEditor: Added 1 {code} The root cause of this sporadic failure is that the delete and the subsequent put will have the same timestamp if they execute in the same ms. The delete will override the put at the same ts, even though the put has a larger ts. See: HBASE-9905, HBASE-8770 Credit goes to [~huned] for reporting this bug. -- This message was sent by Atlassian JIRA (v6.1#6144)
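The root cause follows directly from HBase's timestamp semantics: a delete marker at timestamp T masks any put with timestamp <= T, regardless of wall-clock order. The toy model below (plain Java, not HBase code) reproduces the failure mode: a delete and a restore-put landing in the same millisecond leave the row invisible.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the HBASE-9906 failure: a delete marker at timestamp T
// masks puts with timestamp <= T, even puts issued later in wall-clock
// order. The restore handler's delete and put can land in the same
// millisecond, so the freshly written meta row is masked.
class TimestampModel {
    private final Map<String, String> values = new HashMap<>();
    private final Map<String, Long> putTs = new HashMap<>();
    private final Map<String, Long> deleteTs = new HashMap<>();

    void put(String row, String value, long ts) {
        values.put(row, value);
        putTs.put(row, ts);
    }

    void delete(String row, long ts) {
        deleteTs.merge(row, ts, Math::max);   // keep the newest delete marker
    }

    // A row is visible only if its put is strictly newer than any delete marker.
    String get(String row) {
        Long ts = putTs.get(row);
        if (ts == null) return null;
        Long del = deleteTs.get(row);
        if (del != null && ts <= del) return null;   // masked by the delete
        return values.get(row);
    }
}
```

Bumping the put's timestamp past the delete marker (or avoiding the delete/put race entirely) makes the row visible again, which is the behavior the patch has to guarantee.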
[jira] [Resolved] (HBASE-6928) TestStoreFile sometimes fails with 'Column family prefix used twice'
[ https://issues.apache.org/jira/browse/HBASE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HBASE-6928. --- Resolution: Cannot Reproduce TestStoreFile sometimes fails with 'Column family prefix used twice' Key: HBASE-6928 URL: https://issues.apache.org/jira/browse/HBASE-6928 Project: HBase Issue Type: Bug Reporter: Ted Yu Attachments: 6928-debug.txt, 6928_attempted_fix.txt In build #3406, I saw: {code} java.lang.AssertionError: Column family prefix used twice: cf.cf.bt.Data.fsReadnumops at org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.validateMetricChanges(SchemaMetrics.java:822) at org.apache.hadoop.hbase.regionserver.TestStoreFile.tearDown(TestStoreFile.java:89) {code} -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (HBASE-6893) TestRegionRebalancing#testRebalanceOnRegionServerNumberChange occasionally fails
[ https://issues.apache.org/jira/browse/HBASE-6893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HBASE-6893. --- Resolution: Cannot Reproduce TestRegionRebalancing#testRebalanceOnRegionServerNumberChange occasionally fails Key: HBASE-6893 URL: https://issues.apache.org/jira/browse/HBASE-6893 Project: HBase Issue Type: Bug Reporter: Ted Yu In trunk build #3387 (https://builds.apache.org/view/G-L/view/HBase/job/HBase-TRUNK/3387/testReport/org.apache.hadoop.hbase/TestRegionRebalancing/testRebalanceOnRegionServerNumberChange_0_/): {code} java.lang.AssertionError: After 5 attempts, region assignments were not balanced. at org.junit.Assert.fail(Assert.java:93) at org.apache.hadoop.hbase.TestRegionRebalancing.assertRegionsAreBalanced(TestRegionRebalancing.java:219) at org.apache.hadoop.hbase.TestRegionRebalancing.testRebalanceOnRegionServerNumberChange(TestRegionRebalancing.java:139) {code} -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9917) Fix it so Default Connection Pool does not spin up max threads even when not needed
[ https://issues.apache.org/jira/browse/HBASE-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817517#comment-13817517 ] Nicolas Liochon commented on HBASE-9917: Skimmed through the patch, seems ok. +1. Fix it so Default Connection Pool does not spin up max threads even when not needed --- Key: HBASE-9917 URL: https://issues.apache.org/jira/browse/HBASE-9917 Project: HBase Issue Type: Sub-task Components: Client Reporter: stack Assignee: stack Fix For: 0.98.0, 0.96.1 Attachments: 9917.txt, pool.txt Testing, I noticed that if we use the HConnection executor service as opposed to the executor service that is created when you create an HTable without passing in a connection: i.e HConnectionManager.createConnection(config).getTable(tableName) vs HTable(config, tableName) ... then we will spin up the max 256 threads and they will just hang out though not being used. We are encouraging HConnection#getTable over new HTable so worth fixing. -- This message was sent by Atlassian JIRA (v6.1#6144)
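The JDK side of stack's observation: a ThreadPoolExecutor creates threads lazily as tasks arrive, but core threads are never reclaimed unless core-thread timeout is enabled, so a pool sized at 256 can end up pinning 256 idle threads. A standalone sketch of a pool that is capped but holds no threads while idle (illustrative, using only java.util.concurrent; not the actual HConnection code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch for the HBASE-9917 theme: cap the pool at maxThreads but let
// idle threads expire, so an unused connection doesn't keep hundreds of
// live threads around.
class LazyPool {
    static ThreadPoolExecutor create(int maxThreads) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            maxThreads, maxThreads,
            60L, TimeUnit.SECONDS,             // idle threads expire after 60s
            new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);     // without this, core threads live forever
        return pool;
    }
}
```

Threads are only spun up when tasks are actually submitted, and idle ones die back down, instead of all 256 "hanging out though not being used".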
[jira] [Created] (HBASE-9926) Scanner doesn't check if a region is available
Jimmy Xiang created HBASE-9926: -- Summary: Scanner doesn't check if a region is available Key: HBASE-9926 URL: https://issues.apache.org/jira/browse/HBASE-9926 Project: HBase Issue Type: Bug Reporter: Jimmy Xiang Assignee: Jimmy Xiang Currently the scanner doesn't check if a region is closing/closed. If a region is closed, then reopened, an old scanner could still refer to the closed HRegion instance. So the scanner will miss some store file changes due to compaction. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (HBASE-9925) Don't close a file if doesn't EOF while replicating
Himanshu Vashishtha created HBASE-9925: -- Summary: Don't close a file if doesn't EOF while replicating Key: HBASE-9925 URL: https://issues.apache.org/jira/browse/HBASE-9925 Project: HBase Issue Type: Bug Affects Versions: 0.96.0, 0.98.0 Reporter: Himanshu Vashishtha While doing replication, we open and close the WAL file _every_ time we read entries to send. We could open/close the reader only when we hit EOF. That would alleviate some NN load, especially on a write heavy cluster. This came while discussing our current open/close heuristic in replication with [~jdcryans]. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9918) MasterAddressTracker ZKNamespaceManager ZK listeners are missed after master recovery
[ https://issues.apache.org/jira/browse/HBASE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeffrey Zhong updated HBASE-9918: - Status: Patch Available (was: Open) MasterAddressTracker ZKNamespaceManager ZK listeners are missed after master recovery --- Key: HBASE-9918 URL: https://issues.apache.org/jira/browse/HBASE-9918 Project: HBase Issue Type: Bug Reporter: Jeffrey Zhong Attachments: HBase-9918.patch TestZooKeeper#testRegionAssignmentAfterMasterRecoveryDueToZKExpiry always failed at the following verification for me in my dev env (you have to run the single test, not the whole TestZooKeeper suite, to reproduce) {code} assertEquals("Number of rows should be equal to number of puts.", numberOfPuts, numberOfRows); {code} We missed two ZK listeners after master recovery: MasterAddressTracker and ZKNamespaceManager. My current patch fixes the JIRA issue, while I'm wondering if we should totally remove the master failover implementation when the ZK session expires, because this causes HMaster to be partially reinitialized, which is error prone and not a clean state to start from. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Assigned] (HBASE-9918) MasterAddressTracker ZKNamespaceManager ZK listeners are missed after master recovery
[ https://issues.apache.org/jira/browse/HBASE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeffrey Zhong reassigned HBASE-9918: Assignee: Jeffrey Zhong MasterAddressTracker ZKNamespaceManager ZK listeners are missed after master recovery --- Key: HBASE-9918 URL: https://issues.apache.org/jira/browse/HBASE-9918 Project: HBase Issue Type: Bug Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Attachments: HBase-9918.patch TestZooKeeper#testRegionAssignmentAfterMasterRecoveryDueToZKExpiry always failed at the following verification for me in my dev env (you have to run the single test, not the whole TestZooKeeper suite, to reproduce) {code} assertEquals("Number of rows should be equal to number of puts.", numberOfPuts, numberOfRows); {code} We missed two ZK listeners after master recovery: MasterAddressTracker and ZKNamespaceManager. My current patch fixes the JIRA issue, while I'm wondering if we should totally remove the master failover implementation when the ZK session expires, because this causes HMaster to be partially reinitialized, which is error prone and not a clean state to start from. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9922) Need to delete a row based on the column name & value (not the row key)... please provide the delete query for the same...
[ https://issues.apache.org/jira/browse/HBASE-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817548#comment-13817548 ] Vladimir Rodionov commented on HBASE-9922: -- He is very persistent. Need to delete a row based on the column name & value (not the row key)... please provide the delete query for the same... Key: HBASE-9922 URL: https://issues.apache.org/jira/browse/HBASE-9922 Project: HBase Issue Type: Bug Reporter: ranjini -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9917) Fix it so Default Connection Pool does not spin up max threads even when not needed
[ https://issues.apache.org/jira/browse/HBASE-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817550#comment-13817550 ] Elliott Clark commented on HBASE-9917: -- +1 lgtm Fix it so Default Connection Pool does not spin up max threads even when not needed --- Key: HBASE-9917 URL: https://issues.apache.org/jira/browse/HBASE-9917 Project: HBase Issue Type: Sub-task Components: Client Reporter: stack Assignee: stack Fix For: 0.98.0, 0.96.1 Attachments: 9917.txt, pool.txt Testing, I noticed that if we use the HConnection executor service as opposed to the executor service that is created when you create an HTable without passing in a connection: i.e HConnectionManager.createConnection(config).getTable(tableName) vs HTable(config, tableName) ... then we will spin up the max 256 threads and they will just hang out though not being used. We are encouraging HConnection#getTable over new HTable so worth fixing. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9895) 0.96 Import utility can't import an exported file from 0.94
[ https://issues.apache.org/jira/browse/HBASE-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817571#comment-13817571 ] Nick Dimiduk commented on HBASE-9895: - This looks like magic to me; where is setConf(Configuration) called on the Serialization instance? When will getConf() be non-null? 0.96 Import utility can't import an exported file from 0.94 --- Key: HBASE-9895 URL: https://issues.apache.org/jira/browse/HBASE-9895 Project: HBase Issue Type: Bug Components: mapreduce Affects Versions: 0.96.0 Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Attachments: hbase-9895.patch Basically we PBed org.apache.hadoop.hbase.client.Result so a 0.96 cluster cannot import 0.94 exported files. This issue is annoying because a user can't import his old archive files after upgrade or archives from others who are using 0.94. The ideal way is to catch deserialization error and then fall back to 0.94 format for importing. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Assigned] (HBASE-9916) Fix javadoc warning in StoreFileManager.java
[ https://issues.apache.org/jira/browse/HBASE-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HBASE-9916: --- Assignee: Sergey Shelukhin Fix javadoc warning in StoreFileManager.java Key: HBASE-9916 URL: https://issues.apache.org/jira/browse/HBASE-9916 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Sergey Shelukhin Priority: Minor From https://builds.apache.org/job/PreCommit-HBASE-Build/7779/artifact/trunk/patchprocess/patchJavadocWarnings.txt : {code} [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileManager.java:53: warning - @param argument sf is not a parameter name. {code} -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (HBASE-9916) Fix javadoc warning in StoreFileManager.java
[ https://issues.apache.org/jira/browse/HBASE-9916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin resolved HBASE-9916. - Resolution: Duplicate Fix javadoc warning in StoreFileManager.java Key: HBASE-9916 URL: https://issues.apache.org/jira/browse/HBASE-9916 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Sergey Shelukhin Priority: Minor From https://builds.apache.org/job/PreCommit-HBASE-Build/7779/artifact/trunk/patchprocess/patchJavadocWarnings.txt : {code} [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileManager.java:53: warning - @param argument sf is not a parameter name. {code} -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9895) 0.96 Import utility can't import an exported file from 0.94
[ https://issues.apache.org/jira/browse/HBASE-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817580#comment-13817580 ] Jeffrey Zhong commented on HBASE-9895: -- The class ResultSerialization extends Configured. Therefore, when mapreduce initializes those classes, the configuration will be passed to the new instance automatically (magically). 0.96 Import utility can't import an exported file from 0.94 --- Key: HBASE-9895 URL: https://issues.apache.org/jira/browse/HBASE-9895 Project: HBase Issue Type: Bug Components: mapreduce Affects Versions: 0.96.0 Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Attachments: hbase-9895.patch Basically we PBed org.apache.hadoop.hbase.client.Result so a 0.96 cluster cannot import 0.94 exported files. This issue is annoying because a user can't import his old archive files after upgrade or archives from others who are using 0.94. The ideal way is to catch deserialization error and then fall back to 0.94 format for importing. -- This message was sent by Atlassian JIRA (v6.1#6144)
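The "magic" Jeffrey describes is the standard Hadoop convention: frameworks instantiate pluggable classes through ReflectionUtils.newInstance(clazz, conf), which calls setConf() on anything Configurable it constructs, and the Configured base class supplies the getConf()/setConf() plumbing. The toy model below mirrors that convention with stand-in types (a String takes the place of Hadoop's Configuration; this is not Hadoop code):

```java
// Toy version of Hadoop's Configurable/Configured/ReflectionUtils triad,
// showing how a Configured subclass like ResultSerialization ends up
// with a non-null getConf() without anyone calling setConf() explicitly.
interface Configurable {
    void setConf(String conf);
    String getConf();
}

class Configured implements Configurable {
    private String conf;
    public void setConf(String conf) { this.conf = conf; }
    public String getConf() { return conf; }
}

class ReflectionUtilsModel {
    // Hadoop's ReflectionUtils.newInstance(clazz, conf) performs this
    // injection step right after construction.
    static <T> T newInstance(Class<T> clazz, String conf) {
        try {
            T obj = clazz.getDeclaredConstructor().newInstance();
            if (obj instanceof Configurable) {
                ((Configurable) obj).setConf(conf);   // the "magic" step
            }
            return obj;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}

class ResultSerializationModel extends Configured { }
```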
[jira] [Updated] (HBASE-9926) Scanner doesn't check if a region is available
[ https://issues.apache.org/jira/browse/HBASE-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-9926: --- Attachment: trunk-9926.patch Scanner doesn't check if a region is available -- Key: HBASE-9926 URL: https://issues.apache.org/jira/browse/HBASE-9926 Project: HBase Issue Type: Bug Reporter: Jimmy Xiang Assignee: Jimmy Xiang Attachments: trunk-9926.patch Currently the scanner doesn't check if a region is closing/closed. If a region is closed, then reopened, an old scanner could still refer to the closed HRegion instance. So the scanner will miss some store file changes due to compaction. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9926) Scanner doesn't check if a region is available
[ https://issues.apache.org/jira/browse/HBASE-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-9926: --- Status: Patch Available (was: Open) Attached a simple patch. Most of the changes shown in the diff are related to a try-block removal. Scanner doesn't check if a region is available -- Key: HBASE-9926 URL: https://issues.apache.org/jira/browse/HBASE-9926 Project: HBase Issue Type: Bug Reporter: Jimmy Xiang Assignee: Jimmy Xiang Attachments: trunk-9926.patch Currently the scanner doesn't check if a region is closing/closed. If a region is closed, then reopened, an old scanner could still refer to the closed HRegion instance. So the scanner will miss some store file changes due to compaction. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817587#comment-13817587 ] Gustavo Anatoly commented on HBASE-9808: Okay, Ted. Do you think that the patch is complete? org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation Key: HBASE-9808 URL: https://issues.apache.org/jira/browse/HBASE-9808 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Gustavo Anatoly Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, HBASE-9808-v3.patch, HBASE-9808.patch Here is list of JIRAs whose fixes might have gone into rest.PerformanceEvaluation : {code} r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line HBASE-9663 PerformanceEvaluation does not properly honor specified table name parameter r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line HBASE-9662 PerformanceEvaluation input do not handle tags properties r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines HBASE-8496 - Implement tags and the internals of how a tag should look like (Ram) r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line HBASE-9558 PerformanceEvaluation is in hbase-server, and creates a dependency to MiniDFSCluster r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line HBASE-9521 clean clearBufferOnFail behavior and deprecate it r1518341 | jdcryans | 2013-08-28 12:46:55 -0700 (Wed, 28 Aug 2013) | 2 lines HBASE-9330 Refactor PE to create HTable the correct way {code} Long term, we may consider consolidating the two PerformanceEvaluation classes so that such maintenance work can be reduced. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817588#comment-13817588 ] Ted Yu commented on HBASE-9808: --- I think so. org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation Key: HBASE-9808 URL: https://issues.apache.org/jira/browse/HBASE-9808 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Gustavo Anatoly Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, HBASE-9808-v3.patch, HBASE-9808.patch Here is list of JIRAs whose fixes might have gone into rest.PerformanceEvaluation : {code} r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line HBASE-9663 PerformanceEvaluation does not properly honor specified table name parameter r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line HBASE-9662 PerformanceEvaluation input do not handle tags properties r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines HBASE-8496 - Implement tags and the internals of how a tag should look like (Ram) r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line HBASE-9558 PerformanceEvaluation is in hbase-server, and creates a dependency to MiniDFSCluster r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line HBASE-9521 clean clearBufferOnFail behavior and deprecate it r1518341 | jdcryans | 2013-08-28 12:46:55 -0700 (Wed, 28 Aug 2013) | 2 lines HBASE-9330 Refactor PE to create HTable the correct way {code} Long term, we may consider consolidating the two PerformanceEvaluation classes so that such maintenance work can be reduced. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9921) stripe compaction - findbugs and javadoc issues, some improvements
[ https://issues.apache.org/jira/browse/HBASE-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817596#comment-13817596 ] Sergey Shelukhin commented on HBASE-9921: - test passes for me, I think it's unrelated. Will commit afternoon-ish stripe compaction - findbugs and javadoc issues, some improvements -- Key: HBASE-9921 URL: https://issues.apache.org/jira/browse/HBASE-9921 Project: HBase Issue Type: Task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor Attachments: HBASE-9921.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs
[ https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817599#comment-13817599 ] Lars Hofhansl commented on HBASE-9117: -- Thanks [~ndimiduk], I'll take a look (probably tomorrow, busy with meeting all day today). Remove HTablePool and all HConnection pooling related APIs -- Key: HBASE-9117 URL: https://issues.apache.org/jira/browse/HBASE-9117 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.98.0 Attachments: HBASE-9117.00.patch The recommended way is now: # Create an HConnection: HConnectionManager.createConnection(...) # Create a light HTable: HConnection.getTable(...) # table.close() # connection.close() All other API and pooling will be removed. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs
[ https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-9117: - Assignee: Nick Dimiduk (was: Lars Hofhansl) Remove HTablePool and all HConnection pooling related APIs -- Key: HBASE-9117 URL: https://issues.apache.org/jira/browse/HBASE-9117 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: Nick Dimiduk Fix For: 0.98.0 Attachments: HBASE-9117.00.patch The recommended way is now: # Create an HConnection: HConnectionManager.createConnection(...) # Create a light HTable: HConnection.getTable(...) # table.close() # connection.close() All other API and pooling will be removed. -- This message was sent by Atlassian JIRA (v6.1#6144)
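The four-step recipe in the issue description maps naturally onto try-with-resources, which closes resources in reverse order of acquisition, i.e. table before connection. A sketch with stand-in classes (not the actual HBase API; the real types are HConnection and HTableInterface):

```java
import java.util.ArrayList;
import java.util.List;

// Models the HBASE-9117 lifecycle: create a connection, get a light
// table from it, close the table, then close the connection.
// try-with-resources enforces exactly that close order.
class Lifecycle {
    static final List<String> closed = new ArrayList<>();

    static class Connection implements AutoCloseable {
        Table getTable(String name) { return new Table(name); }
        public void close() { closed.add("connection"); }
    }

    static class Table implements AutoCloseable {
        final String name;
        Table(String name) { this.name = name; }
        public void close() { closed.add("table"); }
    }

    static void run() {
        try (Connection conn = new Connection();
             Table table = conn.getTable("t")) {
            // use the light-weight table ...
        }   // table.close() runs first, then conn.close()
    }
}
```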
[jira] [Commented] (HBASE-9895) 0.96 Import utility can't import an exported file from 0.94
[ https://issues.apache.org/jira/browse/HBASE-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817601#comment-13817601 ] Nick Dimiduk commented on HBASE-9895: - Alright then. Any chance of adding some kind of test? Maybe a data blob that matches the old format? Looks good otherwise. 0.96 Import utility can't import an exported file from 0.94 --- Key: HBASE-9895 URL: https://issues.apache.org/jira/browse/HBASE-9895 Project: HBase Issue Type: Bug Components: mapreduce Affects Versions: 0.96.0 Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Attachments: hbase-9895.patch Basically we PBed org.apache.hadoop.hbase.client.Result so a 0.96 cluster cannot import 0.94 exported files. This issue is annoying because a user can't import his old archive files after upgrade or archives from others who are using 0.94. The ideal way is to catch deserialization error and then fall back to 0.94 format for importing. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9808) org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817605#comment-13817605 ] Gustavo Anatoly commented on HBASE-9808: In the meantime, I will try to understand the other issue: [HBASE-9809|https://issues.apache.org/jira/browse/HBASE-9809] org.apache.hadoop.hbase.rest.PerformanceEvaluation is out of sync with org.apache.hadoop.hbase.PerformanceEvaluation Key: HBASE-9808 URL: https://issues.apache.org/jira/browse/HBASE-9808 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Gustavo Anatoly Attachments: HBASE-9808-v1.patch, HBASE-9808-v2.patch, HBASE-9808-v3.patch, HBASE-9808.patch Here is list of JIRAs whose fixes might have gone into rest.PerformanceEvaluation : {code} r1527817 | mbertozzi | 2013-09-30 15:57:44 -0700 (Mon, 30 Sep 2013) | 1 line HBASE-9663 PerformanceEvaluation does not properly honor specified table name parameter r1526452 | mbertozzi | 2013-09-26 04:58:50 -0700 (Thu, 26 Sep 2013) | 1 line HBASE-9662 PerformanceEvaluation input do not handle tags properties r1525269 | ramkrishna | 2013-09-21 11:01:32 -0700 (Sat, 21 Sep 2013) | 3 lines HBASE-8496 - Implement tags and the internals of how a tag should look like (Ram) r1524985 | nkeywal | 2013-09-20 06:02:54 -0700 (Fri, 20 Sep 2013) | 1 line HBASE-9558 PerformanceEvaluation is in hbase-server, and creates a dependency to MiniDFSCluster r1523782 | nkeywal | 2013-09-16 13:07:13 -0700 (Mon, 16 Sep 2013) | 1 line HBASE-9521 clean clearBufferOnFail behavior and deprecate it r1518341 | jdcryans | 2013-08-28 12:46:55 -0700 (Wed, 28 Aug 2013) | 2 lines HBASE-9330 Refactor PE to create HTable the correct way {code} Long term, we may consider consolidating the two PerformanceEvaluation classes so that such maintenance work can be reduced. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (HBASE-9927) ReplicationLogCleaner#stop() calls HConnectionManager#deleteConnection() unnecessarily
Ted Yu created HBASE-9927: - Summary: ReplicationLogCleaner#stop() calls HConnectionManager#deleteConnection() unnecessarily Key: HBASE-9927 URL: https://issues.apache.org/jira/browse/HBASE-9927 Project: HBase Issue Type: Task Reporter: Ted Yu Priority: Minor When inspecting log, I found the following: {code} 2013-11-08 18:23:48,472 ERROR [M:0;kiyo:42380.oldLogCleaner] client.HConnectionManager(468): Connection not found in the list, can't delete it (connection key=HConnectionKey{properties={hbase.rpc.timeout=6, hbase.zookeeper.property.clientPort=59832, hbase.client.pause=100, zookeeper.znode.parent=/hbase, hbase.client.retries.number=350, hbase.zookeeper.quorum=localhost}, username='zy'}). May be the key was modified? java.lang.Exception at org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:468) at org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:404) at org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner.stop(ReplicationLogCleaner.java:141) at org.apache.hadoop.hbase.master.cleaner.CleanerChore.cleanup(CleanerChore.java:276) {code} The call to HConnectionManager#deleteConnection() is not needed. Here is related code which has a comment for this effect: {code} // Not sure why we're deleting a connection that we never acquired or used HConnectionManager.deleteConnection(this.getConf()); {code} -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Assigned] (HBASE-9829) make the compaction logging less confusing
[ https://issues.apache.org/jira/browse/HBASE-9829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HBASE-9829: --- Assignee: Sergey Shelukhin make the compaction logging less confusing -- Key: HBASE-9829 URL: https://issues.apache.org/jira/browse/HBASE-9829 Project: HBase Issue Type: Improvement Components: Compaction Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor 1) One of the most popular questions from HBase users has got to be "I have scheduled major compactions to run once per week, why are there so many?" We need to somehow tell the user, wherever we log that there is a major compaction, whether it's a major compaction because that's what was in the request (from regular major compaction or user request), or was it just promoted because it took all files. Esp. the latter should be clear. 2) The distinction between small vs. large compaction threads and minor vs. major compactions is confusing. Maybe the threads can be named short and long compactions. We -- This message was sent by Atlassian JIRA (v6.1#6144)
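Point 1 above amounts to making the log line carry the promotion reason explicitly. A hedged sketch of what such a message builder could look like (the helper and its names are hypothetical, not the actual HBase logging code):

```java
// Hypothetical helper that spells out *why* a compaction is major:
// either it was requested as major, or a minor selection was promoted
// because it happened to include every store file. Not actual HBase code.
class CompactionLogSketch {
    static String describe(boolean requestedMajor, int selectedFiles, int totalFiles) {
        if (requestedMajor) {
            return "major compaction (explicitly requested)";
        }
        if (selectedFiles == totalFiles) {
            return "major compaction (minor promoted: all " + totalFiles + " files selected)";
        }
        return "minor compaction (" + selectedFiles + "/" + totalFiles + " files)";
    }
}
```

With a message like this, the weekly-schedule confusion resolves itself: promoted compactions are labeled as such instead of looking like extra scheduled majors.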
[jira] [Assigned] (HBASE-9809) RegionTooBusyException should provide region name which was too busy
[ https://issues.apache.org/jira/browse/HBASE-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gustavo Anatoly reassigned HBASE-9809: -- Assignee: Gustavo Anatoly RegionTooBusyException should provide region name which was too busy Key: HBASE-9809 URL: https://issues.apache.org/jira/browse/HBASE-9809 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Gustavo Anatoly Under this thread: http://search-hadoop.com/m/WSfKp1yJOFJ, John showed a log from LoadIncrementalHFiles where the following is a snippet:
{code}
04:18:07,110 INFO LoadIncrementalHFiles:451 - Trying to load hfile=hdfs://pc08.pool.ifis.uni-luebeck.de:8020/tmp/bulkLoadDirectory/PO_S_rowBufferHFile/Hexa/_tmp/PO_S,9.bottom first=http://purl.org/dc/elements/1.1/title,emulates drylot births^^http://www.w3.org/2001/XMLSchema#string last=http://purl.org/dc/e$
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
Sun Oct 20 04:15:50 CEST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@4cfdfc98, org.apache.hadoop.hbase.RegionTooBusyException: org.apache.hadoop.hbase.RegionTooBusyException: failed to get a lock in 6ms
  at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5778)
  at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5764)
  at org.apache.hadoop.hbase.regionserver.HRegion.startBulkRegionOperation(HRegion.java:5723)
  at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3534)
  at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3517)
  at org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFiles(HRegionServer.java:2793)
  at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
{code}
Looking at the above, it is not immediately clear which region was busy.
The region name should be included in the exception so that the user can correlate it with the region server where the problem occurs. -- This message was sent by Atlassian JIRA (v6.1#6144)
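The requested change is just a matter of message construction. A sketch of the message shape being asked for (illustrative only; the real org.apache.hadoop.hbase.RegionTooBusyException has its own constructors):

```java
// Illustrative stand-in for RegionTooBusyException showing a message that
// names the busy region, so logs like the one above become actionable.
class RegionTooBusySketch extends RuntimeException {
    RegionTooBusySketch(String regionName, long waitedMs) {
        super("failed to get a lock in " + waitedMs + " ms on region " + regionName);
    }
}
```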
[jira] [Commented] (HBASE-9926) Scanner doesn't check if a region is available
[ https://issues.apache.org/jira/browse/HBASE-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817633#comment-13817633 ] Lars Hofhansl commented on HBASE-9926: -- Looks good. Let's have this in 0.94 as well. Scanner doesn't check if a region is available -- Key: HBASE-9926 URL: https://issues.apache.org/jira/browse/HBASE-9926 Project: HBase Issue Type: Bug Reporter: Jimmy Xiang Assignee: Jimmy Xiang Attachments: trunk-9926.patch Currently the scanner doesn't check if a region is closing/closed. If a region is closed, then reopened, an old scanner could still refer to the closed HRegion instance. So the scanner will miss some store file changes due to compaction. -- This message was sent by Atlassian JIRA (v6.1#6144)
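The shape of the missing check is simple: before serving rows, a scanner should validate the state of the region it was created against. A minimal sketch under illustrative names (these are not the HBase internals):

```java
// Sketch of the missing check: a scanner validating its region's state
// before serving rows. Without it, a scanner created before a close/reopen
// keeps reading the stale HRegion instance and misses post-compaction
// store file changes. Names are illustrative, not HBase code.
class RegionState {
    volatile boolean closing;
    volatile boolean closed;
}

class ScannerSketch {
    private final RegionState region;

    ScannerSketch(RegionState region) {
        this.region = region;
    }

    String next() {
        if (region.closing || region.closed) {
            // In HBase this would surface as a NotServingRegionException,
            // prompting the client to re-open the scanner.
            throw new IllegalStateException("region is closing/closed; scanner must be re-opened");
        }
        return "row";
    }
}
```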
[jira] [Commented] (HBASE-9775) Client write path perf issues
[ https://issues.apache.org/jira/browse/HBASE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817632#comment-13817632 ] Jean-Marc Spaggiari commented on HBASE-9775: Here are the numbers for a standalone test between 0.96 and 0.94 fresh from the branches. || ||0.94||0.96||Diff|| |org.apache.hadoop.hbase.PerformanceEvaluation$FilteredScanTest|10.67|10.38|97.27%| |org.apache.hadoop.hbase.PerformanceEvaluation$RandomReadTest|840.20|1013.33|120.61%| |org.apache.hadoop.hbase.PerformanceEvaluation$RandomWriteTest|25041.10|35527.26|141.88%| |org.apache.hadoop.hbase.PerformanceEvaluation$RandomScanWithRange1000Test|2396.20|2629.48|109.74%| |org.apache.hadoop.hbase.PerformanceEvaluation$SequentialReadTest|2925.24|3050.02|104.27%| |org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest|27326.22|40190.30|147.08%| Client write path perf issues - Key: HBASE-9775 URL: https://issues.apache.org/jira/browse/HBASE-9775 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.96.0 Reporter: Elliott Clark Priority: Critical Attachments: 9775.rig.txt, 9775.rig.v2.patch, 9775.rig.v3.patch, Charts Search Cloudera Manager - ITBLL.png, Charts Search Cloudera Manager.png, hbase-9775.patch, job_run.log, short_ycsb.png, ycsb.png, ycsb_insert_94_vs_96.png Testing on larger clusters has not had the desired throughput increases. -- This message was sent by Atlassian JIRA (v6.1#6144)
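Assuming the Diff column is the 0.96 measurement expressed as a percentage of the 0.94 measurement, rounded to two decimals, it can be reproduced from the table's own numbers (displayed rounding may drift by 0.01 in some rows):

```java
// Reproduces the Diff column above under the assumption
// Diff = (0.96 value / 0.94 value) * 100, rounded to two decimal places.
class DiffSketch {
    static double diffPercent(double v094, double v096) {
        return Math.round(v096 / v094 * 10000.0) / 100.0;
    }
}
```

For example, RandomReadTest gives 1013.33 / 840.20 = 120.61%, matching the table.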
[jira] [Updated] (HBASE-9908) [WINDOWS] Fix filesystem / classloader related unit tests
[ https://issues.apache.org/jira/browse/HBASE-9908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-9908: - Resolution: Fixed Release Note: Committed this. Thanks Nick for looking. Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) [WINDOWS] Fix filesystem / classloader related unit tests - Key: HBASE-9908 URL: https://issues.apache.org/jira/browse/HBASE-9908 Project: HBase Issue Type: Bug Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.98.0, 0.96.1 Attachments: hbase-9908_v1.patch Some of the unit tests related to classloading and filesystem are failing on Windows.
{code}
org.apache.hadoop.hbase.coprocessor.TestClassLoading.testHBase3810
org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromLocalFS
org.apache.hadoop.hbase.coprocessor.TestClassLoading.testPrivateClassLoader
org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromRelativeLibDirInJar
org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromLibDirInJar
org.apache.hadoop.hbase.coprocessor.TestClassLoading.testClassLoadingFromHDFS
org.apache.hadoop.hbase.backup.TestHFileArchiving.testCleaningRace
org.apache.hadoop.hbase.regionserver.wal.TestDurability.testDurability
org.apache.hadoop.hbase.regionserver.wal.TestHLog.testMaintainOrderWithConcurrentWrites
org.apache.hadoop.hbase.security.access.TestAccessController.testBulkLoad
org.apache.hadoop.hbase.regionserver.TestHRegion.testRecoveredEditsReplayCompaction
org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait.testRecoveredEditsReplayCompaction
org.apache.hadoop.hbase.util.TestFSUtils.testRenameAndSetModifyTime
{code}
The root causes are:
- Using local file names for referring to HDFS paths (HBASE-6830)
- Classloader using the wrong file system
- StoreFile readers not being closed (for unfinished compaction)
-- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9920) Lower OK_FINDBUGS_WARNINGS in test-patch.properties
[ https://issues.apache.org/jira/browse/HBASE-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817635#comment-13817635 ] Hudson commented on HBASE-9920: --- SUCCESS: Integrated in HBase-TRUNK #4675 (See [https://builds.apache.org/job/HBase-TRUNK/4675/]) HBASE-9920 Lower OK_FINDBUGS_WARNINGS in test-patch.properties (tedyu: rev 1540059) * /hbase/trunk/dev-support/test-patch.properties Lower OK_FINDBUGS_WARNINGS in test-patch.properties --- Key: HBASE-9920 URL: https://issues.apache.org/jira/browse/HBASE-9920 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.98.0 Attachments: 9920.txt HBASE-9903 removed generated classes from findbugs checking. OK_FINDBUGS_WARNINGS in test-patch.properties should be lowered. According to https://builds.apache.org/job/PreCommit-HBASE-Build/7776/artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html , there were: 3 warnings for org.apache.hadoop.hbase.generated classes 19 warnings for org.apache.hadoop.hbase.tmpl classes -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9918) MasterAddressTracker ZKNamespaceManager ZK listeners are missed after master recovery
[ https://issues.apache.org/jira/browse/HBASE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817638#comment-13817638 ] Hadoop QA commented on HBASE-9918: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12612751/HBase-9918.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 20 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7799//console This message is automatically generated. 
MasterAddressTracker ZKNamespaceManager ZK listeners are missed after master recovery --- Key: HBASE-9918 URL: https://issues.apache.org/jira/browse/HBASE-9918 Project: HBase Issue Type: Bug Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Attachments: HBase-9918.patch TestZooKeeper#testRegionAssignmentAfterMasterRecoveryDueToZKExpiry always failed at the following verification for me in my dev env (you have to run the single test, not the whole TestZooKeeper suite, to reproduce):
{code}
assertEquals("Number of rows should be equal to number of puts.", numberOfPuts, numberOfRows);
{code}
We missed two ZK listeners after master recovery: MasterAddressTracker and ZKNamespaceManager. My current patch is to fix the JIRA issue, while I'm wondering if we should totally remove the master failover implementation when the ZK session expires, because this causes HMaster to be partially reinitialized, which is error prone and not a clean state to start from. -- This message was sent by Atlassian JIRA (v6.1#6144)
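The shape of the bug is that session-expiry recovery rebuilds the ZooKeeper watcher but registers only a subset of the listeners registered during normal startup. A hedged sketch of the recovery path (listener names taken from the issue text where given; the rest, and the class itself, are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the fix's shape: when the ZK session expires and the watcher
// is rebuilt, *every* listener must be registered again. The bug was that
// MasterAddressTracker and ZKNamespaceManager were skipped, leaving them
// deaf to ZK events. The class and the other listener names are illustrative.
class RecoverySketch {
    static List<String> reinitializeListeners() {
        List<String> listeners = new ArrayList<>();
        listeners.add("ActiveMasterManager");   // illustrative
        listeners.add("RegionServerTracker");   // illustrative
        listeners.add("MasterAddressTracker");  // previously missed after recovery
        listeners.add("ZKNamespaceManager");    // previously missed after recovery
        return listeners;
    }
}
```

This is also why the issue questions the whole approach: keeping the recovery list in sync with the startup list by hand is exactly the kind of partial reinitialization that is error prone.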
[jira] [Updated] (HBASE-7639) Enable online schema update by default
[ https://issues.apache.org/jira/browse/HBASE-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Shulman updated HBASE-7639: - Labels: online_schema_change (was: ) Enable online schema update by default --- Key: HBASE-7639 URL: https://issues.apache.org/jira/browse/HBASE-7639 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Enis Soztutar Assignee: Elliott Clark Labels: online_schema_change Fix For: 0.98.0, 0.95.2 Attachments: HBASE-7639-0.patch After we get HBASE-7305 and HBASE-7546, things will become stable enough for online schema update to be enabled by default.
{code}
<property>
  <name>hbase.online.schema.update.enable</name>
  <value>false</value>
  <description>Set true to enable online schema changes. This is an experimental feature.
  There are known issues modifying table schemas at the same time a region split is happening so your table needs to be quiescent or else you have to be running with splits disabled.</description>
</property>
{code}
-- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-8726) Create an Integration Test for online schema change
[ https://issues.apache.org/jira/browse/HBASE-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Shulman updated HBASE-8726: - Labels: online_schema_change (was: ) Create an Integration Test for online schema change --- Key: HBASE-8726 URL: https://issues.apache.org/jira/browse/HBASE-8726 Project: HBase Issue Type: Bug Components: Admin Affects Versions: 0.98.0, 0.95.1 Reporter: Elliott Clark Assignee: Elliott Clark Labels: online_schema_change Fix For: 0.95.2 Attachments: HBASE-8726-0.patch, HBASE-8726-1.patch, HBASE-8726-2.patch, HBASE-8726-3.patch, HBASE-8726-4.patch With table locks in place it should be time to start really testing online table schema changes. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-5678) Dynamic configuration capability for Hbase.
[ https://issues.apache.org/jira/browse/HBASE-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Shulman updated HBASE-5678: - Description: I think some properties can be dynamically configured without a restart of the nodes. This is an umbrella JIRA for this feature. In Hadoop we already have such a feature, but it is not yet implemented by the nodes. I think we can have a similar base framework here that the nodes can implement, so that whatever properties are allowed to be reconfigurable can be reconfigured with new values without restarting the node. I will come up with a design doc for the node implementation and will raise subtasks for each. was: I think, some preperties can be danamically configured without restart of the nodes. This is an umberilla JIRA for this Feature. In Hadoop we already had such feature but not yet implemented by nodes. I think we can have the similar base framework here and can implemented by nodes. So, that whatever properies are allowed to reconfigurable, should be able to reconfigure with new values with out restarting the node. I will come up with some design doc with noeds implementation and will raise subtasks for each. Dynamic configuration capability for Hbase. --- Key: HBASE-5678 URL: https://issues.apache.org/jira/browse/HBASE-5678 Project: HBase Issue Type: New Feature Components: master, regionserver, util Affects Versions: 0.95.2 Reporter: Uma Maheswara Rao G Assignee: Uma Maheswara Rao G I think some properties can be dynamically configured without a restart of the nodes. This is an umbrella JIRA for this feature. In Hadoop we already have such a feature, but it is not yet implemented by the nodes. I think we can have a similar base framework here that the nodes can implement, so that whatever properties are allowed to be reconfigurable can be reconfigured with new values without restarting the node. I will come up with a design doc for the node implementation and will raise subtasks for each.
-- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9395) Disable Schema Change on 0.96
[ https://issues.apache.org/jira/browse/HBASE-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Shulman updated HBASE-9395: - Labels: online_schema_change (was: ) Disable Schema Change on 0.96 - Key: HBASE-9395 URL: https://issues.apache.org/jira/browse/HBASE-9395 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.98.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Blocker Labels: online_schema_change Fix For: 0.96.0 Attachments: HBASE-9395-95-0.patch Running LoadTestAndVerify fails when the chaos monkey is slowDeterministic. When commenting out all of the schema change actions everything passes. We should disable the schema change until we can be 100% sure of data integrity. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-8775) Throttle online schema changes.
[ https://issues.apache.org/jira/browse/HBASE-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Shulman updated HBASE-8775: - Labels: online_schema_change (was: ) Throttle online schema changes. --- Key: HBASE-8775 URL: https://issues.apache.org/jira/browse/HBASE-8775 Project: HBase Issue Type: Improvement Components: master Affects Versions: 0.89-fb Reporter: Shane Hogan Priority: Minor Labels: online_schema_change Fix For: 0.89-fb Throttle the open and close of the regions after an online schema change -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9407) Online Schema Change causes Test Load and Verify to fail.
[ https://issues.apache.org/jira/browse/HBASE-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Shulman updated HBASE-9407: - Labels: online_schema_change (was: ) Online Schema Change causes Test Load and Verify to fail. - Key: HBASE-9407 URL: https://issues.apache.org/jira/browse/HBASE-9407 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Elliott Clark Labels: online_schema_change -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-4741) Online schema change doesn't return errors
[ https://issues.apache.org/jira/browse/HBASE-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Shulman updated HBASE-4741: - Labels: online_schema_change (was: ) Online schema change doesn't return errors -- Key: HBASE-4741 URL: https://issues.apache.org/jira/browse/HBASE-4741 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Jean-Daniel Cryans Assignee: stack Priority: Critical Labels: online_schema_change Fix For: 0.92.0 Attachments: 4741-v2.txt, 4741-v3.txt, 4741-v4.txt, 4741-v5.txt, 4741-v6.txt, 4741-v7.txt, 4741.txt Still after the fun I had over in HBASE-4729, I tried to finish altering my table (remove a family) since only half of it was changed, so I did this:
{quote}
hbase(main):002:0> alter 'TestTable', NAME => 'allo', METHOD => 'delete'
Updating all regions with the new schema...
244/244 regions updated.
Done.
0 row(s) in 1.2480 seconds
{quote}
Nice, it all looks good, but over in the master log:
{quote}
org.apache.hadoop.hbase.InvalidFamilyOperationException: Family 'allo' does not exist so cannot be deleted
  at org.apache.hadoop.hbase.master.handler.TableDeleteFamilyHandler.handleTableOperation(TableDeleteFamilyHandler.java:56)
  at org.apache.hadoop.hbase.master.handler.TableEventHandler.process(TableEventHandler.java:86)
  at org.apache.hadoop.hbase.master.HMaster.deleteColumn(HMaster.java:1011)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:348)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1242)
{quote}
Maybe we should do checks before launching the async task. Marking critical as this is a regression. -- This message was sent by Atlassian JIRA (v6.1#6144)
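The "do checks before launching the async task" suggestion can be sketched as a synchronous validation step, so the client sees the error instead of the failure landing only in the master log. The class and method here are illustrative, not the actual HMaster code:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: validate the family synchronously *before* queuing the async
// handler, so the caller gets the InvalidFamilyOperationException-style
// error instead of a silent "Done." in the shell. Illustrative only.
class DeleteFamilySketch {
    static void deleteColumn(Set<String> families, String family) {
        if (!families.contains(family)) {
            throw new IllegalArgumentException(
                "Family '" + family + "' does not exist so cannot be deleted");
        }
        families.remove(family); // stands in for launching the async handler
    }
}
```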
[jira] [Updated] (HBASE-5335) Dynamic Schema Configurations
[ https://issues.apache.org/jira/browse/HBASE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Shulman updated HBASE-5335: - Labels: configuration online_schema_change schema (was: configuration schema) Dynamic Schema Configurations - Key: HBASE-5335 URL: https://issues.apache.org/jira/browse/HBASE-5335 Project: HBase Issue Type: New Feature Reporter: Nicolas Spiegelberg Assignee: Nicolas Spiegelberg Labels: configuration, online_schema_change, schema Fix For: 0.94.7, 0.95.0 Attachments: ASF.LICENSE.NOT.GRANTED--D2247.1.patch, ASF.LICENSE.NOT.GRANTED--D2247.2.patch, ASF.LICENSE.NOT.GRANTED--D2247.3.patch, ASF.LICENSE.NOT.GRANTED--D2247.4.patch, ASF.LICENSE.NOT.GRANTED--D2247.5.patch, ASF.LICENSE.NOT.GRANTED--D2247.6.patch, ASF.LICENSE.NOT.GRANTED--D2247.7.patch, ASF.LICENSE.NOT.GRANTED--D2247.8.patch, HBASE-5335-trunk-2.patch, HBASE-5335-trunk-3.patch, HBASE-5335-trunk-3.patch, HBASE-5335-trunk-4.patch, HBASE-5335-trunk.patch Currently, the ability for a core developer to add per-table per-CF configuration settings is very heavyweight. You need to add a reserved keyword all the way up the stack, and you have to support this variable long-term if you're going to expose it explicitly to the user. This has ended up with using Configuration.get() a lot because it is lightweight and you can tweak settings while you're trying to understand system behavior [since there are many config params that may never need to be tuned]. We need to add the ability to put/read arbitrary KV settings in the HBase schema. Combined with online schema change, this will allow us to safely iterate on configuration settings. -- This message was sent by Atlassian JIRA (v6.1#6144)
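The lookup order such arbitrary schema KV settings enable is the interesting part: a per-CF value overrides a per-table value, which overrides the cluster-wide Configuration.get() default. A hedged sketch of that resolution chain (Map-backed stand-ins, not the HBase implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the resolution order arbitrary schema KV settings make possible:
// per-CF value wins, then per-table value, then the cluster-wide default.
// Plain Maps stand in for HColumnDescriptor/HTableDescriptor/Configuration.
class SchemaConfigSketch {
    final Map<String, String> clusterConf = new HashMap<>();
    final Map<String, String> tableConf = new HashMap<>();
    final Map<String, String> familyConf = new HashMap<>();

    String get(String key, String defaultValue) {
        if (familyConf.containsKey(key)) return familyConf.get(key);
        if (tableConf.containsKey(key)) return tableConf.get(key);
        return clusterConf.getOrDefault(key, defaultValue);
    }
}
```

Combined with online schema change, a setting tweaked at the CF level takes effect without touching the cluster-wide configuration or restarting anything, which is exactly the "safe iteration" the issue describes.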
[jira] [Updated] (HBASE-7236) add per-table/per-cf configuration via metadata
[ https://issues.apache.org/jira/browse/HBASE-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Shulman updated HBASE-7236: - Labels: online_schema_change (was: ) add per-table/per-cf configuration via metadata --- Key: HBASE-7236 URL: https://issues.apache.org/jira/browse/HBASE-7236 Project: HBase Issue Type: Umbrella Affects Versions: 0.95.2 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Labels: online_schema_change Fix For: 0.95.0 Attachments: HBASE-7236-PROTOTYPE-v1.patch, HBASE-7236-PROTOTYPE.patch, HBASE-7236-PROTOTYPE.patch, HBASE-7236-v0.patch, HBASE-7236-v1.patch, HBASE-7236-v2.patch, HBASE-7236-v3.patch, HBASE-7236-v4.patch, HBASE-7236-v5.patch, HBASE-7236-v6.patch, HBASE-7236-v6.patch Regardless of the compaction policy, it makes sense to have separate configuration for compactions for different tables and column families, as their access patterns and workloads can be different. In particular, for tiered compactions that are being ported from 0.89-fb branch it is necessary to have, to use it properly. We might want to add support for compaction configuration via metadata on table/cf. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9407) Online Schema Change causes Test Load and Verify to fail.
[ https://issues.apache.org/jira/browse/HBASE-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817674#comment-13817674 ] Aleksandr Shulman commented on HBASE-9407: -- Hi Elliott, is this issue still occurring? If so, can you add more specifics about the failure mode, how often it occurs, potential root causes, etc. Online Schema Change causes Test Load and Verify to fail. - Key: HBASE-9407 URL: https://issues.apache.org/jira/browse/HBASE-9407 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Elliott Clark Labels: online_schema_change -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (HBASE-9835) Define C interface of HBase Client synchronous APIs
[ https://issues.apache.org/jira/browse/HBASE-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-9835: - Attachment: HBASE-9835-0.patch Here's what I was thinking for the C API:
* Fully async.
* Callbacks all take a void pointer for the user to supply their own data that will be needed.
* No explicit batching. Since everything is async there's no need for it.
* All mutations start with the struct hb_mutation_type.
* I went with the hidden struct. Even though it necessitates a heap allocation, I have been convinced that the encapsulation is worth it.
* It has set methods. There are no get methods yet. I'm not sure if they will be needed.
* I think the freeing of backing buffers should be the user's responsibility. There are destroy methods, but they are just for implementation-created resources.
* Tests are included using GTest.
* LibEV was chosen as it provides a good C++ header, so creating the underlying RPC implementation will be OO.
* The CMake modules might need to be re-written.
I'll put this up on rb. Define C interface of HBase Client synchronous APIs --- Key: HBASE-9835 URL: https://issues.apache.org/jira/browse/HBASE-9835 Project: HBase Issue Type: Sub-task Components: Client Reporter: Aditya Kishore Assignee: Aditya Kishore Labels: C Attachments: HBASE-9835-0.patch Creating this as a sub-task of HBASE-1015 to define the C language interface of HBase Client synchronous APIs. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9407) Online Schema Change causes Test Load and Verify to fail.
[ https://issues.apache.org/jira/browse/HBASE-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817690#comment-13817690 ] Elliott Clark commented on HBASE-9407: -- It does. [~jxiang] is working on it. It seems to occur with encoding changes using online schema change. Online Schema Change causes Test Load and Verify to fail. - Key: HBASE-9407 URL: https://issues.apache.org/jira/browse/HBASE-9407 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Elliott Clark Labels: online_schema_change -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-2016) [DAC] Authentication
[ https://issues.apache.org/jira/browse/HBASE-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817695#comment-13817695 ] Mikhail Antonov commented on HBASE-2016: Is there any update on that, particularly around LDAP authentication for end users? [DAC] Authentication Key: HBASE-2016 URL: https://issues.apache.org/jira/browse/HBASE-2016 Project: HBase Issue Type: Sub-task Components: security Reporter: Andrew Purtell Assignee: Gary Helmling Follow what Hadoop is doing. Authentication via JAAS: http://issues.apache.org/jira/browse/HADOOP-6299 http://java.sun.com/javase/6/docs/technotes/guides/security/jaas/JAASRefGuide.html Should support Kerberos, Unix, and LDAP authentication options. Integrate with authentication mechanisms for IPC and HDFS. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-9407) Online Schema Change causes Test Load and Verify to fail.
[ https://issues.apache.org/jira/browse/HBASE-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817698#comment-13817698 ] Jimmy Xiang commented on HBASE-9407: Yes, online encoding change could cause some test to fail. We are looking into it. Other online schema changes should be fine. Online Schema Change causes Test Load and Verify to fail. - Key: HBASE-9407 URL: https://issues.apache.org/jira/browse/HBASE-9407 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Elliott Clark Labels: online_schema_change -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-2016) [DAC] Authentication
[ https://issues.apache.org/jira/browse/HBASE-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817699#comment-13817699 ] Andrew Purtell commented on HBASE-2016: --- bq. Is there any update on that, particularly around LDAP authentication for end users? I suppose we can close this because the scope of the OP has been addressed by a bunch of other JIRAs. Regarding LDAP in particular, we can use Hadoop's support for LDAP in group mapping (that was HADOOP-8121). Beyond that could be the topic of a new enhancement JIRA. [DAC] Authentication Key: HBASE-2016 URL: https://issues.apache.org/jira/browse/HBASE-2016 Project: HBase Issue Type: Sub-task Components: security Reporter: Andrew Purtell Assignee: Gary Helmling Follow what Hadoop is doing. Authentication via JAAS: http://issues.apache.org/jira/browse/HADOOP-6299 http://java.sun.com/javase/6/docs/technotes/guides/security/jaas/JAASRefGuide.html Should support Kerberos, Unix, and LDAP authentication options. Integrate with authentication mechanisms for IPC and HDFS. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-2016) [DAC] Authentication
[ https://issues.apache.org/jira/browse/HBASE-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13817703#comment-13817703 ] Mikhail Antonov commented on HBASE-2016: I see, thanks for the comment Andrew. I'm actually looking for the deployment picture, when I can avoid having kerberos principals for end customer of HBase Shell, but it looks like it's not supported now? What I'm trying to do is following: - Namenode/JT are secured already and have kerberos principals - HiveServer2 is already secured in our installation, and configured in such a way that HS itself has kerberos principals, but end users log in via LDAP and their credentials are passed to NN/JT as proxied kerberos tickets. So impersonation works just fine, like in Oozie and other service-style entities - HBase REST seems to support impersonation But, I don't see an option to allow end users of HBase Shell (John Smith) to authenticate via LDAP (without creating trusted bridge between Kerberos and AD, since it may be arbitrary LDAP server), and then get his credentials to be proxied via some service Kerberos principal and to be passed to HBase (something like jsmith via hbase-shell-user/domain@REALM). Is there any support for that, or am I missing something? [DAC] Authentication Key: HBASE-2016 URL: https://issues.apache.org/jira/browse/HBASE-2016 Project: HBase Issue Type: Sub-task Components: security Reporter: Andrew Purtell Assignee: Gary Helmling Follow what Hadoop is doing. Authentication via JAAS: http://issues.apache.org/jira/browse/HADOOP-6299 http://java.sun.com/javase/6/docs/technotes/guides/security/jaas/JAASRefGuide.html Should support Kerberos, Unix, and LDAP authentication options. Integrate with authentication mechanisms for IPC and HDFS. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (HBASE-2016) [DAC] Authentication
[ https://issues.apache.org/jira/browse/HBASE-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817707#comment-13817707 ]

Mikhail Antonov commented on HBASE-2016:
---

I know HBase relies on the Hadoop group mapping service, which can be plugged into the NN, but authentication of the user itself (username/password/domain) can't, it seems, go directly through an LDAP instance without requiring the user principal to be in Kerberos, right?
[jira] [Commented] (HBASE-9918) MasterAddressTracker and ZKNamespaceManager ZK listeners are missed after master recovery
[ https://issues.apache.org/jira/browse/HBASE-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817715#comment-13817715 ]

Ted Yu commented on HBASE-9918:
---

{code}
+    TEST_UTIL.startMiniDFSCluster(1);
     TEST_UTIL.startMiniZKCluster();
     conf.setBoolean("dfs.support.append", true);
     conf.setInt(HConstants.ZK_SESSION_TIMEOUT, 1000);
     conf.setClass(HConstants.HBASE_MASTER_LOADBALANCER_CLASS, MockLoadBalancer.class, LoadBalancer.class);
-    TEST_UTIL.startMiniCluster(2);
{code}

Is the above change in the number of slaves intentional?

MasterAddressTracker and ZKNamespaceManager ZK listeners are missed after master recovery
---
Key: HBASE-9918
URL: https://issues.apache.org/jira/browse/HBASE-9918
Project: HBase
Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Attachments: HBase-9918.patch

TestZooKeeper#testRegionAssignmentAfterMasterRecoveryDueToZKExpiry always fails at the following verification for me in my dev env (you have to run the single test, not the whole TestZooKeeper suite, to reproduce):
{code}
assertEquals("Number of rows should be equal to number of puts.", numberOfPuts, numberOfRows);
{code}
We miss two ZK listeners after master recovery: MasterAddressTracker and ZKNamespaceManager. My current patch fixes the JIRA issue, but I'm wondering if we should remove the master failover implementation for expired ZK sessions entirely, because it reinitializes HMaster only partially, which is error prone and not a clean state to start from.
[jira] [Commented] (HBASE-2016) [DAC] Authentication
[ https://issues.apache.org/jira/browse/HBASE-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817716#comment-13817716 ]

Andrew Purtell commented on HBASE-2016:
---

bq. can't seem to go directly thru LDAP instance without requiring user principal to be in Kerberos, right?

Correct. It's fair to say direct use of Kerberos is the only option today, as is the case with Hadoop in general. I still think this JIRA could be resolved. Additional authentication options for Hadoop are under discussion in various Hadoop JIRAs, which we can pick up when they become available.
[jira] [Commented] (HBASE-2016) [DAC] Authentication
[ https://issues.apache.org/jira/browse/HBASE-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817723#comment-13817723 ]

Mikhail Antonov commented on HBASE-2016:
---

Yeah, I don't question whether this JIRA is finished or not; I just thought it might be an appropriate place to ask a few questions which I couldn't answer after reading the docs, JIRA, and sources.

bq. it's fair to say direct use of kerberos is the only option today, as with the case of Hadoop in general.

Various ecosystem services like Hive or Oozie do support impersonation of end users, thus bypassing that, and allow end users to be authenticated via pluggable authentication (which may authenticate users against LDAP, a MySQL database, and such). But for the HBase shell no impersonation is possible as of now, right, and there are no developments in this direction?
[jira] [Commented] (HBASE-2016) [DAC] Authentication
[ https://issues.apache.org/jira/browse/HBASE-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817740#comment-13817740 ]

Andrew Purtell commented on HBASE-2016:
---

bq. Various ecosystem services like Hive or Oozie do support impersonation of end users, thus bypassing that, and allow end users to be authenticated via pluggable authentication (which may authenticate users against ldap, mysql database and such). But for HBase Shell there's no impersonation possible as of now

Hive and Oozie impersonate by means of a service process that is registered with the NN, in the NN config, and thereby afforded the elevated privilege of impersonation; from there they do their own thing. The HBase shell is a regular HBase client wrapped in an HBase DSL within the JRuby IRB; it could run anywhere and cannot be trusted in that way. If I understand correctly, what you could use is some kind of administration server which would reside at a fixed location and could be trusted to impersonate; the shell could then be modified to proxy administrative commands through it. Yes?
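For reference, the NN-side registration Andrew describes is the standard Hadoop proxyuser configuration. A minimal sketch, assuming a hypothetical trusted service account named admserver; the host and group values are placeholders:

{code}
<!-- core-site.xml on the cluster: allow the "admserver" service user to impersonate -->
<property>
  <name>hadoop.proxyuser.admserver.hosts</name>
  <value>admin1.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.admserver.groups</name>
  <value>hbase-users</value>
</property>
{code}

Only requests arriving from the listed hosts, on behalf of users in the listed groups, are accepted as impersonated; that is why an arbitrary shell process cannot be granted this privilege.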
[jira] [Commented] (HBASE-2016) [DAC] Authentication
[ https://issues.apache.org/jira/browse/HBASE-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817743#comment-13817743 ]

Mikhail Antonov commented on HBASE-2016:
---

That is exactly what I need for my requirements, yes. I thought of trying to relay the HBase shell somehow through the HBase REST server (which can impersonate) or so, but I see no obvious way to do that now. Do you think it's possible, or has no one ever asked for such a thing?
[jira] [Commented] (HBASE-2016) [DAC] Authentication
[ https://issues.apache.org/jira/browse/HBASE-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817750#comment-13817750 ]

Gary Helmling commented on HBASE-2016:
---

It would certainly be possible to implement your own proxy, as Andy describes, which would need its own Kerberos credentials and would perform its own authentication of clients. But that doesn't seem like core HBase functionality; instead it's putting a proxy in place in order to circumvent security. I think the direction for HBase will be to support pluggable authentication of clients at the RPC layer, using the same mechanisms under development for Hadoop, but unfortunately that may be some time away.
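Pluggable authentication at the Hadoop RPC layer is built on SASL, whose point is exactly that the mechanism is swappable behind a fixed API. As a rough illustration of the mechanism only (not HBase's actual RPC code), the JDK's javax.security.sasl API can run a DIGEST-MD5 handshake entirely in-process; the user, password, protocol, and server names below are made up:

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.AuthorizeCallback;
import javax.security.sasl.RealmCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslServer;

public class SaslHandshakeDemo {

    // Runs a full in-process DIGEST-MD5 handshake; returns whether both sides completed.
    static boolean handshake(String user, String password) throws Exception {
        Map<String, Object> props = new HashMap<>();

        // Server side: supplies the stored password for the claimed user and authorizes.
        CallbackHandler serverCb = callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof RealmCallback) {
                    ((RealmCallback) cb).setText(((RealmCallback) cb).getDefaultText());
                } else if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName(((NameCallback) cb).getDefaultName());
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword(password.toCharArray());
                } else if (cb instanceof AuthorizeCallback) {
                    AuthorizeCallback ac = (AuthorizeCallback) cb;
                    ac.setAuthorized(ac.getAuthenticationID().equals(ac.getAuthorizationID()));
                }
            }
        };

        // Client side: supplies the user's credentials.
        CallbackHandler clientCb = callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof RealmCallback) {
                    ((RealmCallback) cb).setText(((RealmCallback) cb).getDefaultText());
                } else if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName(user);
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword(password.toCharArray());
                }
            }
        };

        SaslServer server = Sasl.createSaslServer("DIGEST-MD5", "hbase", "localhost", props, serverCb);
        SaslClient client = Sasl.createSaslClient(
            new String[] {"DIGEST-MD5"}, null, "hbase", "localhost", props, clientCb);

        // Exchange tokens until the client is done; the server finishes one step earlier.
        byte[] token = server.evaluateResponse(new byte[0]); // initial server challenge
        while (!client.isComplete()) {
            token = client.evaluateChallenge(token);
            if (token != null && !server.isComplete()) {
                token = server.evaluateResponse(token);
            }
        }
        return server.isComplete() && client.isComplete();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handshake("jsmith", "secret"));
    }
}
```

Swapping "DIGEST-MD5" for another mechanism (say, one that verifies passwords against LDAP) would leave the surrounding exchange loop untouched, which is what "pluggable authentication at the RPC layer" buys.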
[jira] [Created] (HBASE-9928) TestHRegion should clean up test-data directory upon completion
Ted Yu created HBASE-9928:
---
Summary: TestHRegion should clean up test-data directory upon completion
Key: HBASE-9928
URL: https://issues.apache.org/jira/browse/HBASE-9928
Project: HBase
Issue Type: Test
Reporter: Ted Yu

TestHRegion leaves some files behind in the hbase-server/target/test-data directory after tests complete, e.g. at the end of testRegionInfoFileCreation:
{code}
// Verify that the .regioninfo file is still there
assertTrue(HRegionFileSystem.REGION_INFO_FILE + " should be present in the region dir",
  fs.exists(new Path(regionDir, HRegionFileSystem.REGION_INFO_FILE)));
{code}
The test-data directory should be cleaned up upon completion. I noticed this when looping TestHRegion in order to reproduce a test failure.
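One way to do the cleanup is a recursive delete of the test-data directory in an @AfterClass hook. A minimal stand-alone sketch of the delete routine in plain Java; HBase's own tests would go through HBaseTestingUtility, so the class and path names here are illustrative only:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class TestDataCleanup {

    // Recursively delete a directory tree, deepest entries first.
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return; // nothing left behind, nothing to do
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()) // children before their parents
                .forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a leftover test-data directory with a stray .regioninfo file.
        Path testData = Files.createTempDirectory("test-data");
        Files.createFile(testData.resolve(".regioninfo"));
        deleteRecursively(testData);
        System.out.println(Files.exists(testData)); // prints "false"
    }
}
```

Hooking this (or the equivalent FileSystem.delete on the HDFS API) into test teardown would keep target/test-data empty across looped runs.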