[jira] [Commented] (HBASE-5970) Improve the AssignmentManager#updateTimer and speed up handling opened event
[ https://issues.apache.org/jira/browse/HBASE-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489950#comment-13489950 ] yang ming commented on HBASE-5970: -- [~zjushch] Or does this patch depend on other modifications? Thanks. > Improve the AssignmentManager#updateTimer and speed up handling opened event > > > Key: HBASE-5970 > URL: https://issues.apache.org/jira/browse/HBASE-5970 > Project: HBase > Issue Type: Improvement > Components: master >Reporter: chunhui shen >Assignee: chunhui shen >Priority: Critical > Fix For: 0.96.0 > > Attachments: 5970v3.patch, HBASE-5970.patch, HBASE-5970v2.patch, > HBASE-5970v3.patch, HBASE-5970v4.patch, HBASE-5970v4.patch > > > We found handling of the opened event to be very slow in an environment with lots of > regions. > The problem is the slow AssignmentManager#updateTimer. > We tested bulk assigning 10w (i.e. 100k) regions; the whole process > of bulk assigning took 1 hour. > 2012-05-06 20:31:49,201 INFO > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning 10 > region(s) round-robin across 5 server(s) > 2012-05-06 21:26:32,103 INFO > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning done > I think we could improve AssignmentManager#updateTimer by making a > dedicated thread do this work. > After the improvement, it took only 4.5 minutes: > 2012-05-07 11:03:36,581 INFO > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning 10 > region(s) across 5 server(s), retainAssignment=true > 2012-05-07 11:07:57,073 INFO > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning done -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
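The idea the issue describes is to take the per-region timer bookkeeping off the opened-event handling path and let a single background thread drain it. A minimal sketch of that pattern follows; the class and method names are illustrative, not the actual HBase code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: event handlers enqueue region names cheaply; one worker thread
// performs the (relatively slow) timestamp update, so opened-event
// handlers no longer serialize on it.
public class TimerUpdater {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final ConcurrentHashMap<String, Long> timers = new ConcurrentHashMap<>();

    public TimerUpdater() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String region = pending.take();   // blocks until work arrives
                    timers.put(region, System.currentTimeMillis());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // exit cleanly on shutdown
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the opened-event handler: O(1), never waits on the update.
    public void updateTimer(String region) {
        pending.add(region);
    }

    public boolean hasTimer(String region) {
        return timers.containsKey(region);
    }
}
```

The handler thread only pays the cost of a queue insert; the actual map update happens asynchronously, which matches the reported 1 hour to 4.5 minutes improvement in spirit.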
[jira] [Commented] (HBASE-6389) Modify the conditions to ensure that Master waits for sufficient number of Region Servers before starting region assignments
[ https://issues.apache.org/jira/browse/HBASE-6389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489949#comment-13489949 ] Lars Hofhansl commented on HBASE-6389: -- I'm going to commit the 0.94 patch. > Modify the conditions to ensure that Master waits for sufficient number of > Region Servers before starting region assignments > > > Key: HBASE-6389 > URL: https://issues.apache.org/jira/browse/HBASE-6389 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.94.0, 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Critical > Fix For: 0.94.3, 0.96.0 > > Attachments: HBASE-6389_0.94.patch, HBASE-6389_trunk.patch, > HBASE-6389_trunk.patch, HBASE-6389_trunk.patch, HBASE-6389_trunk_v2.patch, > HBASE-6389_trunk_v2.patch, org.apache.hadoop.hbase.TestZooKeeper-output.txt, > testReplication.jstack > > > Continuing from HBASE-6375. > It seems I was mistaken in my assumption that changing the value of > "hbase.master.wait.on.regionservers.mintostart" to a sufficient number (from the > default of 1) can help prevent assignment of all regions to one (or a small > number of) region server(s). > While this was the case in 0.90.x and 0.92.x, the behavior changed from > 0.94.0 onwards to address HBASE-4993. > From 0.94.0 onwards, the Master will proceed immediately after the timeout has > lapsed, even if "hbase.master.wait.on.regionservers.mintostart" has not been > reached. > Reading the current conditions of waitForRegionServers() makes this clear: > {code:title=ServerManager.java (trunk rev:1360470)} > > 581 /** > 582 * Wait for the region servers to report in. 
> 583 * We will wait until one of this condition is met: > 584 * - the master is stopped > 585 * - the 'hbase.master.wait.on.regionservers.timeout' is reached > 586 * - the 'hbase.master.wait.on.regionservers.maxtostart' number of > 587 *region servers is reached > 588 * - the 'hbase.master.wait.on.regionservers.mintostart' is reached > AND > 589 * there have been no new region server in for > 590 * 'hbase.master.wait.on.regionservers.interval' time > 591 * > 592 * @throws InterruptedException > 593 */ > 594 public void waitForRegionServers(MonitoredTask status) > 595 throws InterruptedException { > > > 612 while ( > 613 !this.master.isStopped() && > 614 slept < timeout && > 615 count < maxToStart && > 616 (lastCountChange+interval > now || count < minToStart) > 617 ){ > > {code} > So with the current conditions, the wait will end as soon as the timeout is > reached, even if fewer RSes than required have checked in with the Master, and the > master will proceed with the region assignment among these RSes alone. > As mentioned in > -[HBASE-4993|https://issues.apache.org/jira/browse/HBASE-4993?focusedCommentId=13237196#comment-13237196]-, > and I concur, this could have a disastrous effect in a large cluster, especially > now that MSLAB is turned on. > To enforce the required quorum as specified by > "hbase.master.wait.on.regionservers.mintostart" irrespective of the timeout, > these conditions need to be modified as follows: > {code:title=ServerManager.java} > .. > /** >* Wait for the region servers to report in. 
>* We will wait until one of this condition is met: >* - the master is stopped >* - the 'hbase.master.wait.on.regionservers.maxtostart' number of >*region servers is reached >* - the 'hbase.master.wait.on.regionservers.mintostart' is reached AND >* there have been no new region server in for >* 'hbase.master.wait.on.regionservers.interval' time AND >* the 'hbase.master.wait.on.regionservers.timeout' is reached >* >* @throws InterruptedException >*/ > public void waitForRegionServers(MonitoredTask status) > .. > .. > int minToStart = this.master.getConfiguration(). > getInt("hbase.master.wait.on.regionservers.mintostart", 1); > int maxToStart = this.master.getConfiguration(). > getInt("hbase.master.wait.on.regionservers.maxtostart", > Integer.MAX_VALUE); > if (maxToStart < minToStart) { > maxToStart = minToStart; > } > .. > .. > while ( > !this.master.isStopped() && > count < maxToStart && > (lastCountChange+interval > now || timeout > slept || count < > minToStart) > ){ > .. > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
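The behavioral difference between the two versions is entirely in the loop predicate. Reducing both versions of the quoted `while` condition to pure boolean functions (variable names taken from the quoted code) makes it easy to verify that the proposed condition keeps waiting past the timeout until minToStart is satisfied:

```java
// Sketch of the two wait predicates from the quoted ServerManager code,
// extracted as pure functions. A return of true means "keep waiting
// for more region servers to report in".
public class WaitPredicates {
    // Current (0.94+) condition: the timeout alone can end the wait.
    static boolean currentKeepWaiting(boolean stopped, long slept, long timeout,
                                      int count, int maxToStart, int minToStart,
                                      long lastCountChange, long interval, long now) {
        return !stopped
                && slept < timeout
                && count < maxToStart
                && (lastCountChange + interval > now || count < minToStart);
    }

    // Proposed condition: the timeout only ends the wait once minToStart is met.
    static boolean proposedKeepWaiting(boolean stopped, long slept, long timeout,
                                       int count, int maxToStart, int minToStart,
                                       long lastCountChange, long interval, long now) {
        return !stopped
                && count < maxToStart
                && (lastCountChange + interval > now
                    || timeout > slept
                    || count < minToStart);
    }
}
```

With the timeout lapsed, the check-in interval quiet, and only 1 of 3 required servers in, the current predicate gives up while the proposed one keeps waiting; once minToStart is reached, both stop.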
[jira] [Commented] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489948#comment-13489948 ] Lars Hofhansl commented on HBASE-7086: -- The 0.94 part of this is good, right? Let's move the 0.96 part to a new jira, so I can close this for the next RC. (Unless the trunk part gets resolved soon.) > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > 7086-trunk-v2.txt, 7086-trunk-v3.txt, testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects a > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include the stack trace(s) of the potentially hanging > thread(s). > This work was motivated by investigating a test failure in HBASE-6796
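For reference, the enhancement this issue asks for can be built on `Thread.getAllStackTraces()`. A hedged sketch of dumping live-thread stacks for a leak report (the class name and formatting here are illustrative, not the actual ResourceChecker code, which would also restrict the dump to threads that appeared during the test):

```java
import java.util.Map;

public class ThreadDumper {
    // Build a log-friendly dump of every live thread's name and stack.
    static String dumpStacks() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            sb.append("Thread: ").append(e.getKey().getName()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }
}
```

Appending such a dump to the existing "N threads (was M)" log line would show exactly where the extra thread is blocked.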
[jira] [Commented] (HBASE-6416) hbck dies on NPE when a region folder exists but the table does not
[ https://issues.apache.org/jira/browse/HBASE-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489947#comment-13489947 ] Jie Huang commented on HBASE-6416: -- I am taking sick leave from Oct. 31 to Nov. 13. Any urgency, please call 18964958151. Sorry for any action delay during these days. Thanks. > hbck dies on NPE when a region folder exists but the table does not > --- > > Key: HBASE-6416 > URL: https://issues.apache.org/jira/browse/HBASE-6416 > Project: HBase > Issue Type: Bug >Reporter: Jean-Daniel Cryans > Fix For: 0.96.0, 0.94.4 > > Attachments: hbase-6416.patch, hbase-6416-v1.patch > > > This is what I'm getting for leftover data that has no .regioninfo > First: > {quote} > 12/07/17 23:13:37 WARN util.HBaseFsck: Failed to read .regioninfo file for > region null > java.io.FileNotFoundException: File does not exist: > /hbase/stumble_info_urlid_user/bd5f6cfed674389b4d7b8c1be227cb46/.regioninfo > at > org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1822) > at > org.apache.hadoop.hdfs.DFSClient$DFSInputStream.(DFSClient.java:1813) > at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:544) > at > org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:187) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:456) > at > org.apache.hadoop.hbase.util.HBaseFsck.loadHdfsRegioninfo(HBaseFsck.java:611) > at > org.apache.hadoop.hbase.util.HBaseFsck.access$2200(HBaseFsck.java:140) > at > org.apache.hadoop.hbase.util.HBaseFsck$WorkItemHdfsRegionInfo.call(HBaseFsck.java:2882) > at > org.apache.hadoop.hbase.util.HBaseFsck$WorkItemHdfsRegionInfo.call(HBaseFsck.java:2866) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) > at java.util.concurrent.FutureTask.run(FutureTask.java:138) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) > at 
java.util.concurrent.FutureTask.run(FutureTask.java:138) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:662) > {quote} > Then it hangs on: > {quote} > 12/07/17 23:13:39 INFO util.HBaseFsck: Attempting to handle orphan hdfs dir: > hdfs://sfor3s24:10101/hbase/stumble_info_urlid_user/bd5f6cfed674389b4d7b8c1be227cb46 > 12/07/17 23:13:39 INFO util.HBaseFsck: checking orphan for table null > Exception in thread "main" java.lang.NullPointerException > at > org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$100(HBaseFsck.java:1634) > at > org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:435) > at > org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:408) > at > org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:529) > at > org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:313) > at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:386) > at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3227) > {quote} > The NPE is sent by: > {code} > Preconditions.checkNotNull("Table " + tableName + "' not present!", > tableInfo); > {code} > I wonder why the condition checking was added if we don't handle it... In any > case hbck dies but it hangs because there are some non-daemon hanging around. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
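One detail worth flagging about the quoted precondition: assuming the call uses Guava's `Preconditions.checkNotNull(reference, errorMessage)`, the arguments appear to be reversed — the reference comes first and the message second, so as written the call checks the (never-null) message string and would not fire as intended. The JDK's `Objects.requireNonNull` has the same argument order; a small sketch of the corrected usage (class and method names hypothetical):

```java
import java.util.Objects;

public class NullCheckDemo {
    // Returns "ok" for a present table, or the message of the NPE thrown
    // for a missing one — demonstrating correct argument order:
    // reference first, error message second.
    static String checkTable(Object tableInfo, String tableName) {
        try {
            Objects.requireNonNull(tableInfo, "Table '" + tableName + "' not present!");
            return "ok";
        } catch (NullPointerException e) {
            return e.getMessage();
        }
    }
}
```

With the arguments in the right order, hbck would at least die with the intended "not present" message instead of a bare NPE from deeper in TableInfo.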
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489946#comment-13489946 ] Hadoop QA commented on HBASE-7066: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12551969/7066-addendum.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 86 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3219//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3219//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3219//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3219//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3219//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3219//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3219//console This message is automatically generated. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: 7066-addendum.txt, HBASE-7066_94.patch, > HBASE-7066_trunk.patch, HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6588) enable table throws npe and leaves trash in zk in competition with delete table
[ https://issues.apache.org/jira/browse/HBASE-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6588: - Fix Version/s: (was: 0.94.3) 0.94.4 Looks like this got abandoned. Moving to 0.94.4. > enable table throws npe and leaves trash in zk in competition with delete > table > --- > > Key: HBASE-6588 > URL: https://issues.apache.org/jira/browse/HBASE-6588 > Project: HBase > Issue Type: Bug >Affects Versions: 0.94.0 >Reporter: Zhou wenjian >Assignee: Zhou wenjian > Fix For: 0.94.4 > > Attachments: HBASE-6588-trunk.patch, HBASE-6588-trunk-v2.patch, > HBASE-6588-trunk-v3.patch, HBASE-6588-trunk-v4.patch, > HBASE-6588-trunk-v5.patch, HBASE-6588-trunk-v6.patch, > HBASE-6588-trunk-v6.patch > > > 2012-08-15 19:23:36,178 DEBUG org.apache.hadoop.hbase.client.ClientScanner: > Creating scanner over .META. starting at key 'test,,' > 2012-08-15 19:23:36,178 DEBUG org.apache.hadoop.hbase.client.ClientScanner: > Advancing internal scanner to startKey at 'test,,' > 2012-08-15 19:24:09,180 DEBUG org.apache.hadoop.hbase.client.ClientScanner: > Creating scanner over .META. starting at key '' > 2012-08-15 19:24:09,180 DEBUG org.apache.hadoop.hbase.client.ClientScanner: > Advancing internal scanner to startKey at '' > 2012-08-15 19:24:09,183 DEBUG org.apache.hadoop.hbase.client.ClientScanner: > Finished with scanning at {NAME => '.META.,,1', STARTKEY => '', ENDKEY => '', > ENCODED => 1028785192,} > 2012-08-15 19:24:09,183 DEBUG org.apache.hadoop.hbase.master.CatalogJanitor: > Scanned 2 catalog row(s) and gc'd 0 unreferenced parent region(s) > 2012-08-15 19:25:12,260 DEBUG > org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Deleting region > test,,1345029764571.d1e24b251ca6286c840a9a5f571b7db1. from META and FS > 2012-08-15 19:25:12,263 INFO org.apache.hadoop.hbase.catalog.MetaEditor: > Deleted region test,,1345029764571.d1e24b251ca6286c840a9a5f571b7db1. 
from META > 2012-08-15 19:25:12,265 INFO > org.apache.hadoop.hbase.master.handler.EnableTableHandler: Attemping to > enable the table test > 2012-08-15 19:25:12,265 WARN org.apache.hadoop.hbase.zookeeper.ZKTable: > Moving table test state to enabling but was not first in disabled state: null > 2012-08-15 19:25:12,267 DEBUG org.apache.hadoop.hbase.client.ClientScanner: > Creating scanner over .META. starting at key 'test,,' > 2012-08-15 19:25:12,267 DEBUG org.apache.hadoop.hbase.client.ClientScanner: > Advancing internal scanner to startKey at 'test,,' > 2012-08-15 19:25:12,270 DEBUG org.apache.hadoop.hbase.client.ClientScanner: > Finished with scanning at {NAME => '.META.,,1', STARTKEY => '', ENDKEY => '', > ENCODED => 1028785192,} > 2012-08-15 19:25:12,270 ERROR org.apache.hadoop.hbase.executor.EventHandler: > Caught throwable while processing event C_M_ENABLE_TABLE > java.lang.NullPointerException > at > org.apache.hadoop.hbase.master.handler.EnableTableHandler.handleEnableTable(EnableTableHandler.java:116) > at > org.apache.hadoop.hbase.master.handler.EnableTableHandler.process(EnableTableHandler.java:97) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:662) > The table is disabled; then we enable and delete the table at the same time. > Since the thread count of MASTER_TABLE_OPERATIONS is 1 by default, the two > operations are serialized in the master. Before deletetable has deleted all the regions > in meta, CreateTableHandler passes the tableExists check, then blocks until > deletetable finishes; then CreateTableHandler will set the zk state to enabling, > and find no data in meta: > regionsInMeta = MetaReader.getTableRegions(this.ct, tableName, true); > int countOfRegionsInTable = regionsInMeta.size(); > An NPE will be thrown here. > And we can no longer create the same table afterwards.
[jira] [Commented] (HBASE-5970) Improve the AssignmentManager#updateTimer and speed up handling opened event
[ https://issues.apache.org/jira/browse/HBASE-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489944#comment-13489944 ] yang ming commented on HBASE-5970: -- [~zjushch] I have tried this patch on 0.94.2 with 100,000 regions (one empty table) and 4 RSes. I restarted the cluster, but found handling of the opened event still very slow. I did not see any improvement; can you tell us about your test environment? Thanks. > Improve the AssignmentManager#updateTimer and speed up handling opened event > > > Key: HBASE-5970 > URL: https://issues.apache.org/jira/browse/HBASE-5970 > Project: HBase > Issue Type: Improvement > Components: master >Reporter: chunhui shen >Assignee: chunhui shen >Priority: Critical > Fix For: 0.96.0 > > Attachments: 5970v3.patch, HBASE-5970.patch, HBASE-5970v2.patch, > HBASE-5970v3.patch, HBASE-5970v4.patch, HBASE-5970v4.patch > > > We found handling of the opened event to be very slow in an environment with lots of > regions. > The problem is the slow AssignmentManager#updateTimer. > We tested bulk assigning 10w (i.e. 100k) regions; the whole process > of bulk assigning took 1 hour. > 2012-05-06 20:31:49,201 INFO > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning 10 > region(s) round-robin across 5 server(s) > 2012-05-06 21:26:32,103 INFO > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning done > I think we could improve AssignmentManager#updateTimer by making a > dedicated thread do this work. > After the improvement, it took only 4.5 minutes: > 2012-05-07 11:03:36,581 INFO > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning 10 > region(s) across 5 server(s), retainAssignment=true > 2012-05-07 11:07:57,073 INFO > org.apache.hadoop.hbase.master.AssignmentManager: Bulk assigning done
[jira] [Updated] (HBASE-6331) Problem with HBCK mergeOverlaps
[ https://issues.apache.org/jira/browse/HBASE-6331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6331: - Fix Version/s: (was: 0.94.3) Removing from 0.94. Pull back if you disagree. > Problem with HBCK mergeOverlaps > --- > > Key: HBASE-6331 > URL: https://issues.apache.org/jira/browse/HBASE-6331 > Project: HBase > Issue Type: Bug > Components: hbck >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 0.96.0 > > Attachments: HBASE-6331_94.patch, HBASE-6331_Trunk.patch > > > In HDFSIntegrityFixer#mergeOverlaps(), there is logic to compute the final key > range of the region created after merging the overlap. > I can see one issue with this code: > {code} > if (RegionSplitCalculator.BYTES_COMPARATOR > .compare(hi.getEndKey(), range.getSecond()) > 0) { > range.setSecond(hi.getEndKey()); > } > {code} > Suppose the overlapping regions include the last region of the table, whose endKey is > the empty byte[]. The merged range should then also end with the empty byte[]. > But per the above logic, any other key compares greater than the empty byte[] > and will be set as the end key. > So the newly created region will not get the empty byte[] as its endkey.
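The subtlety here is that an empty end key means "unbounded", so when comparing end keys the empty byte[] must sort after every real key, which a plain lexicographic byte comparison gets backwards. A sketch of an end-key comparison handling that special case (illustrative only, not HBase's actual RegionSplitCalculator comparator):

```java
public class EndKeyCompare {
    // Compare region end keys where an empty key means "no upper bound"
    // and therefore sorts after every real key; otherwise compare the
    // bytes as unsigned values, lexicographically.
    static int compareEndKeys(byte[] a, byte[] b) {
        if (a.length == 0 && b.length == 0) return 0;
        if (a.length == 0) return 1;    // empty end key is the largest
        if (b.length == 0) return -1;
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = Integer.compare(a[i] & 0xff, b[i] & 0xff);
            if (cmp != 0) return cmp;
        }
        return Integer.compare(a.length, b.length);
    }
}
```

With this ordering, a merge that includes the table's last region keeps the empty byte[] as the merged end key instead of replacing it with a "larger" real key.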
[jira] [Updated] (HBASE-6416) hbck dies on NPE when a region folder exists but the table does not
[ https://issues.apache.org/jira/browse/HBASE-6416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6416: - Fix Version/s: (was: 0.94.3) 0.94.4 > hbck dies on NPE when a region folder exists but the table does not > --- > > Key: HBASE-6416 > URL: https://issues.apache.org/jira/browse/HBASE-6416 > Project: HBase > Issue Type: Bug >Reporter: Jean-Daniel Cryans > Fix For: 0.96.0, 0.94.4 > > Attachments: hbase-6416.patch, hbase-6416-v1.patch > > > This is what I'm getting for leftover data that has no .regioninfo > First: > {quote} > 12/07/17 23:13:37 WARN util.HBaseFsck: Failed to read .regioninfo file for > region null > java.io.FileNotFoundException: File does not exist: > /hbase/stumble_info_urlid_user/bd5f6cfed674389b4d7b8c1be227cb46/.regioninfo > at > org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1822) > at > org.apache.hadoop.hdfs.DFSClient$DFSInputStream.(DFSClient.java:1813) > at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:544) > at > org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:187) > at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:456) > at > org.apache.hadoop.hbase.util.HBaseFsck.loadHdfsRegioninfo(HBaseFsck.java:611) > at > org.apache.hadoop.hbase.util.HBaseFsck.access$2200(HBaseFsck.java:140) > at > org.apache.hadoop.hbase.util.HBaseFsck$WorkItemHdfsRegionInfo.call(HBaseFsck.java:2882) > at > org.apache.hadoop.hbase.util.HBaseFsck$WorkItemHdfsRegionInfo.call(HBaseFsck.java:2866) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) > at java.util.concurrent.FutureTask.run(FutureTask.java:138) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) > at java.util.concurrent.FutureTask.run(FutureTask.java:138) > at > 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:662) > {quote} > Then it hangs on: > {quote} > 12/07/17 23:13:39 INFO util.HBaseFsck: Attempting to handle orphan hdfs dir: > hdfs://sfor3s24:10101/hbase/stumble_info_urlid_user/bd5f6cfed674389b4d7b8c1be227cb46 > 12/07/17 23:13:39 INFO util.HBaseFsck: checking orphan for table null > Exception in thread "main" java.lang.NullPointerException > at > org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$100(HBaseFsck.java:1634) > at > org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:435) > at > org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:408) > at > org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:529) > at > org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:313) > at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:386) > at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3227) > {quote} > The NPE is sent by: > {code} > Preconditions.checkNotNull("Table " + tableName + "' not present!", > tableInfo); > {code} > I wonder why the condition checking was added if we don't handle it... In any > case hbck dies but it hangs because there are some non-daemon hanging around. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6305) TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds.
[ https://issues.apache.org/jira/browse/HBASE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6305: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to 0.94. Thanks for the patch Himanshu. > TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds. > > > Key: HBASE-6305 > URL: https://issues.apache.org/jira/browse/HBASE-6305 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 0.92.2, 0.94.1 >Reporter: Jonathan Hsieh >Assignee: Himanshu Vashishtha > Fix For: 0.94.3 > > Attachments: hbase-6305-94.patch, HBASE-6305-94-v2.patch, > HBASE-6305-94-v2.patch, HBASE-6305-v1.patch > > > trunk: mvn clean test -Dhadoop.profile=2.0 -Dtest=TestLocalHBaseCluster > 0.94: mvn clean test -Dhadoop.profile=23 -Dtest=TestLocalHBaseCluster > {code} > testLocalHBaseCluster(org.apache.hadoop.hbase.TestLocalHBaseCluster) Time > elapsed: 0.022 sec <<< ERROR! > java.lang.RuntimeException: Master not initialized after 200 seconds > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:208) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:424) > at > org.apache.hadoop.hbase.TestLocalHBaseCluster.testLocalHBaseCluster(TestLocalHBaseCluster.java:66) > ... > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HBASE-6305) TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds.
[ https://issues.apache.org/jira/browse/HBASE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl reassigned HBASE-6305: Assignee: Himanshu Vashishtha (was: Jonathan Hsieh) > TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds. > > > Key: HBASE-6305 > URL: https://issues.apache.org/jira/browse/HBASE-6305 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 0.92.2, 0.94.1 >Reporter: Jonathan Hsieh >Assignee: Himanshu Vashishtha > Fix For: 0.94.3 > > Attachments: hbase-6305-94.patch, HBASE-6305-94-v2.patch, > HBASE-6305-94-v2.patch, HBASE-6305-v1.patch > > > trunk: mvn clean test -Dhadoop.profile=2.0 -Dtest=TestLocalHBaseCluster > 0.94: mvn clean test -Dhadoop.profile=23 -Dtest=TestLocalHBaseCluster > {code} > testLocalHBaseCluster(org.apache.hadoop.hbase.TestLocalHBaseCluster) Time > elapsed: 0.022 sec <<< ERROR! > java.lang.RuntimeException: Master not initialized after 200 seconds > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:208) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:424) > at > org.apache.hadoop.hbase.TestLocalHBaseCluster.testLocalHBaseCluster(TestLocalHBaseCluster.java:66) > ... > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6469) Failure on enable/disable table will cause table state in zk to be left as enabling/disabling until master is restart
[ https://issues.apache.org/jira/browse/HBASE-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6469: - Fix Version/s: (was: 0.94.3) 0.94.4 > Failure on enable/disable table will cause table state in zk to be left as > enabling/disabling until master is restart > - > > Key: HBASE-6469 > URL: https://issues.apache.org/jira/browse/HBASE-6469 > Project: HBase > Issue Type: Bug >Affects Versions: 0.94.2, 0.96.0 >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 0.96.0, 0.94.4 > > > In Enable/DisableTableHandler code, if something goes wrong in handling, the > table state in zk is left as ENABLING / DISABLING. After that we cannot force > any more action from the API or CLI, and the only recovery path is restarting > the master. > {code} > if (done) { > // Flip the table to enabled. > this.assignmentManager.getZKTable().setEnabledTable( > this.tableNameStr); > LOG.info("Table '" + this.tableNameStr > + "' was successfully enabled. Status: done=" + done); > } else { > LOG.warn("Table '" + this.tableNameStr > + "' wasn't successfully enabled. Status: done=" + done); > } > {code} > Here, if done is false, the table state is not changed. There is also no way > to set skipTableStateCheck from cli / api. > We have run into this issue a couple of times before. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7009) Port HBaseCluster interface/tests to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489941#comment-13489941 ] Lars Hofhansl commented on HBASE-7009: -- [~enis] Wanna commit to 0.94.3? If not, that's fine and I'll move it to 0.94.4 for next month. > Port HBaseCluster interface/tests to 0.94 > - > > Key: HBASE-7009 > URL: https://issues.apache.org/jira/browse/HBASE-7009 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 0.94.3 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 0.94.3 > > Attachments: HBASE-7009-p1.patch, HBASE-7009.patch, > HBASE-7009-v2-squashed.patch > > > Need to port. I am porting V5 patch from the original JIRA; I have a > partially ported (V3) patch from Enis with protocol buffers being reverted to > HRegionInterface/HMasterInterface -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6628) Add HBASE-6059 to 0.94 branch
[ https://issues.apache.org/jira/browse/HBASE-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6628: - Fix Version/s: (was: 0.94.3) 0.94.4 Wanna do that Stack? > Add HBASE-6059 to 0.94 branch > - > > Key: HBASE-6628 > URL: https://issues.apache.org/jira/browse/HBASE-6628 > Project: HBase > Issue Type: Task >Reporter: stack > Fix For: 0.94.4 > > > Look at adding HBASE-6059 to 0.94. Its in trunk. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489939#comment-13489939 ]

Lars Hofhansl commented on HBASE-7066:
--------------------------------------

Hmm... I agree that throwing an exception for a security-related failure is OK (it can be considered an exceptional situation), and also that just returning true/false from the coproc does not provide enough information.

CannotIgnoreException could work for 0.96. For 0.94 it would break old clients on new servers, right? (The old client would not know about CannotIgnoreException.)

Another option is to catch and log IOException but rethrow all HBaseIOExceptions, or maybe just always rethrow DoNotRetryIOException, because the server should not swallow those anyway.

> Some HMaster coprocessor exceptions are being swallowed in try catch blocks
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-7066
>                 URL: https://issues.apache.org/jira/browse/HBASE-7066
>             Project: HBase
>          Issue Type: Bug
>          Components: Coprocessors, security
>    Affects Versions: 0.94.2, 0.96.0
>            Reporter: Francis Liu
>            Assignee: Francis Liu
>            Priority: Critical
>         Attachments: 7066-addendum.txt, HBASE-7066_94.patch,
> HBASE-7066_trunk.patch, HBASE-7066_trunk.patch
>
> This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even
> when an AccessDeniedException is thrown.
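The rethrow policy suggested in the comment (log and swallow plain IOExceptions from a coprocessor hook, but always propagate the do-not-retry kind) can be sketched with stand-in classes. This is illustrative only, not the actual HMaster code; HookRunner and the inner exception class are invented names modeling HBase's DoNotRetryIOException:

```java
// Sketch: surface "cannot ignore" failures (e.g. AccessDeniedException is a
// DoNotRetryIOException in HBase) while keeping the legacy log-and-continue
// behavior for other IO errors from coprocessor hooks.
import java.io.IOException;

public class HookRunner {
    public static class DoNotRetryIOException extends IOException {
        public DoNotRetryIOException(String m) { super(m); }
    }

    public interface Hook { void preShutdown() throws IOException; }

    /** Returns true if the hook ran cleanly, false if a failure was ignored. */
    public static boolean runPreShutdown(Hook hook) throws IOException {
        try {
            hook.preShutdown();
            return true;
        } catch (DoNotRetryIOException e) {
            throw e;                        // must reach the caller; abort shutdown
        } catch (IOException e) {
            System.err.println("ignoring coprocessor failure: " + e);
            return false;                   // legacy swallow-and-log behavior
        }
    }
}
```

With this shape, HMaster.shutdown() would no longer report success when the access check vetoes the operation.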
[jira] [Commented] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489938#comment-13489938 ] Hadoop QA commented on HBASE-7086: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12551938/7086-trunk-v3.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 85 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3218//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3218//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3218//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3218//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3218//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3218//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3218//console This message is automatically generated. > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > 7086-trunk-v2.txt, 7086-trunk-v3.txt, testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. 
[jira] [Updated] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aditya Kishore updated HBASE-7089: -- Attachment: HBASE-7089_trunk_v3.patch Updated patch with test cases > Allow filter to be specified for Get from HBase shell > - > > Key: HBASE-7089 > URL: https://issues.apache.org/jira/browse/HBASE-7089 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.96.0 > > Attachments: HBASE-7089_trunk.patch, HBASE-7089_trunk_v2.patch, > HBASE-7089_trunk_v3.patch > > > Unlike scan, get in HBase shell does not accept FILTER as an argument. > {noformat} > hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, > 'binary:valueX')"} > COLUMN CELL > ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash > Here is some help for this command: > ... > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6796) Backport HBASE-5547, Don't delete HFiles in backup mode.
[ https://issues.apache.org/jira/browse/HBASE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489936#comment-13489936 ] Lars Hofhansl commented on HBASE-6796: -- Test didn't fail in latest build. If a test is killed by a timeout, does the resource checker do the right thing? > Backport HBASE-5547, Don't delete HFiles in backup mode. > > > Key: HBASE-6796 > URL: https://issues.apache.org/jira/browse/HBASE-6796 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Jesse Yates > Fix For: 0.94.3 > > Attachments: hbase-5547-0.94-backport-v0.patch, hbase-6796-v0.patch, > hbase-6796-v1.patch, hbase-6796-v2.patch > > > See HBASE-5547 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6796) Backport HBASE-5547, Don't delete HFiles in backup mode.
[ https://issues.apache.org/jira/browse/HBASE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489932#comment-13489932 ] Hudson commented on HBASE-6796: --- Integrated in HBase-0.94 #568 (See [https://builds.apache.org/job/HBase-0.94/568/]) HBASE-6796 ADDENDUM, remove spurious time limit from testHFileCleaning (Revision 1405275) Result = FAILURE larsh : Files : * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileCleaner.java > Backport HBASE-5547, Don't delete HFiles in backup mode. > > > Key: HBASE-6796 > URL: https://issues.apache.org/jira/browse/HBASE-6796 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Jesse Yates > Fix For: 0.94.3 > > Attachments: hbase-5547-0.94-backport-v0.patch, hbase-6796-v0.patch, > hbase-6796-v1.patch, hbase-6796-v2.patch > > > See HBASE-5547 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489931#comment-13489931 ] Hudson commented on HBASE-7086: --- Integrated in HBase-0.94 #568 (See [https://builds.apache.org/job/HBase-0.94/568/]) HBASE-7086 Enhance ResourceChecker to log stack trace for potentially hanging threads, addendum (Revision 1405207) Result = FAILURE tedyu : Files : * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > 7086-trunk-v2.txt, 7086-trunk-v3.txt, testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489928#comment-13489928 ] Hadoop QA commented on HBASE-7089: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12551942/HBASE-7089_trunk_v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 85 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestMultiParallel Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3217//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3217//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3217//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3217//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3217//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3217//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3217//console This message is automatically generated. > Allow filter to be specified for Get from HBase shell > - > > Key: HBASE-7089 > URL: https://issues.apache.org/jira/browse/HBASE-7089 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.96.0 > > Attachments: HBASE-7089_trunk.patch, HBASE-7089_trunk_v2.patch > > > Unlike scan, get in HBase shell does not accept FILTER as an argument. > {noformat} > hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, > 'binary:valueX')"} > COLUMN CELL > ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash > Here is some help for this command: > ... > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-7086: -- Status: Patch Available (was: Open) > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > 7086-trunk-v2.txt, 7086-trunk-v3.txt, testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-7066: -- Attachment: 7066-addendum.txt Addendum for trunk, demonstrating how master can respond to new exception which cannot be ignored. Open to better naming. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: 7066-addendum.txt, HBASE-7066_94.patch, > HBASE-7066_trunk.patch, HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6852) SchemaMetrics.updateOnCacheHit costs too much while full scanning a table with all of its fields
[ https://issues.apache.org/jira/browse/HBASE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489922#comment-13489922 ]

Ted Yu commented on HBASE-6852:
-------------------------------

There was no related test failure in https://builds.apache.org/view/G-L/view/HBase/job/HBase-0.94/567/ where patch v3 went in.

> SchemaMetrics.updateOnCacheHit costs too much while full scanning a table
> with all of its fields
> --------------------------------------------------------------------------
>
>                 Key: HBASE-6852
>                 URL: https://issues.apache.org/jira/browse/HBASE-6852
>             Project: HBase
>          Issue Type: Improvement
>          Components: metrics
>    Affects Versions: 0.94.0
>            Reporter: Cheng Hao
>            Assignee: Cheng Hao
>            Priority: Minor
>              Labels: performance
>             Fix For: 0.94.3
>
>         Attachments: 6852-0.94_2.patch, 6852-0.94_3.patch, 6852-0.94.txt,
> metrics_hotspots.png, onhitcache-trunk.patch
>
> The SchemaMetrics.updateOnCacheHit costs too much while I am doing the full
> table scanning.
> Here is the top 5 hotspots within regionserver while full scanning a table:
> (Sorry for the less-well-format)
> CPU: Intel Westmere microarchitecture, speed 2.262e+06 MHz (estimated)
> Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit
> mask of 0x00 (No unit mask) count 500
> samples  %        image name  symbol name
> -------------------------------------------------------------------
> 98447  13.4324  14033.jo  void org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, boolean)
>   98447  100.000  14033.jo  void org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, boolean) [self]
> -------------------------------------------------------------------
> 45814  6.2510  14033.jo  int org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, byte[], int, int)
>   45814  100.000  14033.jo  int org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, byte[], int, int) [self]
> -------------------------------------------------------------------
> 43523  5.9384  14033.jo  boolean
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue)
>   43523  100.000  14033.jo  boolean org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue) [self]
> -------------------------------------------------------------------
> 42548  5.8054  14033.jo  int org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, byte[], int, int)
>   42548  100.000  14033.jo  int org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, byte[], int, int) [self]
> -------------------------------------------------------------------
> 40572  5.5358  14033.jo  int org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1
>   40572  100.000  14033.jo  int org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1 [self]
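The profile above shows per-block metrics bookkeeping dominating the scan hot path. One common remedy for this kind of hotspot, sketched here under assumed names (this is not the actual SchemaMetrics patch), is a striped counter such as java.util.concurrent.atomic.LongAdder, which keeps per-thread cells on write and only sums them when the metric is read:

```java
// Sketch: cheap cache hit/miss accounting for a read hot path. LongAdder
// increments touch a thread-local cell, so concurrent scanners do not
// contend; sum() pays the aggregation cost only when metrics are reported.
import java.util.concurrent.atomic.LongAdder;

public class CacheHitCounter {
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    public void onRead(boolean hit) {
        if (hit) hits.increment(); else misses.increment();  // contention-free path
    }

    public long hits()   { return hits.sum(); }   // aggregate on read, not write
    public long misses() { return misses.sum(); }
}
```

LongAdder is JDK 7u-era HBase would not have had (it arrived in Java 8); the same idea can be hand-rolled with padded per-thread counters.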
[jira] [Commented] (HBASE-6796) Backport HBASE-5547, Don't delete HFiles in backup mode.
[ https://issues.apache.org/jira/browse/HBASE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489921#comment-13489921 ]

Ted Yu commented on HBASE-6796:
-------------------------------

Good finding, Lars. I guess it was a typo (missing a 0).

From https://builds.apache.org/view/G-L/view/HBase/job/HBase-0.94/567/testReport/org.apache.hadoop.hbase.master.cleaner/TestHFileCleaner/testHFileCleaning/:

{code}
2012-11-02 20:56:40,589 INFO [pool-1-thread-1] hbase.ResourceChecker(157): after master.cleaner.TestHFileCleaner#testHFileCleaning: 45 threads (was 44), 127 file descriptors (was 127). 0 connections, -thread leak?-
2012-11-02 20:56:40,590 INFO [pool-1-thread-1] hbase.ResourceChecker(180): after master.cleaner.TestHFileCleaner#testHFileCleaning: potentially hanging thread
2012-11-02 20:56:40,591 INFO [pool-1-thread-1] hbase.ResourceChecker(186): java.lang.Object.wait(Native Method)
2012-11-02 20:56:40,591 INFO [pool-1-thread-1] hbase.ResourceChecker(186): java.lang.Object.wait(Object.java:485)
2012-11-02 20:56:40,591 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.ipc.Client.call(Client.java:1056)
2012-11-02 20:56:40,591 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
2012-11-02 20:56:40,591 INFO [pool-1-thread-1] hbase.ResourceChecker(186): $Proxy9.create(Unknown Source)
2012-11-02 20:56:40,591 INFO [pool-1-thread-1] hbase.ResourceChecker(186): sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
2012-11-02 20:56:40,591 INFO [pool-1-thread-1] hbase.ResourceChecker(186): sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
2012-11-02 20:56:40,591 INFO [pool-1-thread-1] hbase.ResourceChecker(186): java.lang.reflect.Method.invoke(Method.java:597)
2012-11-02 20:56:40,592 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
2012-11-02 20:56:40,592 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
2012-11-02 20:56:40,592 INFO [pool-1-thread-1] hbase.ResourceChecker(186): $Proxy9.create(Unknown Source)
2012-11-02 20:56:40,592 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3248)
2012-11-02 20:56:40,592 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:713)
2012-11-02 20:56:40,592 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:182)
2012-11-02 20:56:40,592 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.fs.FileSystem.create(FileSystem.java:555)
2012-11-02 20:56:40,592 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.fs.FileSystem.create(FileSystem.java:536)
2012-11-02 20:56:40,593 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.fs.FileSystem.create(FileSystem.java:498)
2012-11-02 20:56:40,593 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:638)
2012-11-02 20:56:40,593 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.apache.hadoop.hbase.master.cleaner.TestHFileCleaner.testHFileCleaning(TestHFileCleaner.java:119)
2012-11-02 20:56:40,593 INFO [pool-1-thread-1] hbase.ResourceChecker(186): sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2012-11-02 20:56:40,593 INFO [pool-1-thread-1] hbase.ResourceChecker(186): sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
2012-11-02 20:56:40,593 INFO [pool-1-thread-1] hbase.ResourceChecker(186): sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
2012-11-02 20:56:40,593 INFO [pool-1-thread-1] hbase.ResourceChecker(186): java.lang.reflect.Method.invoke(Method.java:597)
2012-11-02 20:56:40,593 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
2012-11-02 20:56:40,594 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
2012-11-02 20:56:40,594 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
2012-11-02 20:56:40,594 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
2012-11-02 20:56:40,594 INFO [pool-1-thread-1] hbase.ResourceChecker(186): org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java
[jira] [Commented] (HBASE-7088) Duplicate code in RowCounter
[ https://issues.apache.org/jira/browse/HBASE-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489920#comment-13489920 ] Hadoop QA commented on HBASE-7088: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12551959/HBASE-7088.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 85 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3216//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3216//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3216//console This message is automatically generated. > Duplicate code in RowCounter > > > Key: HBASE-7088 > URL: https://issues.apache.org/jira/browse/HBASE-7088 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 0.94.2 >Reporter: Jean-Marc Spaggiari >Assignee: Jean-Marc Spaggiari >Priority: Minor > Labels: mapreduce > Attachments: HBASE-7088.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > On the RowCounter mapreduce class, there is a "scan.setFilter(new > FirstKeyOnlyFilter());" statement which is not required at line 125 since we > have this on line 141: > {code} > if (qualifiers.size() == 0) { > scan.setFilter(new FirstKeyOnlyFilter()); > } else { > scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); > } > {code} > Should the line 125 simply be removed? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6796) Backport HBASE-5547, Don't delete HFiles in backup mode.
[ https://issues.apache.org/jira/browse/HBASE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489919#comment-13489919 ]

Lars Hofhansl commented on HBASE-6796:
--------------------------------------

In trunk there is no 2000ms annotation on that test.

> Backport HBASE-5547, Don't delete HFiles in backup mode.
> ---------------------------------------------------------
>
>                 Key: HBASE-6796
>                 URL: https://issues.apache.org/jira/browse/HBASE-6796
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>            Assignee: Jesse Yates
>             Fix For: 0.94.3
>
>         Attachments: hbase-5547-0.94-backport-v0.patch, hbase-6796-v0.patch,
> hbase-6796-v1.patch, hbase-6796-v2.patch
>
> See HBASE-5547
[jira] [Commented] (HBASE-6796) Backport HBASE-5547, Don't delete HFiles in backup mode.
[ https://issues.apache.org/jira/browse/HBASE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489918#comment-13489918 ]

Lars Hofhansl commented on HBASE-6796:
--------------------------------------

Why is that test marked with a 2000ms timeout? That seems a bit short; the test does a bunch of file manipulation. I am going to remove that annotation in favor of the default as an addendum.

> Backport HBASE-5547, Don't delete HFiles in backup mode.
> ---------------------------------------------------------
>
>                 Key: HBASE-6796
>                 URL: https://issues.apache.org/jira/browse/HBASE-6796
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>            Assignee: Jesse Yates
>             Fix For: 0.94.3
>
>         Attachments: hbase-5547-0.94-backport-v0.patch, hbase-6796-v0.patch,
> hbase-6796-v1.patch, hbase-6796-v2.patch
>
> See HBASE-5547
[jira] [Commented] (HBASE-6796) Backport HBASE-5547, Don't delete HFiles in backup mode.
[ https://issues.apache.org/jira/browse/HBASE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489914#comment-13489914 ]

Lars Hofhansl commented on HBASE-6796:
--------------------------------------

TestHFileCleaner.testHFileCleaning has been failing in every build since the checkin. We have three options:
# fix the test
# disable the test
# revert the change

I'd be reluctant to do #3. At the same time I cannot release the next RC with a consistently failing test.

> Backport HBASE-5547, Don't delete HFiles in backup mode.
> ---------------------------------------------------------
>
>                 Key: HBASE-6796
>                 URL: https://issues.apache.org/jira/browse/HBASE-6796
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>            Assignee: Jesse Yates
>             Fix For: 0.94.3
>
>         Attachments: hbase-5547-0.94-backport-v0.patch, hbase-6796-v0.patch,
> hbase-6796-v1.patch, hbase-6796-v2.patch
>
> See HBASE-5547
[jira] [Commented] (HBASE-6330) TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2]
[ https://issues.apache.org/jira/browse/HBASE-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489913#comment-13489913 ]

Lars Hofhansl commented on HBASE-6330:
--------------------------------------

Sure... But anyway, do you have a fix for this? Happy to pull it back.

> TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2]
> --------------------------------------------------------------------------
>
>                 Key: HBASE-6330
>                 URL: https://issues.apache.org/jira/browse/HBASE-6330
>             Project: HBase
>          Issue Type: Sub-task
>          Components: test
>    Affects Versions: 0.94.1, 0.96.0
>            Reporter: Jonathan Hsieh
>            Assignee: Jonathan Hsieh
>              Labels: hadoop-2.0
>             Fix For: 0.96.0
>
>         Attachments: hbase-6330-94.patch, hbase-6330-trunk.patch,
> hbase-6330-v2.patch
>
> See HBASE-5876. I'm going to commit the v3 patches under this name since
> it has been two months (my bad) since the first half was committed and
> found to be incomplete.
[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations
[ https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489912#comment-13489912 ] Lars Hofhansl commented on HBASE-4583: -- Typically the client will retry an operation. That can happen for example when a region moved or is moving. In that case an Increment will just fail, whereas a Put/Delete will be transparently retried by a client. As I said, we can make Increment idempotent too (in a sense) by having the client send a token along and then verifying (somehow, waving hands here) that token at the server to apply the Increment at most once. > Integrate RWCC with Append and Increment operations > --- > > Key: HBASE-4583 > URL: https://issues.apache.org/jira/browse/HBASE-4583 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 0.96.0 > > Attachments: 4583-trunk-less-radical.txt, > 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, > 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, > 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, > 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, > 4583-v3.txt, 4583-v4.txt > > > Currently Increment and Append operations do not work with RWCC and hence a > client could see the results of multiple such operation mixed in the same > Get/Scan. > The semantics might be a bit more interesting here as upsert adds and removes > to and from the memstore. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
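The client-token idea Lars hand-waves at above can be sketched in isolation. This is a hypothetical illustration, not HBase API: all names here (`TokenizedCounter`, `increment`) are invented for the example. The client attaches a unique token to each Increment, and the server applies a given token at most once, so a transparently retried Increment cannot be applied twice:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch (not HBase code) of at-most-once increments via a
// client-supplied token: a retry that reuses the token becomes a no-op.
class TokenizedCounter {
    private final AtomicLong value = new AtomicLong();
    // Tokens already applied; a real server would expire these eventually.
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    /** Applies the increment only the first time this token is seen. */
    long increment(String token, long delta) {
        if (seen.add(token)) {      // add() is atomic: true only for the first caller
            return value.addAndGet(delta);
        }
        return value.get();         // same token again: already applied, no-op
    }
}
```

In this sketch the "verifying (somehow)" step is simply an atomic set insertion; a real implementation would also have to persist or expire tokens across region moves, which is exactly the hard part being hand-waved.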
[jira] [Commented] (HBASE-7092) RegionServer OOM with jdk 1.7 related to ConcurrentHashMap class loader
[ https://issues.apache.org/jira/browse/HBASE-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489908#comment-13489908 ] stack commented on HBASE-7092: -- You have a heap dump Jimmy that we could take a look at ? What jdk1.7 version? > RegionServer OOM with jdk 1.7 related to ConcurrentHashMap class loader > --- > > Key: HBASE-7092 > URL: https://issues.apache.org/jira/browse/HBASE-7092 > Project: HBase > Issue Type: Sub-task >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.96.0 > > > One instance of "java.util.concurrent.ConcurrentHashMap" loaded by " class loader>" occupies 3,972,154,848 (92.88%) bytes. The instance is > referenced by org.apache.hadoop.hbase.regionserver.HRegionServer @ > 0x7038d3798 , loaded by "sun.misc.Launcher$AppClassLoader @ 0x703994668". The > memory is accumulated in one instance of > "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by " loader>". > Keywords > sun.misc.Launcher$AppClassLoader @ 0x703994668 > java.util.concurrent.ConcurrentHashMap > java.util.concurrent.ConcurrentHashMap$Segment[] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7088) Duplicate code in RowCounter
[ https://issues.apache.org/jira/browse/HBASE-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489905#comment-13489905 ] Ted Yu commented on HBASE-7088: --- +1 on patch. > Duplicate code in RowCounter > > > Key: HBASE-7088 > URL: https://issues.apache.org/jira/browse/HBASE-7088 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 0.94.2 >Reporter: Jean-Marc Spaggiari >Assignee: Jean-Marc Spaggiari >Priority: Minor > Labels: mapreduce > Attachments: HBASE-7088.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > On the RowCounter mapreduce class, there is a "scan.setFilter(new > FirstKeyOnlyFilter());" statement which is not required at line 125 since we > have this on line 141: > {code} > if (qualifiers.size() == 0) { > scan.setFilter(new FirstKeyOnlyFilter()); > } else { > scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); > } > {code} > Should the line 125 simply be removed? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7088) Duplicate code in RowCounter
[ https://issues.apache.org/jira/browse/HBASE-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7088: --- Description: On the RowCounter mapreduce class, there is a "scan.setFilter(new FirstKeyOnlyFilter());" statement which is not required at line 125 since we have this on line 141: {code} if (qualifiers.size() == 0) { scan.setFilter(new FirstKeyOnlyFilter()); } else { scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); } {code} Should the line 125 simply be removed? was: On the RowCounter mapreduce class, there is a "scan.setFilter(new FirstKeyOnlyFilter());" statement which is not required at line 125 since we have this on line 141: {code} if (qualifiers.size() == 0) { scan.setFilter(new FirstKeyOnlyFilter()); } else { scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); } {/code} Should the line 125 simply be removed? > Duplicate code in RowCounter > > > Key: HBASE-7088 > URL: https://issues.apache.org/jira/browse/HBASE-7088 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 0.94.2 >Reporter: Jean-Marc Spaggiari >Assignee: Jean-Marc Spaggiari >Priority: Minor > Labels: mapreduce > Attachments: HBASE-7088.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > On the RowCounter mapreduce class, there is a "scan.setFilter(new > FirstKeyOnlyFilter());" statement which is not required at line 125 since we > have this on line 141: > {code} > if (qualifiers.size() == 0) { > scan.setFilter(new FirstKeyOnlyFilter()); > } else { > scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); > } > {code} > Should the line 125 simply be removed? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7088) Duplicate code in RowCounter
[ https://issues.apache.org/jira/browse/HBASE-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7088: --- Description: On the RowCounter mapreduce class, there is a "scan.setFilter(new FirstKeyOnlyFilter());" statement which is not required at line 125 since we have this on line 141: {code} if (qualifiers.size() == 0) { scan.setFilter(new FirstKeyOnlyFilter()); } else { scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); } {/code} Should the line 125 simply be removed? was: On the RowCounter mapreduce class, there is a "scan.setFilter(new FirstKeyOnlyFilter());" statement which is not required at line 125 since we have this on line 141: if (qualifiers.size() == 0) { scan.setFilter(new FirstKeyOnlyFilter()); } else { scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); } Should the line 125 simply be removed? > Duplicate code in RowCounter > > > Key: HBASE-7088 > URL: https://issues.apache.org/jira/browse/HBASE-7088 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 0.94.2 >Reporter: Jean-Marc Spaggiari >Assignee: Jean-Marc Spaggiari >Priority: Minor > Labels: mapreduce > Attachments: HBASE-7088.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > On the RowCounter mapreduce class, there is a "scan.setFilter(new > FirstKeyOnlyFilter());" statement which is not required at line 125 since we > have this on line 141: > {code} > if (qualifiers.size() == 0) { > scan.setFilter(new FirstKeyOnlyFilter()); > } else { > scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); > } > {/code} > Should the line 125 simply be removed? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6330) TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2]
[ https://issues.apache.org/jira/browse/HBASE-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489894#comment-13489894 ] Himanshu Vashishtha commented on HBASE-6330: please ignore last comment :) > TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2] > - > > Key: HBASE-6330 > URL: https://issues.apache.org/jira/browse/HBASE-6330 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 0.94.1, 0.96.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Labels: hadoop-2.0 > Fix For: 0.96.0 > > Attachments: hbase-6330-94.patch, hbase-6330-trunk.patch, > hbase-6330-v2.patch > > > See HBASE-5876. I'm going to commit the v3 patches under this name since > it has been two months (my bad) since the first half was committed and > found to be incomplete. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7088) Duplicate code in RowCounter
[ https://issues.apache.org/jira/browse/HBASE-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7088: --- Status: Patch Available (was: Open) > Duplicate code in RowCounter > > > Key: HBASE-7088 > URL: https://issues.apache.org/jira/browse/HBASE-7088 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 0.94.2 >Reporter: Jean-Marc Spaggiari >Assignee: Jean-Marc Spaggiari >Priority: Minor > Labels: mapreduce > Attachments: HBASE-7088.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > On the RowCounter mapreduce class, there is a "scan.setFilter(new > FirstKeyOnlyFilter());" statement which is not required at line 125 since we > have this on line 141: > if (qualifiers.size() == 0) { > scan.setFilter(new FirstKeyOnlyFilter()); > } else { > scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); > } > Should the line 125 simply be removed? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7088) Duplicate code in RowCounter
[ https://issues.apache.org/jira/browse/HBASE-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7088: --- Attachment: HBASE-7088.patch Removal of a small piece of duplicate code. > Duplicate code in RowCounter > > > Key: HBASE-7088 > URL: https://issues.apache.org/jira/browse/HBASE-7088 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 0.94.2 >Reporter: Jean-Marc Spaggiari >Assignee: Jean-Marc Spaggiari >Priority: Minor > Labels: mapreduce > Attachments: HBASE-7088.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > On the RowCounter mapreduce class, there is a "scan.setFilter(new > FirstKeyOnlyFilter());" statement which is not required at line 125 since we > have this on line 141: > if (qualifiers.size() == 0) { > scan.setFilter(new FirstKeyOnlyFilter()); > } else { > scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); > } > Should the line 125 simply be removed? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HBASE-7093) Couple Increments/Appends with Put/Delete(s)
[ https://issues.apache.org/jira/browse/HBASE-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Sharma reassigned HBASE-7093: --- Assignee: Varun Sharma > Couple Increments/Appends with Put/Delete(s) > > > Key: HBASE-7093 > URL: https://issues.apache.org/jira/browse/HBASE-7093 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.96.0 >Reporter: Varun Sharma >Assignee: Varun Sharma > > See related issue - https://issues.apache.org/jira/browse/HBASE-4583 > Currently, we cannot bundle increment/append with put/delete operations. The > above JIRA MVCC'izes the increment/append operations. > One issue is that increment(s)/append(s) are not idempotent and hence > repeating the transaction has an associated issue of leading to incorrect > value/append results. This could be solved by passing additional tokens as > part of the append(s). > One possible high level approach could be: > 1) Class IncrementMutation which inherits from Increment and Mutation > 2) In the mutateRow call, we add a case for "IncrementMutation" object > 3) Factor out the code wrapped inside the "lock and MVCC" from increment() > function to internalIncrement. > 4) Call internalIncrement from mutateRow and increment() -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5498) Secure Bulk Load
[ https://issues.apache.org/jira/browse/HBASE-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489881#comment-13489881 ] Francis Liu commented on HBASE-5498: Thanks Ted, I forgot to put up the reviewboard link: https://reviews.apache.org/r/7849/ > Secure Bulk Load > > > Key: HBASE-5498 > URL: https://issues.apache.org/jira/browse/HBASE-5498 > Project: HBase > Issue Type: Improvement > Components: security >Reporter: Francis Liu >Assignee: Francis Liu > Fix For: 0.96.0, 0.94.4 > > Attachments: HBASE-5498_94_2.patch, HBASE-5498_94_3.patch, > HBASE-5498_94.patch, HBASE-5498_94.patch, HBASE-5498_draft_94.patch, > HBASE-5498_draft.patch, HBASE-5498_trunk_2.patch, HBASE-5498_trunk_3.patch, > HBASE-5498_trunk_4.patch, HBASE-5498_trunk.patch > > > Design doc: > https://cwiki.apache.org/confluence/display/HCATALOG/HBase+Secure+Bulk+Load > Short summary: > Security as it stands does not cover the bulkLoadHFiles() feature. Users > calling this method will bypass ACLs. Also loading is made more cumbersome in > a secure setting because of hdfs privileges. bulkLoadHFiles() moves the data > from user's directory to the hbase directory, which would require certain > write access privileges set. > Our solution is to create a coprocessor which makes use of AuthManager to > verify if a user has write access to the table. If so, launches a MR job as > the hbase user to do the importing (ie rewrite from text to hfiles). One > tricky part this job will have to do is impersonate the calling user when > reading the input files. We can do this by expecting the user to pass an hdfs > delegation token as part of the secureBulkLoad() coprocessor call and extend > an inputformat to make use of that token. The output is written to a > temporary directory accessible only by hbase and then bulkloadHFiles() is > called. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations
[ https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489869#comment-13489869 ] Varun Sharma commented on HBASE-4583: - Okay - created JIRA https://issues.apache.org/jira/browse/HBASE-7093 This might sound like a naive question, but why are mutations required to be idempotent - is it so that their result is always guaranteed (feel free to discuss over the other JIRA)? > Integrate RWCC with Append and Increment operations > --- > > Key: HBASE-4583 > URL: https://issues.apache.org/jira/browse/HBASE-4583 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 0.96.0 > > Attachments: 4583-trunk-less-radical.txt, > 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, > 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, > 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, > 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, > 4583-v3.txt, 4583-v4.txt > > > Currently Increment and Append operations do not work with RWCC and hence a > client could see the results of multiple such operation mixed in the same > Get/Scan. > The semantics might be a bit more interesting here as upsert adds and removes > to and from the memstore. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7093) Couple Increments/Appends with Put/Delete(s)
Varun Sharma created HBASE-7093: --- Summary: Couple Increments/Appends with Put/Delete(s) Key: HBASE-7093 URL: https://issues.apache.org/jira/browse/HBASE-7093 Project: HBase Issue Type: Improvement Affects Versions: 0.96.0 Reporter: Varun Sharma See related issue - https://issues.apache.org/jira/browse/HBASE-4583 Currently, we cannot bundle increment/append with put/delete operations. The above JIRA MVCC'izes the increment/append operations. One issue is that increment(s)/append(s) are not idempotent and hence repeating the transaction has an associated issue of leading to incorrect value/append results. This could be solved by passing additional tokens as part of the append(s). One possible high level approach could be: 1) Class IncrementMutation which inherits from Increment and Mutation 2) In the mutateRow call, we add a case for "IncrementMutation" object 3) Factor out the code wrapped inside the "lock and MVCC" from increment() function to internalIncrement. 4) Call internalIncrement from mutateRow and increment() -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
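The four-step plan above can be sketched in miniature. Everything below is illustrative only, assuming hypothetical names (`RegionSketch`, `IncrementMutation`, `internalIncrement`); it is not actual HRegion code. The point is the shape of the refactor: `mutateRow` gains a case for the increment mutation type, and both it and the stand-alone `increment()` delegate to a shared `internalIncrement()`:

```java
import java.util.List;

// Toy model of steps 1-4 above (names hypothetical, not the real HRegion API).
class RegionSketch {
    interface Mutation {}
    static class Put implements Mutation {}
    // Step 1: an increment carried as a mutation so it can ride in mutateRow.
    static class IncrementMutation implements Mutation {
        final long delta;
        IncrementMutation(long delta) { this.delta = delta; }
    }

    private long counter;

    // Step 2: mutateRow adds a case for IncrementMutation; in the real region
    // this loop would run under the row lock and a single MVCC write entry.
    void mutateRow(List<Mutation> mutations) {
        for (Mutation m : mutations) {
            if (m instanceof IncrementMutation) {
                internalIncrement((IncrementMutation) m);   // step 4
            } else {
                // apply Put/Delete as today (elided in this sketch)
            }
        }
    }

    // The stand-alone increment() also routes through internalIncrement (step 4).
    long increment(IncrementMutation inc) { return internalIncrement(inc); }

    // Step 3: the code previously wrapped in "lock and MVCC" inside increment().
    private long internalIncrement(IncrementMutation inc) { return counter += inc.delta; }

    long value() { return counter; }
}
```

The design choice being illustrated: factoring out `internalIncrement` lets the batched and stand-alone paths share one implementation instead of duplicating the locked section.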
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489868#comment-13489868 ] Francis Liu commented on HBASE-7066: Scratch that, the return class would come from the coprocessor framework so it is core. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489867#comment-13489867 ] Francis Liu commented on HBASE-7066: return vs exceptions: The problem with both is that it'll leak security code into core. Though with exceptions there's a workaround with inheriting core classes/interfaces. Which makes Stack's suggestion much more viable. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6852) SchemaMetrics.updateOnCacheHit costs too much while full scanning a table with all of its fields
[ https://issues.apache.org/jira/browse/HBASE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489865#comment-13489865 ] Ted Yu commented on HBASE-6852: --- @Cheng: The build failure might be due to other reasons. Check back in a day or two. > SchemaMetrics.updateOnCacheHit costs too much while full scanning a table > with all of its fields > > > Key: HBASE-6852 > URL: https://issues.apache.org/jira/browse/HBASE-6852 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 0.94.0 >Reporter: Cheng Hao >Assignee: Cheng Hao >Priority: Minor > Labels: performance > Fix For: 0.94.3 > > Attachments: 6852-0.94_2.patch, 6852-0.94_3.patch, 6852-0.94.txt, > metrics_hotspots.png, onhitcache-trunk.patch > > > The SchemaMetrics.updateOnCacheHit costs too much while I am doing the full > table scanning. > Here is the top 5 hotspots within regionserver while full scanning a table: > (Sorry for the less-well-format) > CPU: Intel Westmere microarchitecture, speed 2.262e+06 MHz (estimated) > Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit > mask of 0x00 (No unit mask) count 500 > samples %image name symbol name > --- > 9844713.4324 14033.jo void > org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, > boolean) > 98447100.000 14033.jo void > org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, > boolean) [self] > --- > 45814 6.2510 14033.jo int > org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, > byte[], int, int) > 45814100.000 14033.jo int > org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, > byte[], int, int) [self] > --- > 43523 5.9384 14033.jo boolean > org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue) > 43523100.000 
14033.jo boolean > org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue) > [self] > --- > 42548 5.8054 14033.jo int > org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, > byte[], int, int) > 42548100.000 14033.jo int > org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, > byte[], int, int) [self] > --- > 40572 5.5358 14033.jo int > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], > int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1 > 40572100.000 14033.jo int > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], > int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1 [self] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7092) RegionServer OOM with jdk 1.7 related to ConcurrentHashMap class loader
[ https://issues.apache.org/jira/browse/HBASE-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489866#comment-13489866 ] Jimmy Xiang commented on HBASE-7092: Started a 4-node cluster with JDK 1.7. Ran YCSB for a short time and all regionservers died due to OOM. > RegionServer OOM with jdk 1.7 related to ConcurrentHashMap class loader > --- > > Key: HBASE-7092 > URL: https://issues.apache.org/jira/browse/HBASE-7092 > Project: HBase > Issue Type: Sub-task >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.96.0 > > > One instance of "java.util.concurrent.ConcurrentHashMap" loaded by "<system > class loader>" occupies 3,972,154,848 (92.88%) bytes. The instance is > referenced by org.apache.hadoop.hbase.regionserver.HRegionServer @ > 0x7038d3798 , loaded by "sun.misc.Launcher$AppClassLoader @ 0x703994668". The > memory is accumulated in one instance of > "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "<system class > loader>". > Keywords > sun.misc.Launcher$AppClassLoader @ 0x703994668 > java.util.concurrent.ConcurrentHashMap > java.util.concurrent.ConcurrentHashMap$Segment[] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7092) RegionServer OOM with jdk 1.7 related to ConcurrentHashMap class loader
Jimmy Xiang created HBASE-7092: -- Summary: RegionServer OOM with jdk 1.7 related to ConcurrentHashMap class loader Key: HBASE-7092 URL: https://issues.apache.org/jira/browse/HBASE-7092 Project: HBase Issue Type: Sub-task Reporter: Jimmy Xiang One instance of "java.util.concurrent.ConcurrentHashMap" loaded by "<system class loader>" occupies 3,972,154,848 (92.88%) bytes. The instance is referenced by org.apache.hadoop.hbase.regionserver.HRegionServer @ 0x7038d3798 , loaded by "sun.misc.Launcher$AppClassLoader @ 0x703994668". The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "<system class loader>". Keywords sun.misc.Launcher$AppClassLoader @ 0x703994668 java.util.concurrent.ConcurrentHashMap java.util.concurrent.ConcurrentHashMap$Segment[] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HBASE-7092) RegionServer OOM with jdk 1.7 related to ConcurrentHashMap class loader
[ https://issues.apache.org/jira/browse/HBASE-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang reassigned HBASE-7092: -- Assignee: Jimmy Xiang > RegionServer OOM with jdk 1.7 related to ConcurrentHashMap class loader > --- > > Key: HBASE-7092 > URL: https://issues.apache.org/jira/browse/HBASE-7092 > Project: HBase > Issue Type: Sub-task >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang > Fix For: 0.96.0 > > > One instance of "java.util.concurrent.ConcurrentHashMap" loaded by " class loader>" occupies 3,972,154,848 (92.88%) bytes. The instance is > referenced by org.apache.hadoop.hbase.regionserver.HRegionServer @ > 0x7038d3798 , loaded by "sun.misc.Launcher$AppClassLoader @ 0x703994668". The > memory is accumulated in one instance of > "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by " loader>". > Keywords > sun.misc.Launcher$AppClassLoader @ 0x703994668 > java.util.concurrent.ConcurrentHashMap > java.util.concurrent.ConcurrentHashMap$Segment[] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6852) SchemaMetrics.updateOnCacheHit costs too much while full scanning a table with all of its fields
[ https://issues.apache.org/jira/browse/HBASE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489861#comment-13489861 ] Cheng Hao commented on HBASE-6852: -- Ouch! Still failed, and I still couldn't access the build server. Any problem with the build server? > SchemaMetrics.updateOnCacheHit costs too much while full scanning a table > with all of its fields > > > Key: HBASE-6852 > URL: https://issues.apache.org/jira/browse/HBASE-6852 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 0.94.0 >Reporter: Cheng Hao >Assignee: Cheng Hao >Priority: Minor > Labels: performance > Fix For: 0.94.3 > > Attachments: 6852-0.94_2.patch, 6852-0.94_3.patch, 6852-0.94.txt, > metrics_hotspots.png, onhitcache-trunk.patch > > > The SchemaMetrics.updateOnCacheHit costs too much while I am doing the full > table scanning. > Here is the top 5 hotspots within regionserver while full scanning a table: > (Sorry for the less-well-format) > CPU: Intel Westmere microarchitecture, speed 2.262e+06 MHz (estimated) > Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit > mask of 0x00 (No unit mask) count 500 > samples %image name symbol name > --- > 9844713.4324 14033.jo void > org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, > boolean) > 98447100.000 14033.jo void > org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, > boolean) [self] > --- > 45814 6.2510 14033.jo int > org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, > byte[], int, int) > 45814100.000 14033.jo int > org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, > byte[], int, int) [self] > --- > 43523 5.9384 14033.jo boolean > org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue) > 
43523100.000 14033.jo boolean > org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue) > [self] > --- > 42548 5.8054 14033.jo int > org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, > byte[], int, int) > 42548100.000 14033.jo int > org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, > byte[], int, int) [self] > --- > 40572 5.5358 14033.jo int > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], > int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1 > 40572100.000 14033.jo int > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], > int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1 [self] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5498) Secure Bulk Load
[ https://issues.apache.org/jira/browse/HBASE-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489860#comment-13489860 ] Ted Yu commented on HBASE-5498: --- Comment for trunk patch: I agree that HTable.coprocessorService doesn't need to throw Throwable. If you don't want to use the try/catch block, you can follow the example of AggregationClient: {code} public <R, S> R max(final byte[] tableName, final ColumnInterpreter<R, S> ci, final Scan scan) throws Throwable { {code} {code} + */ +public class SecureBulkLoadClient { {code} Please add annotation for new public classes. For LoadIncrementalHFiles: {code} + private boolean useSecure; {code} Should the above field be named useSecurity or secureLoad ? {code} + if(useSecure) { {code} nit: insert space between if and (. {code} + } catch (InterruptedException e) { +LOG.warn("Failed to cancel HDFS delegation token.", e); + } {code} Please restore interrupt status above. More reviews to follow. > Secure Bulk Load > > > Key: HBASE-5498 > URL: https://issues.apache.org/jira/browse/HBASE-5498 > Project: HBase > Issue Type: Improvement > Components: security >Reporter: Francis Liu >Assignee: Francis Liu > Fix For: 0.96.0, 0.94.4 > > Attachments: HBASE-5498_94_2.patch, HBASE-5498_94_3.patch, > HBASE-5498_94.patch, HBASE-5498_94.patch, HBASE-5498_draft_94.patch, > HBASE-5498_draft.patch, HBASE-5498_trunk_2.patch, HBASE-5498_trunk_3.patch, > HBASE-5498_trunk_4.patch, HBASE-5498_trunk.patch > > > Design doc: > https://cwiki.apache.org/confluence/display/HCATALOG/HBase+Secure+Bulk+Load > Short summary: > Security as it stands does not cover the bulkLoadHFiles() feature. Users > calling this method will bypass ACLs. Also loading is made more cumbersome in > a secure setting because of hdfs privileges. bulkLoadHFiles() moves the data > from user's directory to the hbase directory, which would require certain > write access privileges set. 
> Our solution is to create a coprocessor which makes use of AuthManager to > verify if a user has write access to the table. If so, it launches an MR job as > the hbase user to do the importing (i.e. rewrite from text to HFiles). One > tricky part this job will have to do is to impersonate the calling user when > reading the input files. We can do this by expecting the user to pass an hdfs > delegation token as part of the secureBulkLoad() coprocessor call and extend > an InputFormat to make use of that token. The output is written to a > temporary directory accessible only by hbase and then bulkLoadHFiles() is > called. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
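Ted's "restore interrupt status" review note refers to the standard Java pattern: a catch block for InterruptedException should re-set the thread's interrupt flag after logging, so callers further up the stack can still observe the interruption. A minimal, self-contained sketch of the pattern (the method name and the logging call are illustrative stand-ins, not the actual SecureBulkLoadClient code):

```java
public class TokenCancelDemo {
    // Sketch of the reviewed catch block: after logging the failure,
    // re-assert the interrupt flag that the blocking call cleared when
    // it threw InterruptedException.
    public static boolean cancelDelegationToken() {
        try {
            Thread.sleep(10); // stand-in for the blocking cancel call
            return true;
        } catch (InterruptedException e) {
            System.err.println("Failed to cancel HDFS delegation token: " + e);
            Thread.currentThread().interrupt(); // restore interrupt status
            return false;
        }
    }
}
```

Without the `interrupt()` call, the interruption is silently swallowed and an enclosing loop or executor never learns the thread was asked to stop.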
[jira] [Created] (HBASE-7091) support custom GC options in hbase-env.sh
Jesse Yates created HBASE-7091: -- Summary: support custom GC options in hbase-env.sh Key: HBASE-7091 URL: https://issues.apache.org/jira/browse/HBASE-7091 Project: HBase Issue Type: Bug Components: scripts Reporter: Jesse Yates When running things like bin/start-hbase and bin/hbase-daemon.sh start [master|regionserver|etc] we end up setting the HBASE_OPTS property a couple of times via calling hbase-env.sh. This is generally not a problem for most cases, but when you want to set your own GC log properties, one would think you should set HBASE_GC_OPTS, which gets added to HBASE_OPTS. NOPE! That would make too much sense. Running bin/hbase-daemons.sh will run bin/hbase-daemon.sh with the daemons it needs to start. Each time through hbase-daemon.sh we also call bin/hbase. This isn't a big deal except that for each call to hbase-daemon.sh, we also source hbase-env.sh twice (once in the script and once in bin/hbase). This is important for my next point. Note that to turn on GC logging, you uncomment: {code} # export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps $HBASE_GC_OPTS" {code} and then to log to a GC file for each server, you then uncomment: {code} # export HBASE_USE_GC_LOGFILE=true {code} in hbase-env.sh. On the first pass through hbase-daemon.sh, HBASE_GC_OPTS isn't set, so HBASE_OPTS doesn't get anything funky, but we set HBASE_USE_GC_LOGFILE, which then sets HBASE_GC_OPTS to the log file (-Xloggc:...). Then in bin/hbase we again run hbase-env.sh, which now has HBASE_GC_OPTS set, adding the GC file. This isn't a general problem because HBASE_OPTS is set by prefixing the existing HBASE_OPTS (e.g. HBASE_OPTS="$HBASE_OPTS ..."), allowing easy updating. However, GC OPTS don't work the same way, and this is really odd behavior when you want to set your own GC opts, which can include turning on GC log rolling (yes, yes, they really are JVM opts, but they ought to support their own param, to help minimize clutter). 
The simple version of this patch will just add an idempotent GC option to hbase-env.sh and some comments that uncommenting {code} # export HBASE_USE_GC_LOGFILE=true {code} will lead to a custom GC log file per server (along with an example name), so you don't need to set "-Xloggc". The more complex solution does the above and also solves the multiple calls to hbase-env.sh so we can be sane about how all this works. Note that to fix this, hbase-daemon.sh just needs to read in HBASE_USE_GC_LOGFILE after sourcing hbase-env.sh and then update HBASE_OPTS. Oh, and also not source hbase-env.sh in bin/hbase. Even further, we might want to consider adding options just for cases where we don't need GC logging - i.e. the shell, the config reading tool, hbck, etc. This is the hardest version to handle since the first couple will willy-nilly apply the GC options. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
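One way to make the GC options idempotent in hbase-env.sh, as the "simple version" above suggests, is to guard against the double-sourcing directly. The sketch below is hypothetical — the HBASE_GC_OPTS_APPLIED guard variable is invented for illustration and is not part of the actual patch:

```shell
# Hypothetical guard in hbase-env.sh: append the GC options only once,
# even if this file is sourced by both hbase-daemon.sh and bin/hbase.
if [ -z "$HBASE_GC_OPTS_APPLIED" ]; then
  export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps $HBASE_GC_OPTS"
  export HBASE_GC_OPTS_APPLIED=true
fi
```

On the second sourcing the guard variable is already set, so -Xloggc and friends are not appended a second time.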
[jira] [Commented] (HBASE-6410) Move RegionServer Metrics to metrics2
[ https://issues.apache.org/jira/browse/HBASE-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489856#comment-13489856 ] Elliott Clark commented on HBASE-6410: -- Doesn't look like hadoop QA is coming back. I tested the TestMiniClusterLoadSequential locally and it passed 5 times in a row. TestHFileOutputFormat is currently failing for me on trunk. > Move RegionServer Metrics to metrics2 > - > > Key: HBASE-6410 > URL: https://issues.apache.org/jira/browse/HBASE-6410 > Project: HBase > Issue Type: Sub-task > Components: metrics >Affects Versions: 0.96.0 >Reporter: Elliott Clark >Assignee: Elliott Clark >Priority: Blocker > Attachments: HBASE-6410-13.patch, HBASE-6410-15.patch, > HBASE-6410-16.patch, HBASE-6410-1.patch, HBASE-6410-2.patch, > HBASE-6410-3.patch, HBASE-6410-4.patch, HBASE-6410-5.patch, > HBASE-6410-6.patch, HBASE-6410.patch > > > Move RegionServer Metrics to metrics2 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6330) TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2]
[ https://issues.apache.org/jira/browse/HBASE-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489855#comment-13489855 ] Himanshu Vashishtha commented on HBASE-6330: [~lhofhansl] Thanks for the review. As mentioned in earlier comments, this fix is required in the 0.94.x branch only. Trunk is good without it. > TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2] > - > > Key: HBASE-6330 > URL: https://issues.apache.org/jira/browse/HBASE-6330 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 0.94.1, 0.96.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Labels: hadoop-2.0 > Fix For: 0.96.0 > > Attachments: hbase-6330-94.patch, hbase-6330-trunk.patch, > hbase-6330-v2.patch > > > See HBASE-5876. I'm going to commit the v3 patches under this name since > there have been two months (my bad) since the first half was committed and > found to be incomplete. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations
[ https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489854#comment-13489854 ] Lars Hofhansl commented on HBASE-4583: -- What I meant with the fourth issue is that if a RowMutation contains an Increment/Append it is no longer repeatable. Yes, let's move this into a different jira, probably against 0.96 only. > Integrate RWCC with Append and Increment operations > --- > > Key: HBASE-4583 > URL: https://issues.apache.org/jira/browse/HBASE-4583 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 0.96.0 > > Attachments: 4583-trunk-less-radical.txt, > 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, > 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, > 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, > 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, > 4583-v3.txt, 4583-v4.txt > > > Currently Increment and Append operations do not work with RWCC and hence a > client could see the results of multiple such operation mixed in the same > Get/Scan. > The semantics might be a bit more interesting here as upsert adds and removes > to and from the memstore. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6335) Switching log-splitting policy after last failure master start may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6335: - Fix Version/s: (was: 0.94.3) 0.94.4 > Switching log-splitting policy after last failure master start may cause data > loss > -- > > Key: HBASE-6335 > URL: https://issues.apache.org/jira/browse/HBASE-6335 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.92.1, 0.94.0 >Reporter: chunhui shen >Assignee: chunhui shen > Fix For: 0.94.4 > > > How does it happen? > If server A is down, and it has three log files, all the data is from one > region. > File 1: kv01 kv02 kv03 > File 2: kv04 kv05 kv06 > File 3: kv07 kv08 kv09 > Here, kv01 means its log seqID is 01 > Case: Switch to master-local-log-splitting from distributed-log-splitting > 1. Master finds serverA is down, and starts to split its log files using > distributed-log-splitting. > 2. Successfully splits log file 2, moves it to oldLogs, and generates one edit > file named 06 in the region's recover.edits dir. > 3. Master restarts, changes the log-splitting policy to > master-local-log-splitting, and starts to split file 1 and file 3. > 4. Successfully splits log file 1 and file 3, and generates one edit file named 09 > in the region's recover.edits dir. > 5. Region replays edits from edit files 06 and 09. The region's seqID is 06 after it > replays edits from 06, and when replaying edits from 09, it will skip > kv01, kv02, kv03, so this data is lost. > As in the above case, if we switch to distributed-log-splitting from > master-local-log-splitting, it could also cause data loss. > Should we fix this bug or avoid the case? I'm not sure... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6335) Switching log-splitting policy after last failure master start may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6335: - No patch... Moving to 0.94.4 > Switching log-splitting policy after last failure master start may cause data > loss > -- > > Key: HBASE-6335 > URL: https://issues.apache.org/jira/browse/HBASE-6335 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 0.92.1, 0.94.0 >Reporter: chunhui shen >Assignee: chunhui shen > Fix For: 0.94.4 > > > How does it happen? > If server A is down, and it has three log files, all the data is from one > region. > File 1: kv01 kv02 kv03 > File 2: kv04 kv05 kv06 > File 3: kv07 kv08 kv09 > Here, kv01 means its log seqID is 01 > Case: Switch to master-local-log-splitting from distributed-log-splitting > 1. Master finds serverA is down, and starts to split its log files using > distributed-log-splitting. > 2. Successfully splits log file 2, moves it to oldLogs, and generates one edit > file named 06 in the region's recover.edits dir. > 3. Master restarts, changes the log-splitting policy to > master-local-log-splitting, and starts to split file 1 and file 3. > 4. Successfully splits log file 1 and file 3, and generates one edit file named 09 > in the region's recover.edits dir. > 5. Region replays edits from edit files 06 and 09. The region's seqID is 06 after it > replays edits from 06, and when replaying edits from 09, it will skip > kv01, kv02, kv03, so this data is lost. > As in the above case, if we switch to distributed-log-splitting from > master-local-log-splitting, it could also cause data loss. > Should we fix this bug or avoid the case? I'm not sure... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
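The data loss in step 5 of the scenario above follows from the recovered-edits replay rule: a region applies an edit only if its sequence ID is greater than the highest sequence ID it has already replayed. A simplified, self-contained sketch of that rule (not the actual HRegion replay code):

```java
import java.util.ArrayList;
import java.util.List;

public class ReplayOrderDemo {
    // Replays edit files in order; an edit is applied only if its seqID
    // exceeds the region's current max, otherwise it is assumed to be
    // already persisted and is skipped.
    public static List<Integer> replay(List<int[]> editFiles) {
        List<Integer> applied = new ArrayList<>();
        int regionSeqId = 0;
        for (int[] file : editFiles) {
            for (int seqId : file) {
                if (seqId > regionSeqId) {
                    applied.add(seqId);
                    regionSeqId = seqId;
                } // kv01..kv03 are skipped here once file 06 went first
            }
        }
        return applied;
    }
}
```

Replaying edit file 06 (kv04..kv06) before edit file 09 (kv01..kv03, kv07..kv09) silently drops the first three edits, which is exactly the switch-of-policy failure described in the issue.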
[jira] [Updated] (HBASE-6307) Fix hbase unit tests running on hadoop 2.0
[ https://issues.apache.org/jira/browse/HBASE-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6307: - Fix Version/s: (was: 0.94.3) Unassigning form 0.94 due to lack of movement. Please pull back if you disagree. > Fix hbase unit tests running on hadoop 2.0 > -- > > Key: HBASE-6307 > URL: https://issues.apache.org/jira/browse/HBASE-6307 > Project: HBase > Issue Type: Sub-task >Reporter: Jonathan Hsieh > Fix For: 0.96.0 > > > This is an umbrella issue for fixing unit tests and hbase builds form 0.92+ > on top of hadoop 0.23 (currently 0.92/0.94) and hadoop 2.0.x (trunk/0.96). > Once these are up and passing properly, we'll close out the umbrella issue by > adding hbase-trunk-on-hadoop-2 build to the hadoopqa bot. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6330) TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2]
[ https://issues.apache.org/jira/browse/HBASE-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6330: - Fix Version/s: (was: 0.94.3) Removing from 0.94. > TestImportExport has been failing against hadoop 0.23/2.0 profile [Part2] > - > > Key: HBASE-6330 > URL: https://issues.apache.org/jira/browse/HBASE-6330 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 0.94.1, 0.96.0 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Labels: hadoop-2.0 > Fix For: 0.96.0 > > Attachments: hbase-6330-94.patch, hbase-6330-trunk.patch, > hbase-6330-v2.patch > > > See HBASE-5876. I'm going to commit the v3 patches under this name since > there has been two months (my bad) since the first half was committed and > found to be incomplte. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6305) TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds.
[ https://issues.apache.org/jira/browse/HBASE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489847#comment-13489847 ] Lars Hofhansl commented on HBASE-6305: -- [~v.himanshu] I see. Fair enough. We can always revisit this later. +1 on patch as it fixes the issue at hand. > TestLocalHBaseCluster hangs with hadoop 2.0/0.23 builds. > > > Key: HBASE-6305 > URL: https://issues.apache.org/jira/browse/HBASE-6305 > Project: HBase > Issue Type: Sub-task > Components: test >Affects Versions: 0.92.2, 0.94.1 >Reporter: Jonathan Hsieh >Assignee: Jonathan Hsieh > Fix For: 0.94.3 > > Attachments: hbase-6305-94.patch, HBASE-6305-94-v2.patch, > HBASE-6305-94-v2.patch, HBASE-6305-v1.patch > > > trunk: mvn clean test -Dhadoop.profile=2.0 -Dtest=TestLocalHBaseCluster > 0.94: mvn clean test -Dhadoop.profile=23 -Dtest=TestLocalHBaseCluster > {code} > testLocalHBaseCluster(org.apache.hadoop.hbase.TestLocalHBaseCluster) Time > elapsed: 0.022 sec <<< ERROR! > java.lang.RuntimeException: Master not initialized after 200 seconds > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:208) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:424) > at > org.apache.hadoop.hbase.TestLocalHBaseCluster.testLocalHBaseCluster(TestLocalHBaseCluster.java:66) > ... > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations
[ https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489846#comment-13489846 ] Varun Sharma commented on HBASE-4583: - Agreed - this is not simple. In fact, the complexity within this JIRA itself shows that MVCC'ing the increment has its own issues. Thanks for pointing out the above concerns. Let's forget concern #4 since that is an issue irrespective of whether the increment is called within a mutation or outside a mutation: you can have increments applied twice, thrice, etc. I was thinking that we could potentially do this in steps. 1) Class IncrementMutation which inherits from Increment and Mutation 2) In the mutateRow call, we add a case for the "IncrementMutation" object 3) Factor out the code wrapped inside the "lock and MVCC" from the increment() function to internalIncrement. 4) Call internalIncrement from mutateRow and increment() It seems that walEdits for Increment and Append are indeed put(s) - so that might be okay. Shall I move this discussion over to another JIRA? > Integrate RWCC with Append and Increment operations > --- > > Key: HBASE-4583 > URL: https://issues.apache.org/jira/browse/HBASE-4583 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 0.96.0 > > Attachments: 4583-trunk-less-radical.txt, > 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, > 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, > 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, > 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, > 4583-v3.txt, 4583-v4.txt > > > Currently Increment and Append operations do not work with RWCC and hence a > client could see the results of multiple such operation mixed in the same > Get/Scan. > The semantics might be a bit more interesting here as upsert adds and removes > to and from the memstore. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
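Step 3 of Varun's outline — factoring the locked increment core into a shared internalIncrement — can be pictured with a toy sketch. The class and method names follow his proposal but are hypothetical; this is not the HRegion code, and the real method would handle KeyValues, WAL edits, and MVCC, not a single long:

```java
public class IncrementFactoringDemo {
    private long value = 0;
    private final Object rowLock = new Object();

    // Shared core: the "lock and MVCC" section factored out so that both
    // the standalone increment() path and the mutateRow() path reuse it.
    private long internalIncrement(long amount) {
        synchronized (rowLock) {
            value += amount;
            return value;
        }
    }

    public long increment(long amount) {
        return internalIncrement(amount);
    }

    // Stand-in for the mutateRow() case carrying an IncrementMutation.
    public long mutateRowWithIncrement(long amount) {
        return internalIncrement(amount);
    }
}
```

The point of the refactoring is that both entry points go through one locked section, so an increment behaves identically whether issued alone or inside a row mutation.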
[jira] [Commented] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489840#comment-13489840 ] Hadoop QA commented on HBASE-7089: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12551933/HBASE-7089_trunk.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 85 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3215//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3215//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3215//console This message is automatically generated. > Allow filter to be specified for Get from HBase shell > - > > Key: HBASE-7089 > URL: https://issues.apache.org/jira/browse/HBASE-7089 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.96.0 > > Attachments: HBASE-7089_trunk.patch, HBASE-7089_trunk_v2.patch > > > Unlike scan, get in HBase shell does not accept FILTER as an argument. > {noformat} > hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, > 'binary:valueX')"} > COLUMN CELL > ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash > Here is some help for this command: > ... > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations
[ https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489834#comment-13489834 ] Lars Hofhansl commented on HBASE-4583: -- RowMutation is currently limited to Puts and Deletes. Generalizing this is not trivial: * You have to make all the changes to the WAL first, sync the WAL, then change the memstore * Potentially you want to release the row lock before the WAL-sync, which means a rollback phase to the memstore if the WAL sync failed, etc. * Puts and Deletes only need snapshot isolation for consistency, whereas Increment and Append need to be serializable. * Put/Delete/etc are idempotent (client can retry on error) whereas Increment/Append generally aren't (we could make them so by passing tokens along). Long way of saying: It's possible, but maybe not as simple as one might imagine. :) > Integrate RWCC with Append and Increment operations > --- > > Key: HBASE-4583 > URL: https://issues.apache.org/jira/browse/HBASE-4583 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 0.96.0 > > Attachments: 4583-trunk-less-radical.txt, > 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, > 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, > 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, > 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, > 4583-v3.txt, 4583-v4.txt > > > Currently Increment and Append operations do not work with RWCC and hence a > client could see the results of multiple such operation mixed in the same > Get/Scan. > The semantics might be a bit more interesting here as upsert adds and removes > to and from the memstore. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
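Lars's first two bullets describe a strict ordering: append every edit to the WAL, sync it, and only then touch the memstore, so a sync failure never leaves a half-applied row. A toy, in-memory sketch of that discipline, with the WAL and memstore modeled as plain collections (purely illustrative — not HRegion's implementation, and the syncSucceeds flag stands in for a real WAL sync that can throw):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WalFirstDemo {
    final List<String> wal = new ArrayList<>();
    final Map<String, String> memstore = new HashMap<>();

    // Apply a multi-op row mutation atomically: WAL append, WAL sync,
    // then memstore. On sync failure the memstore was never touched,
    // so "rollback" is just discarding the unsynced WAL entries.
    public boolean mutateRow(Map<String, String> edits, boolean syncSucceeds) {
        List<String> pending = new ArrayList<>();
        for (Map.Entry<String, String> e : edits.entrySet()) {
            pending.add(e.getKey() + "=" + e.getValue());
        }
        wal.addAll(pending);          // 1. all edits go to the WAL first
        if (!syncSucceeds) {          // 2. sync the WAL
            wal.removeAll(pending);   //    failed: discard, memstore untouched
            return false;
        }
        memstore.putAll(edits);       // 3. only now mutate the memstore
        return true;
    }
}
```

Releasing the row lock before the sync, as the second bullet contemplates, is what would force a genuine memstore rollback phase; in this simple ordering the memstore never needs undoing.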
[jira] [Commented] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489830#comment-13489830 ] Aditya Kishore commented on HBASE-7089: --- Trying to figure out how to run ruby tests. Will attach the updated patch with the tests (for both get and scan) soon. > Allow filter to be specified for Get from HBase shell > - > > Key: HBASE-7089 > URL: https://issues.apache.org/jira/browse/HBASE-7089 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.96.0 > > Attachments: HBASE-7089_trunk.patch, HBASE-7089_trunk_v2.patch > > > Unlike scan, get in HBase shell does not accept FILTER as an argument. > {noformat} > hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, > 'binary:valueX')"} > COLUMN CELL > ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash > Here is some help for this command: > ... > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7042) Master Coprocessor Endpoint
[ https://issues.apache.org/jira/browse/HBASE-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489829#comment-13489829 ] Andrew Purtell commented on HBASE-7042: --- bq. Apart from being a bit clunky I made them separate so that each class can evolve without master/region usages stepping/confusing on each other. If you think this is ok I'm fine with reusing the Exec and ExecResult. Sounds great. bq. I believe your concern can be addressed by making system coprocessors reloadable. Which I think we should do for both master and region coprocessors anyway. This we can address in a separate jira? Ok. > Master Coprocessor Endpoint > --- > > Key: HBASE-7042 > URL: https://issues.apache.org/jira/browse/HBASE-7042 > Project: HBase > Issue Type: Sub-task >Reporter: Francis Liu >Assignee: Francis Liu > Fix For: 0.96.0 > > Attachments: HBASE-7042_94.patch > > > Having support for a master coprocessor endpoint would enable developers to > easily extended HMaster functionality/features. As is the case for region > server grouping. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489826#comment-13489826 ] Ted Yu commented on HBASE-7089: --- I looked at hbase-server/src/test/ruby/hbase/table_test.rb where tests for scan don't cover Filter. Adding test for Filter would be nice. But I am fine without additional test(s). > Allow filter to be specified for Get from HBase shell > - > > Key: HBASE-7089 > URL: https://issues.apache.org/jira/browse/HBASE-7089 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.96.0 > > Attachments: HBASE-7089_trunk.patch, HBASE-7089_trunk_v2.patch > > > Unlike scan, get in HBase shell does not accept FILTER as an argument. > {noformat} > hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, > 'binary:valueX')"} > COLUMN CELL > ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash > Here is some help for this command: > ... > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations
[ https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489827#comment-13489827 ] Varun Sharma commented on HBASE-4583: - Hi folks, Firstly, awesome to have this patch. Thanks, Lars and everyone. We have been trying to build a consistent application on top of hbase which also works well with counters - consider a user and all his liked "pins" or "tweets". Row is User Id and liked pins are column qualifiers. Also, there is a column maintaining the "count". For maintaining consistent data, it would be nice if we could do the following - write a like and increment the count, and delete a like and decrement the count. I am wondering if we can now couple "put(s)" and "increment(s)" or "delete(s)" and "decrement(s)" from the client side, as part of a single row mutation and ensure a consistent table view for the user in the above application (it will of course be a separate JIRA, though). Thanks Varun > Integrate RWCC with Append and Increment operations > --- > > Key: HBASE-4583 > URL: https://issues.apache.org/jira/browse/HBASE-4583 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 0.96.0 > > Attachments: 4583-trunk-less-radical.txt, > 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, > 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, > 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, > 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, > 4583-v3.txt, 4583-v4.txt > > > Currently Increment and Append operations do not work with RWCC and hence a > client could see the results of multiple such operation mixed in the same > Get/Scan. > The semantics might be a bit more interesting here as upsert adds and removes > to and from the memstore. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489825#comment-13489825 ] Andrew Purtell commented on HBASE-7066: --- bq. Passing information out of methods via exceptions is always a bit weird. Maybe in the general case, but throwing an access denied exception if a security policy check fails seems natural, common, and unsurprising to me. bq. Can we instead make the coprocessor protocol be more explicit? One early thought was an Enum instead of a Boolean. We went with "simpler is better". Changing this internal framework detail if we need it now seems reasonable to me. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489825#comment-13489825 ] Andrew Purtell edited comment on HBASE-7066 at 11/2/12 11:09 PM: - bq. Passing information out of methods via exceptions is always a bit weird. Maybe in the general case but throwing an access denied exception if a security policy check fails seems natural, common, and unsurprising to me. bq. Can we instead make the coprocessor protocol be more explicit? One early thought was an Enum instead of a Boolean. We went with "simpler is better". Changing this internal framework detail if we need it now seems reasonable to me. was (Author: apurtell): bq. Passing information out of methods via exceptions is always a bit weird. Maybe in the general case but throwing an access denied exception if a security policy check seems natural, common, and unsurprising to me. bq. Can we instead make the coprocessor protocol be more explicit? One early thought was an Enum instead of a Boolean. We went with "simpler is better". Changing this internal framework detail if we need it now seems reasonable to me. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
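Andrew's "Enum instead of a Boolean" idea — making the hook outcome explicit so a veto cannot be silently swallowed — could look something like the hypothetical sketch below. The HookOutcome names are invented for illustration; the real coprocessor framework at the time used a boolean bypass flag plus exceptions:

```java
public class HookOutcomeDemo {
    // Explicit tri-state result: a DENY is a first-class outcome, not an
    // exception that a generic try/catch around the hook can swallow.
    public enum HookOutcome { CONTINUE, BYPASS, DENY }

    public static String runOperation(HookOutcome preHookResult) {
        switch (preHookResult) {
            case DENY:   return "rejected";  // surface the veto to the client
            case BYPASS: return "bypassed";  // skip the default action
            default:     return "executed";  // run the default action
        }
    }
}
```

With an explicit DENY in the protocol, HMaster.shutdown() and stopMaster() could not accidentally report success after an access check vetoed the call.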
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489824#comment-13489824 ] Andrew Purtell commented on HBASE-7066: --- bq. Should we have a marker Interface 'DisallowedException' or a base IOE Exception DisallowedException that ADE implements and then in core we'd check for this Missed this one. That would be good. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aditya Kishore updated HBASE-7089: -- Attachment: HBASE-7089_trunk_v2.patch Thanks for catching that Stack. Missed while creating the patch. > Allow filter to be specified for Get from HBase shell > - > > Key: HBASE-7089 > URL: https://issues.apache.org/jira/browse/HBASE-7089 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.96.0 > > Attachments: HBASE-7089_trunk.patch, HBASE-7089_trunk_v2.patch > > > Unlike scan, get in HBase shell does not accept FILTER as an argument. > {noformat} > hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, > 'binary:valueX')"} > COLUMN CELL > ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash > Here is some help for this command: > ... > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489820#comment-13489820 ] Andrew Purtell commented on HBASE-7066: --- bq. Have we ever considered making security first class in 0.96 (not a coprocessor, though still switchable)? I'm not in favor of this approach unless we want HBASE-6222, specifically the KV labeling part, in which case the changes are so invasive anyway we might as well move everything into core and furthermore reimplement access control on top of labeling. bq. Change stopMaster() and shutdown() signature with "throws AccessDeniedException" I think it will be a little weird to have these two methods throw a more specific signature than IOE where everywhere else we have IOE. The larger issue of (ab)use of IOE is a major refactoring. Also, I think AccessDeniedException should remain in the security package until security is otherwise not encapsulated there. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489821#comment-13489821 ] Ted Yu commented on HBASE-7086: --- Integrated addendum for 0.94 to 0.94 branch. Thanks for the review, Lars. > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > 7086-trunk-v2.txt, 7086-trunk-v3.txt, testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HBASE-7090) HMaster#splitLogAndExpireIfOnline actually does it if the server is NOT online
[ https://issues.apache.org/jira/browse/HBASE-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang resolved HBASE-7090. Resolution: Invalid The code is actually right. Invalid issue. > HMaster#splitLogAndExpireIfOnline actually does it if the server is NOT online > -- > > Key: HBASE-7090 > URL: https://issues.apache.org/jira/browse/HBASE-7090 > Project: HBase > Issue Type: Bug > Components: master, Region Assignment >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > > This has been there for some time. It is a surprise to find it is reversed > actually. Did I miss anything? > I think it really means to do it if the server is online. > {noformat} > private void splitLogAndExpireIfOnline(final ServerName sn) > throws IOException { > if (sn == null || !serverManager.isServerOnline(sn)) { > return; > } > LOG.info("Forcing splitLog and expire of " + sn); > fileSystemManager.splitLog(sn); > serverManager.expireServer(sn); > } > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489815#comment-13489815 ] Ted Yu commented on HBASE-7066: --- I concur with Stack's suggestion above @ 02/Nov/12 20:44 AccessDeniedException extends DoNotRetryIOException . Maybe we can introduce a new exception, CannotIgnoreException, which extends DoNotRetryIOException. Shutdown logic would check whether IOException thrown is an instance of CannotIgnoreException. If it is, shutdown is aborted. AccessDeniedException would extend CannotIgnoreException. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
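The hierarchy Ted sketches above can be modeled in a few lines. This is an illustrative stand-in, not the actual HBase classes or HMaster code: the class bodies and the `shutdown()` helper are hypothetical, and the point is only that an `instanceof` check against the marker type lets shutdown logic abort on exceptions it must not swallow while still ignoring other IOEs as it does today.

```java
import java.io.IOException;

class DoNotRetryIOException extends IOException {
    DoNotRetryIOException(String msg) { super(msg); }
}

// Proposed marker: exceptions that shutdown logic must not swallow.
class CannotIgnoreException extends DoNotRetryIOException {
    CannotIgnoreException(String msg) { super(msg); }
}

class AccessDeniedException extends CannotIgnoreException {
    AccessDeniedException(String msg) { super(msg); }
}

public class ShutdownSketch {
    interface Hook { void run() throws IOException; }

    /** Returns true if shutdown proceeds, false if aborted by the hook. */
    static boolean shutdown(Hook preShutdownHook) {
        try {
            preShutdownHook.run();
        } catch (IOException e) {
            if (e instanceof CannotIgnoreException) {
                return false; // e.g. AccessDeniedException: abort shutdown
            }
            // other IOExceptions are logged and swallowed, as today
        }
        return true;
    }

    public static void main(String[] args) {
        // An access-denied failure aborts; a clean hook lets shutdown proceed.
        boolean denied = shutdown(() -> { throw new AccessDeniedException("user lacks ADMIN"); });
        boolean clean = shutdown(() -> {});
        System.out.println("denied=" + denied + " clean=" + clean);
    }
}
```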
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489812#comment-13489812 ] Francis Liu commented on HBASE-7066: Sounds like we need to change the return type from boolean to a class to provide a richer response. This would require a large change to the coprocessor framework if we go this route. Can we come up with an interim solution for now? > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7090) HMaster#splitLogAndExpireIfOnline actually does it if the server is NOT online
Jimmy Xiang created HBASE-7090: -- Summary: HMaster#splitLogAndExpireIfOnline actually does it if the server is NOT online Key: HBASE-7090 URL: https://issues.apache.org/jira/browse/HBASE-7090 Project: HBase Issue Type: Bug Components: master, Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Minor This has been there for some time. It is a surprise to find it is reversed actually. Did I miss anything? I think it really means to do it if the server is online. {noformat} private void splitLogAndExpireIfOnline(final ServerName sn) throws IOException { if (sn == null || !serverManager.isServerOnline(sn)) { return; } LOG.info("Forcing splitLog and expire of " + sn); fileSystemManager.splitLog(sn); serverManager.expireServer(sn); } {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
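Reading the guard carefully shows why this was later resolved as invalid: the early return fires when the server is null or NOT online, so the splitLog/expire work below it runs only for servers that ARE online, which matches the method name. A minimal self-contained model of just the guard (GuardDemo and its sets are hypothetical stand-ins for ServerManager and MasterFileSystem, not HBase code):

```java
import java.util.HashSet;
import java.util.Set;

public class GuardDemo {
    private final Set<String> onlineServers = new HashSet<>();
    private final Set<String> expired = new HashSet<>();

    GuardDemo(String... online) {
        for (String s : online) onlineServers.add(s);
    }

    // Mirrors splitLogAndExpireIfOnline's guard: bail out unless online.
    void splitLogAndExpireIfOnline(String sn) {
        if (sn == null || !onlineServers.contains(sn)) {
            return; // not online: nothing to do
        }
        expired.add(sn); // stands in for splitLog(sn) + expireServer(sn)
    }

    boolean wasExpired(String sn) { return expired.contains(sn); }

    public static void main(String[] args) {
        GuardDemo demo = new GuardDemo("rs1");
        demo.splitLogAndExpireIfOnline("rs1"); // online: work happens
        demo.splitLogAndExpireIfOnline("rs2"); // offline: early return
        System.out.println("rs1 expired=" + demo.wasExpired("rs1")
                + " rs2 expired=" + demo.wasExpired("rs2"));
    }
}
```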
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489811#comment-13489811 ] Francis Liu commented on HBASE-7066: How will we communicate to the user the reason behind the failure? > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-7086: -- Attachment: 7086-trunk-v3.txt Trunk patch v3 illustrates how the stack trace is logged. Here is sample from test output: {code} 2012-11-02 15:46:35,429 INFO [main] hbase.ResourceChecker(147): before: master.cleaner.TestHFileCleaner#testTTLCleaner Thread=43, OpenFileDescriptor=145, MaxFileDescriptor=10240, ConnectionCount=0 2012-11-02 15:46:35,671 DEBUG [main] cleaner.TimeToLiveHFileCleaner(68): Life:106, ttl:100, current:1351896395669, from: 1351896395563 2012-11-02 15:46:35,673 INFO [main] hbase.ResourceChecker(171): after: master.cleaner.TestHFileCleaner#testTTLCleaner Thread=44 (was 43) Potentially hanging thread: LeaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1485) java.lang.Thread.run(Thread.java:680) - Thread LEAK? -, OpenFileDescriptor=145 (was 145), MaxFileDescriptor=10240 (was 10240), ConnectionCount=0 (was 0) 2012-11-02 15:46:35,674 INFO [main] hbase.ResourceChecker(147): before: master.cleaner.TestHFileCleaner#testHFileCleaning Thread=44, OpenFileDescriptor=145, MaxFileDescriptor=10240, ConnectionCount=0 {code} I am open to the naming of the new method in ResourceChecker.ResourceAnalyzer Please provide your comments. 
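The stack-trace capture itself needs only the JDK: `Thread.getAllStackTraces()` returns a snapshot for every live thread. Below is a hedged sketch of the kind of output shown in the log sample above; the `ThreadDump` class and its formatting are illustrative, not the patch's actual ResourceChecker.ResourceAnalyzer API.

```java
import java.util.Map;

public class ThreadDump {
    // Builds a report with one "Potentially hanging thread" header per
    // live thread, followed by its stack frames, one per line.
    static String dump() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            sb.append("Potentially hanging thread: ")
              .append(e.getKey().getName()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // At least the current thread is always live, so the report is non-empty.
        System.out.print(dump());
    }
}
```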
> Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > 7086-trunk-v2.txt, 7086-trunk-v3.txt, testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489808#comment-13489808 ] stack commented on HBASE-7089: -- Should you update the help for Get too Aditya? > Allow filter to be specified for Get from HBase shell > - > > Key: HBASE-7089 > URL: https://issues.apache.org/jira/browse/HBASE-7089 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.96.0 > > Attachments: HBASE-7089_trunk.patch > > > Unlike scan, get in HBase shell does not accept FILTER as an argument. > {noformat} > hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, > 'binary:valueX')"} > COLUMN CELL > ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash > Here is some help for this command: > ... > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489804#comment-13489804 ] Lars Hofhansl commented on HBASE-7066: -- Can we instead make the coprocessor protocol be more explicit? I.e. return true from the pre and post hooks means shutdown OK, false from either of these means do not shut down...? Passing information out of methods via exceptions is always a bit weird. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
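Lars's alternative, an explicit result instead of signaling through an exception, could be as small as an enum returned by the hook. Everything below is hypothetical (the real hooks are coprocessor observer methods on HMaster, not this free-standing class); it only illustrates the shape of the more explicit protocol being discussed:

```java
public class HookResult {
    enum Decision { PROCEED, DENY }

    // Hypothetical pre-shutdown hook: an explicit decision replaces
    // both the boolean bypass flag and the thrown AccessDeniedException.
    static Decision preShutdown(boolean callerIsAdmin) {
        return callerIsAdmin ? Decision.PROCEED : Decision.DENY;
    }

    public static void main(String[] args) {
        System.out.println("admin: " + preShutdown(true));
        System.out.println("non-admin: " + preShutdown(false));
    }
}
```

One downside the thread also touches on: a bare enum still does not carry the reason for the denial back to the user, which is what the exception-based approach gives for free.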
[jira] [Commented] (HBASE-7088) Duplicate code in RowCounter
[ https://issues.apache.org/jira/browse/HBASE-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489803#comment-13489803 ] Jean-Marc Spaggiari commented on HBASE-7088: Sure I will. Will try tonight, else, this week-end. > Duplicate code in RowCounter > > > Key: HBASE-7088 > URL: https://issues.apache.org/jira/browse/HBASE-7088 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 0.94.2 >Reporter: Jean-Marc Spaggiari >Assignee: Jean-Marc Spaggiari >Priority: Minor > Labels: mapreduce > Original Estimate: 1h > Remaining Estimate: 1h > > On the RowCounter mapreduce class, there is a "scan.setFilter(new > FirstKeyOnlyFilter());" statement which is not required at line 125 since we > have this on line 141: > if (qualifiers.size() == 0) { > scan.setFilter(new FirstKeyOnlyFilter()); > } else { > scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); > } > Should the line 125 simply be removed? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
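The redundancy is easy to demonstrate: Scan#setFilter replaces any previously set filter, so the unconditional call at line 125 is always superseded by the conditional block at line 141. A toy model of that behavior (ScanStub and the filter-name strings are stand-ins for HBase's Scan and filter classes, not the real API):

```java
import java.util.Collections;
import java.util.Set;

public class RowCounterFilterDemo {
    static class ScanStub {
        String filter;
        void setFilter(String f) { filter = f; } // last call wins, like Scan#setFilter
    }

    static ScanStub configure(Set<String> qualifiers) {
        ScanStub scan = new ScanStub();
        scan.setFilter("FirstKeyOnlyFilter"); // line 125: always overwritten below
        if (qualifiers.isEmpty()) {
            scan.setFilter("FirstKeyOnlyFilter");
        } else {
            scan.setFilter("FirstKeyValueMatchingQualifiersFilter");
        }
        return scan;
    }

    public static void main(String[] args) {
        System.out.println("no qualifiers: " + configure(Collections.emptySet()).filter);
        System.out.println("one qualifier: " + configure(Collections.singleton("q")).filter);
    }
}
```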
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489788#comment-13489788 ] Francis Liu commented on HBASE-7066: That sounds fine to me, though what do we do with the shutdown() method signature? Since we can't do "throws "? > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aditya Kishore updated HBASE-7089: -- Fix Version/s: 0.96.0 Status: Patch Available (was: Open) Submitting patch for trunk. With this, get now accepts filters. {noformat} hbase(main):004:0> get 't2', 'r1', {FILTER => "ValueFilter (=, 'binary:valueX')"} COLUMN CELL cf:c timestamp=1348831499385, value=valueX cf:t timestamp=1348831478941, value=valueX 2 row(s) in 11.0770 seconds {noformat} > Allow filter to be specified for Get from HBase shell > - > > Key: HBASE-7089 > URL: https://issues.apache.org/jira/browse/HBASE-7089 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.96.0 > > Attachments: HBASE-7089_trunk.patch > > > Unlike scan, get in HBase shell does not accept FILTER as an argument. > {noformat} > hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, > 'binary:valueX')"} > COLUMN CELL > ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash > Here is some help for this command: > ... > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7089) Allow filter to be specified for Get from HBase shell
[ https://issues.apache.org/jira/browse/HBASE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aditya Kishore updated HBASE-7089: -- Attachment: HBASE-7089_trunk.patch > Allow filter to be specified for Get from HBase shell > - > > Key: HBASE-7089 > URL: https://issues.apache.org/jira/browse/HBASE-7089 > Project: HBase > Issue Type: Improvement > Components: shell >Affects Versions: 0.96.0 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Attachments: HBASE-7089_trunk.patch > > > Unlike scan, get in HBase shell does not accept FILTER as an argument. > {noformat} > hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, > 'binary:valueX')"} > COLUMN CELL > ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash > Here is some help for this command: > ... > {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7089) Allow filter to be specified for Get from HBase shell
Aditya Kishore created HBASE-7089: - Summary: Allow filter to be specified for Get from HBase shell Key: HBASE-7089 URL: https://issues.apache.org/jira/browse/HBASE-7089 Project: HBase Issue Type: Improvement Components: shell Affects Versions: 0.96.0 Reporter: Aditya Kishore Assignee: Aditya Kishore Priority: Minor Unlike scan, get in HBase shell does not accept FILTER as an argument. {noformat} hbase(main):001:0> get 'table', 'row3', {FILTER => "ValueFilter (=, 'binary:valueX')"} COLUMN CELL ERROR: Failed parse of {"FILTER"=>"ValueFilter (=, 'binary:valueX')"}, Hash Here is some help for this command: ... {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-7086: -- Attachment: 7086-trunk-v2.txt > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > 7086-trunk-v2.txt, testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489774#comment-13489774 ] Ted Yu commented on HBASE-7086: --- Looks like I cannot access https://builds.apache.org/job/PreCommit-HBASE-Build/ at the moment. Recent Jenkins builds returned strange exception as well. My first trunk patch didn't work. There would be a lot of extraneous log such as the following: {code} NumberFormatException: 2012-11-02 14:23:11,497 DEBUG [pool-1-thread-1] backup.HFileArchiver(338): No existing file in archive for:/home/hduser/trunk/hbase-server/target/test-data/9e6c26d7-45f9-406f-87eb-a733231256ac/testWithMinVersions/.archive/testWithMinVersions/07e89fc98af6b9300cd5c8e4c19fa8d9/colfamily31/a3644ad438fd40f885b29959730c1fde, free to archive original file. NumberFormatException: 2012-11-02 14:23:11,497 DEBUG [pool-1-thread-1] backup.HFileArchiver(345): Finished archiving file from: class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:/home/hduser/trunk/hbase-server/target/test-data/9e6c26d7-45f9-406f-87eb-a733231256ac/testWithMinVersions/testWithMinVersions/07e89fc98af6b9300cd5c8e4c19fa8d9/colfamily31/a3644ad438fd40f885b29959730c1fde, to: /home/hduser/trunk/hbase-server/target/test-data/9e6c26d7-45f9-406f-87eb-a733231256ac/testWithMinVersions/.archive/testWithMinVersions/07e89fc98af6b9300cd5c8e4c19fa8d9/colfamily31/a3644ad438fd40f885b29959730c1fde {code} In trunk patch v2, I pass Log object to ResourceCheckerJUnitListener. It seems that the actual log couldn't be written when there is thread leak: {code} 2012-11-02 14:31:58,084 INFO [main] hbase.ResourceChecker(162): after: io.hfile.TestScannerSelectionUsingTTL#testScannerSelection[3] Thread=11 (was 10) - Thread LEAK? -, OpenFileDescriptor=104 (was 102) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=10240 (was 10240), ConnectionCount=0 (was 0) 2012-11-02 14:31:58,085 INFO [main] hbase.ResourceChecker(144): before: io.hfile.TestScannerSelectionUsingTTL#testScannerSelection[4] Thread=11, OpenFileDescriptor=104, MaxFileDescriptor=10240, ConnectionCount=0 {code} My next step is to add method to ResourceChecker.ResourceAnalyzer which returns array of String so that ResourceChecker can log them. But I want to get N Keywal's input first. > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > 7086-trunk-v2.txt, testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-7086: -- Status: Open (was: Patch Available) > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > 7086-trunk-v2.txt, testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489771#comment-13489771 ] Hudson commented on HBASE-7086: --- Integrated in HBase-0.94 #567 (See [https://builds.apache.org/job/HBase-0.94/567/]) HBASE-7086 Enhance ResourceChecker to log stack trace for potentially hanging threads (Revision 1405081) Result = FAILURE tedyu : Files : * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/ResourceCheckerJUnitRule.java > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6852) SchemaMetrics.updateOnCacheHit costs too much while full scanning a table with all of its fields
[ https://issues.apache.org/jira/browse/HBASE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489773#comment-13489773 ] Hudson commented on HBASE-6852: --- Integrated in HBase-0.94 #567 (See [https://builds.apache.org/job/HBase-0.94/567/]) HBASE-6852 RE-REAPPLY, Cheng worked tirelessly to fix the issues. (Revision 1405083) Result = FAILURE larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java > SchemaMetrics.updateOnCacheHit costs too much while full scanning a table > with all of its fields > > > Key: HBASE-6852 > URL: https://issues.apache.org/jira/browse/HBASE-6852 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 0.94.0 >Reporter: Cheng Hao >Assignee: Cheng Hao >Priority: Minor > Labels: performance > Fix For: 0.94.3 > > Attachments: 6852-0.94_2.patch, 6852-0.94_3.patch, 6852-0.94.txt, > metrics_hotspots.png, onhitcache-trunk.patch > > > The SchemaMetrics.updateOnCacheHit costs too much while I am doing the full > table scanning. 
> Here is the top 5 hotspots within regionserver while full scanning a table:
> (Sorry for the less-well-format)
> CPU: Intel Westmere microarchitecture, speed 2.262e+06 MHz (estimated)
> Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit
> mask of 0x00 (No unit mask) count 500
> samples  %        image name  symbol name
> ---
> 98447    13.4324  14033.jo    void org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, boolean)
> 98447    100.000  14033.jo    void org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, boolean) [self]
> ---
> 45814    6.2510   14033.jo    int org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, byte[], int, int)
> 45814    100.000  14033.jo    int org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, byte[], int, int) [self]
> ---
> 43523    5.9384   14033.jo    boolean org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue)
> 43523    100.000  14033.jo    boolean org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue) [self]
> ---
> 42548    5.8054   14033.jo    int org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, byte[], int, int)
> 42548    100.000  14033.jo    int org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, byte[], int, int) [self]
> ---
> 40572    5.5358   14033.jo    int org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1
> 40572    100.000  14033.jo    int org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1 [self]
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7087) Add to NOTICE.txt a note on jamon being MPL
[ https://issues.apache.org/jira/browse/HBASE-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489772#comment-13489772 ] Hudson commented on HBASE-7087: --- Integrated in HBase-0.94 #567 (See [https://builds.apache.org/job/HBase-0.94/567/]) HBASE-7087 Add to NOTICE.txt a note on jamon being MPL (Revision 1405074) Result = FAILURE stack : Files : * /hbase/branches/0.94/NOTICE.txt > Add to NOTICE.txt a note on jamon being MPL > --- > > Key: HBASE-7087 > URL: https://issues.apache.org/jira/browse/HBASE-7087 > Project: HBase > Issue Type: Task >Reporter: stack >Assignee: stack > Fix For: 0.94.3, 0.96.0 > > Attachments: 7087.txt > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7088) Duplicate code in RowCounter
[ https://issues.apache.org/jira/browse/HBASE-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489769#comment-13489769 ] Elliott Clark commented on HBASE-7088: -- Seems reasonable to me. Do you want to throw up a patch ? > Duplicate code in RowCounter > > > Key: HBASE-7088 > URL: https://issues.apache.org/jira/browse/HBASE-7088 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 0.94.2 >Reporter: Jean-Marc Spaggiari >Assignee: Jean-Marc Spaggiari >Priority: Minor > Labels: mapreduce > Original Estimate: 1h > Remaining Estimate: 1h > > On the RowCounter mapreduce class, there is a "scan.setFilter(new > FirstKeyOnlyFilter());" statement which is not required at line 125 since we > have this on line 141: > if (qualifiers.size() == 0) { > scan.setFilter(new FirstKeyOnlyFilter()); > } else { > scan.setFilter(new FirstKeyValueMatchingQualifiersFilter(qualifiers)); > } > Should the line 125 simply be removed? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
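The redundancy described in HBASE-7088 can be sketched in plain Java with stand-in types (the real FirstKeyOnlyFilter and FirstKeyValueMatchingQualifiersFilter live in org.apache.hadoop.hbase.filter; the string names below are illustrative only, not the actual RowCounter code). The point is that the quoted conditional always assigns a filter, so the earlier unconditional setFilter call at line 125 is overwritten on every path:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: stand-ins for the HBase filter classes named in
// the issue, showing why the earlier unconditional setFilter call is redundant.
public class RowCounterFilterChoice {

    // Mirrors the quoted conditional: every branch assigns exactly one filter,
    // so a preceding scan.setFilter(new FirstKeyOnlyFilter()) has no effect.
    static String chooseFilter(Set<String> qualifiers) {
        if (qualifiers.size() == 0) {
            return "FirstKeyOnlyFilter";
        } else {
            return "FirstKeyValueMatchingQualifiersFilter";
        }
    }

    public static void main(String[] args) {
        Set<String> qualifiers = new HashSet<>();
        System.out.println(chooseFilter(qualifiers));
        qualifiers.add("cf:qual");
        System.out.println(chooseFilter(qualifiers));
    }
}
```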
[jira] [Commented] (HBASE-7086) Enhance ResourceChecker to log stack trace for potentially hanging threads
[ https://issues.apache.org/jira/browse/HBASE-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489747#comment-13489747 ] Lars Hofhansl commented on HBASE-7086: -- +1 on addendum and trunk patch > Enhance ResourceChecker to log stack trace for potentially hanging threads > -- > > Key: HBASE-7086 > URL: https://issues.apache.org/jira/browse/HBASE-7086 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.94.3, 0.96.0 > > Attachments: 7086.94, 7086-94.addendum, 7086-trunk.txt, > testHFileCleaner.out > > > Currently ResourceChecker logs a line similar to the following if it detects > potential thread leak: > {code} > 2012-11-02 10:18:59,299 INFO [main] hbase.ResourceChecker(157): after > master.cleaner.TestHFileCleaner#testTTLCleaner: 44 threads (was 43), 145 file > descriptors (was 145). 0 connections, -thread leak?- > {code} > We should enhance the log to include stack trace of the potentially hanging > thread(s) > This work was motivated when I investigated test failure in HBASE-6796 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations
[ https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489743#comment-13489743 ] Lars Hofhansl commented on HBASE-4583: -- The logic in upsert could be changed to also count the number of versions (in addition to versions older than the current readpoint) and then consider both counts before removing KVs. That way we get the current upsert logic (if you set VERSIONS => 1 for the CF) and also keep at least as many versions as declared in the CF. That's for another jira, though. OK. Any opposition to committing the "less radical" version to 0.96? > Integrate RWCC with Append and Increment operations > --- > > Key: HBASE-4583 > URL: https://issues.apache.org/jira/browse/HBASE-4583 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl >Assignee: Lars Hofhansl > Fix For: 0.96.0 > > Attachments: 4583-trunk-less-radical.txt, > 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, > 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, > 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, > 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, > 4583-v3.txt, 4583-v4.txt > > > Currently Increment and Append operations do not work with RWCC and hence a > client could see the results of multiple such operations mixed in the same > Get/Scan. > The semantics might be a bit more interesting here as upsert adds and removes > to and from the memstore. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
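The version-counting idea in the comment above can be sketched in isolation (a hypothetical method on plain longs, not the actual HBase upsert, which operates on KeyValues in the memstore): a version is removed only when it is both older than the current read point and in excess of the column family's configured VERSIONS. With maxVersions = 1 this approximates the current upsert behavior of removing everything older than the read point.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the comment's idea, not the actual HBase upsert code.
public class UpsertSketch {

    // Versions are given newest-first. A version is dropped only when it is
    // BOTH invisible (older than the current read point) AND in excess of the
    // column family's configured VERSIONS.
    static List<Long> retainAfterUpsert(List<Long> versionsNewestFirst,
                                        long readPoint, int maxVersions) {
        List<Long> kept = new ArrayList<>();
        int seen = 0;
        for (long v : versionsNewestFirst) {
            seen++;
            boolean invisible = v < readPoint;
            boolean excess = seen > maxVersions;
            if (invisible && excess) {
                continue; // safe to drop: no scanner can see it, CF does not need it
            }
            kept.add(v);
        }
        return kept;
    }

    public static void main(String[] args) {
        // read point 7, VERSIONS => 2: version 3 is invisible and in excess
        System.out.println(retainAfterUpsert(Arrays.asList(10L, 5L, 3L), 7L, 2));
    }
}
```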
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489739#comment-13489739 ] stack commented on HBASE-7066: -- Although ADE is from the security package. If we add ADE to the shutdown signature, we are starting to pollute core w/ security. Should we have a marker Interface 'DisallowedException' or a base IOE Exception DisallowedException that ADE implements and then in core we'd check for this ... shutdown would throw this base DisallowedException (There may be a reason other than security violation why a shutdown might be prevented?) What you think? > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
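The base-exception idea floated in the comment above can be sketched in plain Java (all names hypothetical; the AccessDeniedException below is a stand-in for the class in the security package, not the real type): core declares a security-neutral DisallowedException, the security package's exception extends it, and shutdown rethrows only the neutral type, so core never references security directly.

```java
import java.io.IOException;

// Hypothetical sketch of the marker/base-exception idea; none of these
// classes are the real HBase types.
public class DisallowedSketch {

    // Security-neutral base type that core code may depend on.
    public static class DisallowedException extends IOException {
        public DisallowedException(String msg) { super(msg); }
    }

    // Stand-in for the security package's AccessDeniedException, which would
    // extend the neutral base instead of core importing the security package.
    public static class AccessDeniedException extends DisallowedException {
        public AccessDeniedException(String msg) { super(msg); }
    }

    // Core code (e.g. a shutdown hook) rethrows vetoes instead of swallowing them.
    static boolean shutdown(IOException fromCoprocessor) throws DisallowedException {
        if (fromCoprocessor instanceof DisallowedException) {
            throw (DisallowedException) fromCoprocessor;
        }
        return true; // other IOEs: log and proceed, as the current code does
    }

    public static void main(String[] args) {
        try {
            shutdown(new AccessDeniedException("coprocessor veto"));
        } catch (DisallowedException e) {
            System.out.println("shutdown vetoed: " + e.getMessage());
        }
    }
}
```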
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489737#comment-13489737 ] stack commented on HBASE-7066: -- Sounds right to me > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489708#comment-13489708 ] Francis Liu commented on HBASE-7066: So just to summarize what to do with this issue: - Change stopMaster() and shutdown() signature with "throws AccessDeniedException" and have the try-catch block rethrow AccessDeniedException (since it extends IOE) - AccessController should wrap any exception that it doesn't want ignored with an AccessDeniedException Thoughts? > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6966) "Compressed RPCs for HBase" (HBASE-5355) port to trunk
[ https://issues.apache.org/jira/browse/HBASE-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489707#comment-13489707 ] Enis Soztutar commented on HBASE-6966: -- Sorry Devaraj, I've just committed HBASE-7063, you may have to rebase this. > "Compressed RPCs for HBase" (HBASE-5355) port to trunk > -- > > Key: HBASE-6966 > URL: https://issues.apache.org/jira/browse/HBASE-6966 > Project: HBase > Issue Type: Improvement > Components: IPC/RPC >Reporter: Devaraj Das >Assignee: Devaraj Das > Fix For: 0.96.0 > > Attachments: 6966-1.patch, 6966-v1.1.txt, 6966-v2.txt > > > This jira will address the port of the compressed RPC implementation to > trunk. I am expecting the patch to be significantly different due to the PB > stuff in trunk, and hence filed a separate jira. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks
[ https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489704#comment-13489704 ] Francis Liu commented on HBASE-7066: Stack, Good to know, we can do it in increments Master, RS, TokenProvider, etc. Will try and take a look what it would take when I have time. > Some HMaster coprocessor exceptions are being swallowed in try catch blocks > --- > > Key: HBASE-7066 > URL: https://issues.apache.org/jira/browse/HBASE-7066 > Project: HBase > Issue Type: Bug > Components: Coprocessors, security >Affects Versions: 0.94.2, 0.96.0 >Reporter: Francis Liu >Assignee: Francis Liu >Priority: Critical > Attachments: HBASE-7066_94.patch, HBASE-7066_trunk.patch, > HBASE-7066_trunk.patch > > > This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even > when an AccessDeniedException is thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7063) Package name for compression should not contain hfile
[ https://issues.apache.org/jira/browse/HBASE-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7063: - Resolution: Fixed Fix Version/s: 0.96.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed this. Thanks Stack for the review. > Package name for compression should not contain hfile > -- > > Key: HBASE-7063 > URL: https://issues.apache.org/jira/browse/HBASE-7063 > Project: HBase > Issue Type: Improvement > Components: HFile, io >Affects Versions: 0.96.0 >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Labels: noob > Fix For: 0.96.0 > > Attachments: hbase-7063_v1.patch > > > Compression codecs do not have any hfile-specific functionality, and can be > used elsewhere (RPC, hlog, see: HBASE-6966) > We should move o.a.h.h.io.hfile.Compression and related files from the io.hfile > package. We can use io.compress to be in line with hadoop. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira