[jira] [Commented] (HBASE-9113) Expose region statistics on table.jsp
[ https://issues.apache.org/jira/browse/HBASE-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13733809#comment-13733809 ] samar commented on HBASE-9113: -- [~bbeaudreault] Right now I have not introduced any new APIs. I think it's a good idea to expose TableStatus/RegionStatus as a client-side API. We can take it up as a separate JIRA, as suggested by the team. Expose region statistics on table.jsp - Key: HBASE-9113 URL: https://issues.apache.org/jira/browse/HBASE-9113 Project: HBase Issue Type: New Feature Components: Admin, UI Reporter: Bryan Beaudreault Assignee: samar Priority: Minor Attachments: Screen Shot-table-details-V1.png While Hannibal (https://github.com/sentric/hannibal) is great, the goal should be to eventually make it obsolete by providing the same features in the main HBase web UI (and HBaseAdmin API). The first step for that is region statistics on table.jsp. Please provide the same per-region statistics on table.jsp as on rs-status.jsp. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9113) Expose region statistics on table.jsp
[ https://issues.apache.org/jira/browse/HBASE-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13731690#comment-13731690 ] samar commented on HBASE-9113: -- [~jmspaggi] Have done a good test; just finished the coding. Will try with huge region names (table names). [~stack] This is the table.jsp page. We can reach this page after clicking the table name on the master-status page. No new code was introduced; most of it is UI (copied from the region page), and most of the rest just calls the new APIs on the existing instances. [~bbeaudreault] Cluster-level stats would be a good idea; I think we can put them on a new page or add them to the existing home page. Ambari/Cloudera Manager usually has something like that. @Everyone: Please suggest some more table-level stats. I wanted to put the percentage along with the number in the case of size distribution. Your thoughts?
[jira] [Updated] (HBASE-9113) Expose region statistics on table.jsp
[ https://issues.apache.org/jira/browse/HBASE-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-9113: - Attachment: Screen Shot-table-details-V1.png Attaching a screenshot of how it is shaping up. Please validate whether this makes sense so that I can proceed with the development.
[jira] [Assigned] (HBASE-9113) Expose region statistics on table.jsp
[ https://issues.apache.org/jira/browse/HBASE-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar reassigned HBASE-9113: Assignee: samar
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Attachment: HBASE-4360_5.patch Formatting error corrected. Maintain information on the time a RS went dead --- Key: HBASE-4360 URL: https://issues.apache.org/jira/browse/HBASE-4360 Project: HBase Issue Type: Improvement Components: master Affects Versions: 0.94.0 Reporter: Harsh J Assignee: samar Priority: Minor Fix For: 0.98.0, 0.95.2 Attachments: ds_hbase_multiple_server_test.png, ds_hbase.png, HBASE-4360_1.patch, HBASE-4360_2.patch, HBASE-4360_3.patch, HBASE-4360_4.patch, HBASE-4360_5.patch, master-status1.png Just something that'd be generally helpful: maintain DeadServer info with the last timestamp at which the server was determined to be dead. It makes it easier to hunt the logs, and I don't think it's too expensive to maintain (one additional update per dead determination).
[jira] [Commented] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700718#comment-13700718 ] samar commented on HBASE-4360: -- [~nkeywal] Maybe you can try the patch in your environment; it would be a double check. Also a UI review would be helpful :-)
[jira] [Commented] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700973#comment-13700973 ] samar commented on HBASE-4360: -- Thanks [~nkeywal] for catching the issue.
[jira] [Commented] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699224#comment-13699224 ] samar commented on HBASE-4360: -- [~nkeywal] Oh, that looks bad. Will fix it and test with more RSs. Thanks for pointing it out.
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Attachment: Screen Shot 2013-07-03 at 11.46.01 PM.png Multiple dead servers.
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Attachment: (was: Screen Shot 2013-07-03 at 11.46.01 PM.png)
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Attachment: ds_hbase_multiple_server_test.png Multiple dead servers.
[jira] [Commented] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699286#comment-13699286 ] samar commented on HBASE-4360: -- A UI oversight; will submit the changed patch after the modification. Will do a couple more tests.
[jira] [Commented] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696613#comment-13696613 ] samar commented on HBASE-4360: -- [~nkeywal] I suggest the formatting be adjusted while committing. If that is holding it back, maybe I should submit the revised patch.
[jira] [Commented] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696662#comment-13696662 ] samar commented on HBASE-4360: -- [~nkeywal] Thanks :-)
[jira] [Commented] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13692875#comment-13692875 ] samar commented on HBASE-4360: -- I have posted a screenshot too; I made a minor modification which looks better (IMO). A review from a UI expert would be welcome.
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Attachment: HBASE-4360_4.patch Test fixed.
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Attachment: HBASE-4360_3.patch Added a new function to DeadServer, with the least changes.
[jira] [Assigned] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar reassigned HBASE-4360: Assignee: samar
[jira] [Commented] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13681977#comment-13681977 ] samar commented on HBASE-4360: -- [~nkeywal] Point 1) Yes, I agree; I will make the changes. Point 2) Yes, probably something like ServerStats or ServerInfo would make sense if we stick to adding the new field to the current ServerName. The easier approach (with fewer changes) would be to add a function like getDeathTime to DeadServer, which would get the time from the map. But I would prefer to add deathTime to the server object, which then represents a dead server, and make the necessary changes to the DeadServer class, e.g. replacing the map with a set. The changes would be a little larger than now.
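The "fewer changes" option discussed above — keeping the existing map keyed by server name and adding a getDeathTime-style accessor — might look roughly like this. The class and method names here are illustrative; this is a minimal sketch, not the actual HBase DeadServer API.

```java
import java.util.HashMap;
import java.util.Map;

public class DeadServerSketch {
    // Map from server name to the timestamp at which it was declared dead.
    private final Map<String, Long> deadServers = new HashMap<>();

    // One extra map update per dead determination, as the issue notes.
    public synchronized void add(String serverName, long deathTimeMs) {
        deadServers.put(serverName, deathTimeMs);
    }

    // Returns null when the server was never declared dead.
    public synchronized Long getDeathTime(String serverName) {
        return deadServers.get(serverName);
    }

    public static void main(String[] args) {
        DeadServerSketch ds = new DeadServerSketch();
        ds.add("rs1.example.com,60020,1372873000000", 1372873050000L);
        System.out.println(ds.getDeathTime("rs1.example.com,60020,1372873000000"));
        System.out.println(ds.getDeathTime("unknown")); // null
    }
}
```

The preferred alternative from the comment (moving deathTime onto the server object itself) would instead turn the map into a set of enriched server records, at the cost of touching more call sites.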
[jira] [Assigned] (HBASE-8651) Result of integer multiplication cast to long in HRegionFileSystem#sleepBeforeRetry()
[ https://issues.apache.org/jira/browse/HBASE-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar reassigned HBASE-8651: Assignee: samar Result of integer multiplication cast to long in HRegionFileSystem#sleepBeforeRetry() - Key: HBASE-8651 URL: https://issues.apache.org/jira/browse/HBASE-8651 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: samar Priority: Minor {code} Threads.sleep(baseSleepBeforeRetries * sleepMultiplier); {code} Both baseSleepBeforeRetries and sleepMultiplier are integers. Without proper casting, their product may overflow and become negative. Here is an example: {code} static int i = Integer.MAX_VALUE-1; static long j = i * 2; {code} The value of j above is -4, while 4294967292 was the expected value.
[jira] [Updated] (HBASE-8651) Result of integer multiplication cast to long in HRegionFileSystem#sleepBeforeRetry()
[ https://issues.apache.org/jira/browse/HBASE-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-8651: - Attachment: HBASE-8651.patch
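The overflow described in HBASE-8651, and the standard remedy of widening one operand before multiplying, can be sketched as follows. The variable names mirror those quoted in the report, but the values are illustrative and this is not the attached patch:

```java
public class SleepOverflowDemo {
    public static void main(String[] args) {
        // Illustrative values chosen to trigger the overflow; not HBase defaults.
        int baseSleepBeforeRetries = Integer.MAX_VALUE - 1;
        int sleepMultiplier = 2;

        // int * int is computed in 32-bit arithmetic; the already-overflowed
        // result is only widened to long afterwards.
        long broken = baseSleepBeforeRetries * sleepMultiplier;

        // Casting one operand first forces a 64-bit multiplication.
        long fixed = (long) baseSleepBeforeRetries * sleepMultiplier;

        System.out.println(broken); // -4
        System.out.println(fixed);  // 4294967292
    }
}
```

Since the cast binds tighter than `*`, `(long) a * b` widens before multiplying, whereas `(long) (a * b)` would merely widen the already-wrapped 32-bit product.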
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Attachment: HBASE-4360_2.patch With modified test case.
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Status: Patch Available (was: Open)
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Attachment: master-status1.png Screenshot; made some minor UI adjustments too.
[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead
[ https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-4360: - Attachment: HBASE-4360_1.patch version one patch
[jira] [Commented] (HBASE-8652) Number of compacting KVs is not reset at the end of compaction
[ https://issues.apache.org/jira/browse/HBASE-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672331#comment-13672331 ] samar commented on HBASE-8652: -- Can we instead just change the heading on the master status page to something like "Total compacting KVs"? Or, instead of resetting the counter to show that it's done, we could gracefully display (totalCompactingKVs - currentCompactedKVs), which would finally become 0 once it is 100% complete. Number of compacting KVs is not reset at the end of compaction -- Key: HBASE-8652 URL: https://issues.apache.org/jira/browse/HBASE-8652 Project: HBase Issue Type: Bug Reporter: Ted Yu Priority: Minor Looking at master:60010/master-status#compactStas, I noticed that the 'Num. Compacting KVs' column stays unchanged at non-zero value(s). In DefaultCompactor#compact(), we have this at the beginning: {code} this.progress = new CompactionProgress(fd.maxKeyCount); {code} But progress.totalCompactingKVs is not reset at the end of compact().
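The alternative display suggested in the comment — showing the remaining count rather than the cumulative one, so it drains to 0 with no explicit reset — can be sketched like this. `remainingKVs` is a hypothetical helper mirroring the field names quoted in the comment, not part of HBase's CompactionProgress:

```java
public class CompactionProgressDemo {
    // Hypothetical helper: KVs still pending in the current compaction.
    static long remainingKVs(long totalCompactingKVs, long currentCompactedKVs) {
        return totalCompactingKVs - currentCompactedKVs;
    }

    public static void main(String[] args) {
        // Mid-compaction: some KVs are still pending.
        System.out.println(remainingKVs(1000, 250));  // 750
        // Once the compaction completes, the display naturally reads 0,
        // which sidesteps the missing-reset bug described in the issue.
        System.out.println(remainingKVs(1000, 1000)); // 0
    }
}
```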
[jira] [Updated] (HBASE-5110) code enhancement - remove unnecessary if-checks in every loop in HLog class
[ https://issues.apache.org/jira/browse/HBASE-5110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-5110: - Attachment: HBASE-5110_1.patch code enhancement - remove unnecessary if-checks in every loop in HLog class --- Key: HBASE-5110 URL: https://issues.apache.org/jira/browse/HBASE-5110 Project: HBase Issue Type: Improvement Components: wal Affects Versions: 0.90.1, 0.90.2, 0.90.4, 0.92.0 Reporter: Mikael Sitruk Priority: Minor Attachments: HBASE-5110_1.patch The HLog class (method findMemstoresWithEditsEqualOrOlderThan) has an unnecessary if-check in a loop:

{code}
static byte [][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid,
    final Map<byte [], Long> regionsToSeqids) {
  // This method is static so it can be unit tested the easier.
  List<byte []> regions = null;
  for (Map.Entry<byte [], Long> e: regionsToSeqids.entrySet()) {
    if (e.getValue().longValue() <= oldestWALseqid) {
      if (regions == null) regions = new ArrayList<byte []>();
      regions.add(e.getKey());
    }
  }
  return regions == null? null: regions.toArray(new byte [][] {HConstants.EMPTY_BYTE_ARRAY});
}
{code}

The following change is suggested:

{code}
static byte [][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid,
    final Map<byte [], Long> regionsToSeqids) {
  // This method is static so it can be unit tested the easier.
  List<byte []> regions = new ArrayList<byte []>();
  for (Map.Entry<byte [], Long> e: regionsToSeqids.entrySet()) {
    if (e.getValue().longValue() <= oldestWALseqid) {
      regions.add(e.getKey());
    }
  }
  return regions.size() == 0? null: regions.toArray(new byte [][] {HConstants.EMPTY_BYTE_ARRAY});
}
{code}
[jira] [Updated] (HBASE-5110) code enhancement - remove unnecessary if-checks in every loop in HLog class
[ https://issues.apache.org/jira/browse/HBASE-5110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-5110: - Status: Patch Available (was: Open) code enhancement - remove unnecessary if-checks in every loop in HLog class --- Key: HBASE-5110 URL: https://issues.apache.org/jira/browse/HBASE-5110 Project: HBase Issue Type: Improvement Components: wal Affects Versions: 0.92.0, 0.90.4, 0.90.2, 0.90.1 Reporter: Mikael Sitruk Priority: Minor Attachments: HBASE-5110_1.patch The HLog class (method findMemstoresWithEditsEqualOrOlderThan) has an unnecessary if-check in a loop. {code} static byte[][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid, final Map<byte[], Long> regionsToSeqids) { // This method is static so it can be unit tested the easier. List<byte[]> regions = null; for (Map.Entry<byte[], Long> e : regionsToSeqids.entrySet()) { if (e.getValue().longValue() <= oldestWALseqid) { if (regions == null) regions = new ArrayList<byte[]>(); regions.add(e.getKey()); } } return regions == null ? null : regions.toArray(new byte[][] {HConstants.EMPTY_BYTE_ARRAY}); } {code} The following change is suggested: {code} static byte[][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid, final Map<byte[], Long> regionsToSeqids) { // This method is static so it can be unit tested the easier. List<byte[]> regions = new ArrayList<byte[]>(); for (Map.Entry<byte[], Long> e : regionsToSeqids.entrySet()) { if (e.getValue().longValue() <= oldestWALseqid) { regions.add(e.getKey()); } } return regions.size() == 0 ? null : regions.toArray(new byte[][] {HConstants.EMPTY_BYTE_ARRAY}); } {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
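The simplification proposed in HBASE-5110 can be exercised standalone. The sketch below is not the exact patch: HConstants is not available here, so `toArray(new byte[0][])` stands in for the original array-sizing idiom, and the method name is kept only for readability.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FindMemstoresDemo {
    // Sketch of the suggested version: allocate the list up front, drop the
    // in-loop null check, and return null when no region qualifies.
    static byte[][] findMemstoresWithEditsEqualOrOlderThan(
            long oldestWALseqid, Map<byte[], Long> regionsToSeqids) {
        List<byte[]> regions = new ArrayList<>();
        for (Map.Entry<byte[], Long> e : regionsToSeqids.entrySet()) {
            if (e.getValue() <= oldestWALseqid) {
                regions.add(e.getKey());
            }
        }
        return regions.isEmpty() ? null : regions.toArray(new byte[0][]);
    }

    public static void main(String[] args) {
        // Region with seqid 5 qualifies against threshold 7; seqid 9 does not.
        Map<byte[], Long> seqids =
            Map.of(new byte[] {1}, 5L, new byte[] {2}, 9L);
        System.out.println(
            findMemstoresWithEditsEqualOrOlderThan(7, seqids).length); // 1
    }
}
```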
[jira] [Commented] (HBASE-3616) Add per region request information to HServerLoad
[ https://issues.apache.org/jira/browse/HBASE-3616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13668269#comment-13668269 ] samar commented on HBASE-3616: -- [~yuzhih...@gmail.com] I already see the request counts being stored in RegionLoad. buildServerLoad() does {code} serverLoad.addRegionLoads(createRegionLoad(region)); {code} and createRegionLoad() does {code} ... .setReadRequestsCount((int) r.readRequestsCount.get()) .setWriteRequestsCount((int) r.writeRequestsCount.get()) {code} Are we planning to do anything different? Add per region request information to HServerLoad - Key: HBASE-3616 URL: https://issues.apache.org/jira/browse/HBASE-3616 Project: HBase Issue Type: Improvement Components: master Reporter: Ted Yu HBASE-3507 added per region request count. We should utilize this information so that HServerLoad can provide a moving average of request counts to the load balancer. We can update this method in HRegionServer: {code} private HServerLoad buildServerLoad() { {code} The above method can aggregate request counts from HRegions and store them in HServerLoad.RegionLoad -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
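The "moving average of request counts" the issue asks for could be computed from the per-region counts already reported in RegionLoad. The following is only a sketch of one possible approach, an exponentially weighted moving average updated once per load report; the class is hypothetical and not part of HBase.

```java
public class RequestRateTracker {
    // Hypothetical tracker for the moving-average idea in HBASE-3616:
    // each call to record() folds one load-report's request count into an
    // exponentially weighted moving average.
    private final double alpha;   // smoothing factor in (0, 1]
    private double average;
    private boolean seeded;

    public RequestRateTracker(double alpha) {
        this.alpha = alpha;
    }

    public void record(long requestCount) {
        if (!seeded) {
            average = requestCount;   // first sample seeds the average
            seeded = true;
        } else {
            average = alpha * requestCount + (1 - alpha) * average;
        }
    }

    public double movingAverage() {
        return average;
    }

    public static void main(String[] args) {
        RequestRateTracker t = new RequestRateTracker(0.5);
        t.record(100);
        t.record(200);
        System.out.println(t.movingAverage()); // 150.0
    }
}
```

A load balancer consuming this value would see spikes smoothed out, which is the usual motivation for averaging rather than using the latest raw count.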
[jira] [Commented] (HBASE-5024) A thread named LruBlockCache.EvictionThread remains after the shutdown of a cluster
[ https://issues.apache.org/jira/browse/HBASE-5024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13667695#comment-13667695 ] samar commented on HBASE-5024: -- Hope I got it right. In class HRegionServer: {code} if (cacheConfig.isBlockCacheEnabled()) { cacheConfig.getBlockCache().shutdown(); } {code} LruBlockCache.shutdown() has {code} ... this.evictionThread.shutdown(); {code} which should shut down the eviction thread. A thread named LruBlockCache.EvictionThread remains after the shutdown of a cluster --- Key: HBASE-5024 URL: https://issues.apache.org/jira/browse/HBASE-5024 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Reporter: Nicolas Liochon Priority: Minor There is no cleanup function in hbase.io.hfile.CacheConfig. The cache is a singleton, shared by all clusters if we launch more than one cluster in a test. Related code is: {noformat} /** * Static reference to the block cache, or null if no caching should be used * at all. */ private static BlockCache globalBlockCache; /** Boolean whether we have disabled the block cache entirely. */ private static boolean blockCacheDisabled = false; /** * Returns the block cache or <code>null</code> in case none should be used. * * @param conf The current configuration. * @return The block cache or <code>null</code>. */ private static synchronized BlockCache instantiateBlockCache(){ // initiate globalBlockCache {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8570) CompactSplitThread logs a CompactSplitThread$CompactionRunner but it does not have a toString
[ https://issues.apache.org/jira/browse/HBASE-8570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-8570: - Status: Patch Available (was: Open) CompactSplitThread logs a CompactSplitThread$CompactionRunner but it does not have a toString - Key: HBASE-8570 URL: https://issues.apache.org/jira/browse/HBASE-8570 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.98.0 Reporter: Nicolas Liochon Assignee: samar Priority: Trivial Attachments: HBASE-8570_1.patch 2013-05-17 13:51:35,664 ERROR [regionserver60020-smallCompactions-1368795988827] org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction failed org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner@1a7abea0 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7278) Some bugs of HTableDesciptor
[ https://issues.apache.org/jira/browse/HBASE-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13666100#comment-13666100 ] samar commented on HBASE-7278: -- Can we create a separate JIRA to remove setName(byte[] name)? As for isLegalTableName(), since it returns exactly what is passed in, we could rename it to something like void validateTableName(), where we would not expect anything in return but would perform all the checks as already implemented. Some bugs of HTableDesciptor Key: HBASE-7278 URL: https://issues.apache.org/jira/browse/HBASE-7278 Project: HBase Issue Type: Bug Reporter: Hiroshi Ikeda Priority: Minor There are some bugs in the class HTableDescriptor. {code} public HTableDescriptor(final byte [] name) { super(); setMetaFlags(this.name); this.name = this.isMetaRegion()? name: isLegalTableName(name); this.nameAsString = Bytes.toString(this.name); } {code} I think setMetaFlags(this.name) should be setMetaFlags(name). {code} /** * Check passed byte buffer, tableName, is legal user-space table name. * @return Returns passed <code>tableName</code> param * @throws NullPointerException If passed <code>tableName</code> is null * @throws IllegalArgumentException if passed a tableName * that is made of other than 'word' characters or underscores: i.e. * <code>[a-zA-Z_0-9]</code>. */ public static byte [] isLegalTableName(final byte [] tableName) { if (tableName == null || tableName.length <= 0) { throw new IllegalArgumentException("Name is null or empty"); } {code} The implementation is against the contract of throwing NullPointerException. I'm not sure whether the contract is wrong or the implementation is wrong. Also the contract of throwing IllegalArgumentException is a little different from the actual implementation, and in general we must actually call this method and catch IllegalArgumentException in order to know whether the given name can be used as a table name. I feel HTableDescriptor allows itself to be in invalid states, and I cannot fix the class well. 
I think we should start to remove implementing WritableComparable, but it might greatly break the compatibility. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
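The rename samar proposes above, a void validateTableName() that performs the checks without returning the input, might look like the sketch below. The class and the exact check are hypothetical; the documented contract ('word' characters and underscores, i.e. [a-zA-Z_0-9]) is taken from the javadoc quoted in the issue.

```java
public class TableNameValidator {
    // Sketch of the proposed void validateTableName(): same checks as
    // isLegalTableName(), but nothing is returned, so the method cannot be
    // mistaken for a getter. Not the actual HTableDescriptor API.
    public static void validateTableName(byte[] tableName) {
        if (tableName == null || tableName.length == 0) {
            throw new IllegalArgumentException("Name is null or empty");
        }
        for (byte b : tableName) {
            char c = (char) b;
            boolean word = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
                        || (c >= '0' && c <= '9') || c == '_';
            if (!word) {
                throw new IllegalArgumentException(
                    "Illegal character <" + c + "> in table name");
            }
        }
    }

    public static void main(String[] args) {
        validateTableName("my_table_1".getBytes()); // passes silently
        System.out.println("ok");
    }
}
```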
[jira] [Updated] (HBASE-8570) CompactSplitThread logs a CompactSplitThread$CompactionRunner but it does not have a toString
[ https://issues.apache.org/jira/browse/HBASE-8570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-8570: - Attachment: HBASE-8570_1.patch toString overridden; used getRequest() because CompactionRequest#toString has a lot of details about the compaction. CompactSplitThread logs a CompactSplitThread$CompactionRunner but it does not have a toString - Key: HBASE-8570 URL: https://issues.apache.org/jira/browse/HBASE-8570 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.98.0 Reporter: Nicolas Liochon Priority: Trivial Attachments: HBASE-8570_1.patch 2013-05-17 13:51:35,664 ERROR [regionserver60020-smallCompactions-1368795988827] org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction failed org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner@1a7abea0 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
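The fix described in the update above, overriding toString to delegate to the compaction request, follows a common pattern. A minimal sketch with stand-in classes (not the real CompactionRunner/CompactionRequest):

```java
public class ToStringDemo {
    // Hypothetical stand-in for CompactionRequest, whose toString already
    // carries the useful details.
    static class Request {
        @Override public String toString() {
            return "regionName=r1, storeName=f, fileCount=3, fileSize=12m";
        }
    }

    // Hypothetical stand-in for CompactionRunner. Without the toString
    // override, logging the runner prints Runner@<hashcode>, which is what
    // the ERROR line in the issue shows.
    static class Runner implements Runnable {
        private final Request request;
        Runner(Request request) { this.request = request; }
        @Override public void run() { /* perform the compaction */ }
        @Override public String toString() { return request.toString(); }
    }

    public static void main(String[] args) {
        System.out.println(new Runner(new Request()));
    }
}
```

Delegating to the request keeps the log line informative without duplicating the request's formatting in the runner.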
[jira] [Updated] (HBASE-8336) PooledHTable may be returned multiple times to the same pool
[ https://issues.apache.org/jira/browse/HBASE-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-8336: - Attachment: HBASE-8336_2.patch review comments incorporated PooledHTable may be returned multiple times to the same pool Key: HBASE-8336 URL: https://issues.apache.org/jira/browse/HBASE-8336 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.95.0 Reporter: Nikolai Grigoriev Priority: Minor Attachments: HBASE-8336_1.patch, HBASE-8336_2.patch I have recently observed a very strange issue with an application using HBase and HTablePool. After an investigation I have found that the root cause was the piece of code that was calling close() twice on the same HTableInterface instance retrieved from HTablePool (created with default policy). A closer look at the code revealed that PooledHTable.close() calls returnTable(), which, in turn, places the table back into the QUEUE of the pooled tables. No checking of any kind is done so it is possible to call it multiple times and place multiple references to the same HTable into the same pool. This creates a number of negative effects: - pool grows on each close() call and eventually gets filled up with the references to the same HTable. From this moment the pool stops working as pool. - multiple callers will get the same instance of HTable while expecting to have unique instances - once the pool is full, next call to close() will result to the call to the real close() method of HTable. This will make HTable unusable as close() call may shutdown() the internal thread pool. From this moment other attempts to use this HTable will fail with RejectedExecutionException. And since the HTablePool will have additional references to that HTable, other users of the pool will just start failing on any call that leads to flushCommits() The problem was, obviously, triggered by bad code on our side. But I think the pool has to be protected. 
Probably the best way to fix it would be to implement a flag in PooledHTable that represent its state (leased/returned) and once close() is called, it would be returned. From this moment any operations on this PooledHTable would result in something like IllegalStateException. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
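The state-flag fix proposed above, a leased/returned flag so a second close() cannot place a duplicate reference into the pool, can be sketched with a simplified, hypothetical pool handle (not the real PooledHTable):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PooledHandleDemo {
    // Sketch of the proposed fix: once close() returns the handle to the
    // pool, the flag flips and any further close() fails fast instead of
    // inserting a second reference to the same object.
    static class PooledHandle {
        private final Deque<PooledHandle> pool;
        private boolean open = true;

        PooledHandle(Deque<PooledHandle> pool) { this.pool = pool; }

        void close() {
            if (!open) {
                throw new IllegalStateException("Handle already returned to pool");
            }
            open = false;
            pool.addLast(this);
        }
    }

    public static void main(String[] args) {
        Deque<PooledHandle> pool = new ArrayDeque<>();
        PooledHandle h = new PooledHandle(pool);
        h.close();              // first close returns the handle to the pool
        try {
            h.close();          // second close is rejected, pool stays at 1
        } catch (IllegalStateException e) {
            System.out.println("rejected: pool size = " + pool.size());
        }
    }
}
```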
[jira] [Updated] (HBASE-8336) PooledHTable may be returned multiple times to the same pool
[ https://issues.apache.org/jira/browse/HBASE-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-8336: - Attachment: HBASE-8336_1.patch added a flag to mark the PooledHTable open/close PooledHTable may be returned multiple times to the same pool Key: HBASE-8336 URL: https://issues.apache.org/jira/browse/HBASE-8336 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.95.0 Reporter: Nikolai Grigoriev Priority: Minor Attachments: HBASE-8336_1.patch I have recently observed a very strange issue with an application using HBase and HTablePool. After an investigation I have found that the root cause was the piece of code that was calling close() twice on the same HTableInterface instance retrieved from HTablePool (created with default policy). A closer look at the code revealed that PooledHTable.close() calls returnTable(), which, in turn, places the table back into the QUEUE of the pooled tables. No checking of any kind is done so it is possible to call it multiple times and place multiple references to the same HTable into the same pool. This creates a number of negative effects: - pool grows on each close() call and eventually gets filled up with the references to the same HTable. From this moment the pool stops working as pool. - multiple callers will get the same instance of HTable while expecting to have unique instances - once the pool is full, next call to close() will result to the call to the real close() method of HTable. This will make HTable unusable as close() call may shutdown() the internal thread pool. From this moment other attempts to use this HTable will fail with RejectedExecutionException. And since the HTablePool will have additional references to that HTable, other users of the pool will just start failing on any call that leads to flushCommits() The problem was, obviously, triggered by bad code on our side. But I think the pool has to be protected. 
Probably the best way to fix it would be to implement a flag in PooledHTable that represent its state (leased/returned) and once close() is called, it would be returned. From this moment any operations on this PooledHTable would result in something like IllegalStateException. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8336) PooledHTable may be returned multiple times to the same pool
[ https://issues.apache.org/jira/browse/HBASE-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-8336: - Status: Patch Available (was: Open) PooledHTable may be returned multiple times to the same pool Key: HBASE-8336 URL: https://issues.apache.org/jira/browse/HBASE-8336 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.95.0 Reporter: Nikolai Grigoriev Priority: Minor Attachments: HBASE-8336_1.patch I have recently observed a very strange issue with an application using HBase and HTablePool. After an investigation I have found that the root cause was the piece of code that was calling close() twice on the same HTableInterface instance retrieved from HTablePool (created with default policy). A closer look at the code revealed that PooledHTable.close() calls returnTable(), which, in turn, places the table back into the QUEUE of the pooled tables. No checking of any kind is done so it is possible to call it multiple times and place multiple references to the same HTable into the same pool. This creates a number of negative effects: - pool grows on each close() call and eventually gets filled up with the references to the same HTable. From this moment the pool stops working as pool. - multiple callers will get the same instance of HTable while expecting to have unique instances - once the pool is full, next call to close() will result to the call to the real close() method of HTable. This will make HTable unusable as close() call may shutdown() the internal thread pool. From this moment other attempts to use this HTable will fail with RejectedExecutionException. And since the HTablePool will have additional references to that HTable, other users of the pool will just start failing on any call that leads to flushCommits() The problem was, obviously, triggered by bad code on our side. But I think the pool has to be protected. 
Probably the best way to fix it would be to implement a flag in PooledHTable that represent its state (leased/returned) and once close() is called, it would be returned. From this moment any operations on this PooledHTable would result in something like IllegalStateException. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting
[ https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13660509#comment-13660509 ] samar commented on HBASE-8518: -- The test testRecoveredEditsReplayCompaction is centered around the hbase.hstore.compaction.complete setting. First it sets the flag so that the compaction runs but does not complete, then it manually completes the compaction by moving the files from tmp and calling writeCompactionWalRecord. With the setting gone there would be no files left in tmp, and the compaction would already have been written to the WAL. So with hbase.hstore.compaction.complete gone, would it still make sense to have the test? The only way to keep it would be to let the compaction run but not let its completion happen. Get rid of hbase.hstore.compaction.complete setting --- Key: HBASE-8518 URL: https://issues.apache.org/jira/browse/HBASE-8518 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Priority: Minor Labels: noob Attachments: HBASE-8518-1.patch hbase.hstore.compaction.complete is a strange setting that causes the finished compaction to not complete (files are just left in tmp) in HStore. It's used by one test. The setting with the same name is also used by CompactionTool, but that usage is semi-unrelated and could probably be removed easily. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting
[ https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13660511#comment-13660511 ] samar commented on HBASE-8518: -- My last statement was not clear. I meant to say that we can keep the test case if we can allow the compaction to happen but force its completion to fail. Suggestions, please. Get rid of hbase.hstore.compaction.complete setting --- Key: HBASE-8518 URL: https://issues.apache.org/jira/browse/HBASE-8518 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Priority: Minor Labels: noob Attachments: HBASE-8518-1.patch hbase.hstore.compaction.complete is a strange setting that causes the finished compaction to not complete (files are just left in tmp) in HStore. It's used by one test. The setting with the same name is also used by CompactionTool, but that usage is semi-unrelated and could probably be removed easily. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8336) PooledHTable may be returned multiple times to the same pool
[ https://issues.apache.org/jira/browse/HBASE-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13660584#comment-13660584 ] samar commented on HBASE-8336: -- Would it be OK if we removed the HTable from the pool while it is in use (on getTable) and added it back on PooledHTable#close()? This way no two callers can use the same HTable, and the pool would never hold the same instance twice. On return we can simply call tableFactory.releaseHTableInterface if the pool is already bigger than the pool size, or add the table back to the pool otherwise. Looks too simple, so please point out anything I missed. PooledHTable may be returned multiple times to the same pool Key: HBASE-8336 URL: https://issues.apache.org/jira/browse/HBASE-8336 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.95.0 Reporter: Nikolai Grigoriev Priority: Minor I have recently observed a very strange issue with an application using HBase and HTablePool. After an investigation I have found that the root cause was the piece of code that was calling close() twice on the same HTableInterface instance retrieved from HTablePool (created with default policy). A closer look at the code revealed that PooledHTable.close() calls returnTable(), which, in turn, places the table back into the QUEUE of the pooled tables. No checking of any kind is done so it is possible to call it multiple times and place multiple references to the same HTable into the same pool. This creates a number of negative effects: - pool grows on each close() call and eventually gets filled up with the references to the same HTable. From this moment the pool stops working as pool. - multiple callers will get the same instance of HTable while expecting to have unique instances - once the pool is full, next call to close() will result to the call to the real close() method of HTable. This will make HTable unusable as close() call may shutdown() the internal thread pool. 
From this moment other attempts to use this HTable will fail with RejectedExecutionException. And since the HTablePool will have additional references to that HTable, other users of the pool will just start failing on any call that leads to flushCommits() The problem was, obviously, triggered by bad code on our side. But I think the pool has to be protected. Probably the best way to fix it would be to implement a flag in PooledHTable that represent its state (leased/returned) and once close() is called, it would be returned. From this moment any operations on this PooledHTable would result in something like IllegalStateException. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting
[ https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-8518: - Status: Patch Available (was: Open) Get rid of hbase.hstore.compaction.complete setting --- Key: HBASE-8518 URL: https://issues.apache.org/jira/browse/HBASE-8518 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Priority: Minor Labels: noob hbase.hstore.compaction.complete is a strange setting that causes the finished compaction to not complete (files are just left in tmp) in HStore. It's used by one test. The setting with the same name is also used by CompactionTool, but that usage is semi-unrelated and could probably be removed easily. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting
[ https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-8518: - Attachment: HBASE-8518-1.patch Version 1: Removed the setting and the test case, since the test exercised recovery from the tmp directory, which would not have any files anymore. Get rid of hbase.hstore.compaction.complete setting --- Key: HBASE-8518 URL: https://issues.apache.org/jira/browse/HBASE-8518 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Priority: Minor Labels: noob Attachments: HBASE-8518-1.patch hbase.hstore.compaction.complete is a strange setting that causes the finished compaction to not complete (files are just left in tmp) in HStore. It's used by one test. The setting with the same name is also used by CompactionTool, but that usage is semi-unrelated and could probably be removed easily. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting
[ https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13659415#comment-13659415 ] samar commented on HBASE-8518: -- TestMasterShutdown test failure may not be related to the current patch Get rid of hbase.hstore.compaction.complete setting --- Key: HBASE-8518 URL: https://issues.apache.org/jira/browse/HBASE-8518 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Priority: Minor Labels: noob Attachments: HBASE-8518-1.patch hbase.hstore.compaction.complete is a strange setting that causes the finished compaction to not complete (files are just left in tmp) in HStore. It's used by one test. The setting with the same name is also used by CompactionTool, but that usage is semi-unrelated and could probably be removed easily. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting
[ https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658251#comment-13658251 ] samar commented on HBASE-8518: -- Looks like a flag which allows compacted files to be created but not used. Maybe for someone who wants to see the time/size of a compaction without affecting the stores. Does not seem very useful. Get rid of hbase.hstore.compaction.complete setting --- Key: HBASE-8518 URL: https://issues.apache.org/jira/browse/HBASE-8518 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Priority: Minor Labels: noob hbase.hstore.compaction.complete is a strange setting that causes the finished compaction to not complete (files are just left in tmp) in HStore. It's used by one test. The setting with the same name is also used by CompactionTool, but that usage is semi-unrelated and could probably be removed easily. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7402) java.io.IOException: Got error in response to OP_READ_BLOCK
samar created HBASE-7402: Summary: java.io.IOException: Got error in response to OP_READ_BLOCK Key: HBASE-7402 URL: https://issues.apache.org/jira/browse/HBASE-7402 Project: HBase Issue Type: Bug Components: HFile Affects Versions: 0.94.0, 0.90.4 Reporter: samar Getting this error on our hbase version 0.90.4-cdh3u3 2012-12-18 02:35:39,082 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x:x for file /hbase/table_x/37bea13d03ed9fa611941cc4aad6e8c2/scores/7355825801969613604 for block 3174705353677971357:java.io.IOException: Got error in response to OP_READ_BLOCK self=/x.x.x.x, remote=/x.x.x.x: for file /hbase/table_x/37bea13d03ed9fa611941cc4aad6e8c2/scores/7355825801969613604 for block 3174705353677971357_1028665 at org.apache.hadoop.hdfs.DFSClient$RemoteBlockReader.newBlockReader(DFSClient.java:1673) at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.getBlockReader(DFSClient.java:2383) at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2272) at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2438) at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46) at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101) at java.io.BufferedInputStream.read1(BufferedInputStream.java:256) at java.io.BufferedInputStream.read(BufferedInputStream.java:317) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141) at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1094) at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:1036) at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1446) at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1303) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:136) at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:96) at 
org.apache.hadoop.hbase.regionserver.StoreScanner.&lt;init&gt;(StoreScanner.java:77) at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1405) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.&lt;init&gt;(HRegion.java:2467) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1192) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1184) at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1168) at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3215) This causes the HBase RS to hang and hence stop responding. In the NameNode log the block was deleted before (as per the timestamps): 2012-12-18 02:25:19,027 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask x.x.x.x:x to delete blk_3174705353677971357_1028665 blk_-9072685530813588257_1028824 2012-12-18 02:25:19,027 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask x.x.x.x:x to delete blk_5651962510569886604_1028711 2012-12-18 02:25:22,027 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask x.x.x.x:x to delete blk_3174705353677971357_1028665 Looks like org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream is caching the block location and causing this issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira