[jira] [Commented] (HBASE-5349) Automagically tweak global memstore and block cache sizes based on workload
[ https://issues.apache.org/jira/browse/HBASE-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761642#comment-13761642 ] Anoop Sam John commented on HBASE-5349: --- Stack, Ted, thanks for the review. Working on the comments and also adding tests. Will update the patch soon. Automagically tweak global memstore and block cache sizes based on workload --- Key: HBASE-5349 URL: https://issues.apache.org/jira/browse/HBASE-5349 Project: HBase Issue Type: Improvement Affects Versions: 0.92.0 Reporter: Jean-Daniel Cryans Assignee: Anoop Sam John Attachments: WIP_HBASE-5349.patch Hypertable does a neat thing where it changes the size given to the CellCache (our MemStores) and Block Cache based on the workload. If you need an image, scroll down to the bottom of this link: http://www.hypertable.com/documentation/architecture/ That'd be one less thing to configure. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
[ https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9436: --- Status: Patch Available (was: Open) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one - Key: HBASE-9436 URL: https://issues.apache.org/jira/browse/HBASE-9436 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Attachments: 9436.v1.patch, 9436.v2.patch

Below is what we have today. I vote for 10.

configuration.xml: "The default of 10 is rather low"

common/hbase-site:
  <name>hbase.regionserver.handler.count</name>
  <value>30</value>

server/hbase-site:
  <value>5</value>
  <description>Count of RPC Server instances spun up on RegionServers. Same property is used by the HMaster for count of master handlers. Default is 10.</description>

===

HMaster.java:
  int numHandlers = conf.getInt("hbase.master.handler.count",
      conf.getInt("hbase.regionserver.handler.count", 25));

HRegionServer.java:
  "hbase.regionserver.handler.count": conf.getInt("hbase.regionserver.handler.count", 10),
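The nested-default lookup quoted above can be illustrated with a small stand-alone sketch. The `SimpleConf` class below is a hypothetical stand-in for Hadoop's `Configuration`, kept here only to make the logic runnable in isolation; the property names and hard-coded defaults are the ones quoted from HMaster.java and HRegionServer.java:

```java
import java.util.HashMap;
import java.util.Map;

public class HandlerCountDefaults {
    // Hypothetical stand-in for org.apache.hadoop.conf.Configuration.
    static class SimpleConf {
        private final Map<String, String> props = new HashMap<>();
        void set(String key, String value) { props.put(key, value); }
        int getInt(String key, int defaultValue) {
            String v = props.get(key);
            return v == null ? defaultValue : Integer.parseInt(v);
        }
    }

    public static void main(String[] args) {
        SimpleConf conf = new SimpleConf();

        // With nothing configured: the master falls back through the
        // regionserver key to its hard-coded 25, while the regionserver
        // code falls back to 10 -- two different effective defaults.
        int masterHandlers = conf.getInt("hbase.master.handler.count",
                conf.getInt("hbase.regionserver.handler.count", 25));
        int rsHandlers = conf.getInt("hbase.regionserver.handler.count", 10);
        System.out.println(masterHandlers + " " + rsHandlers); // prints "25 10"

        // Setting the shared regionserver key aligns both sides.
        conf.set("hbase.regionserver.handler.count", "30");
        masterHandlers = conf.getInt("hbase.master.handler.count",
                conf.getInt("hbase.regionserver.handler.count", 25));
        rsHandlers = conf.getInt("hbase.regionserver.handler.count", 10);
        System.out.println(masterHandlers + " " + rsHandlers); // prints "30 30"
    }
}
```

Whatever value wins the vote, aligning the hard-coded fallbacks in HMaster.java, HRegionServer.java, and hbase-default.xml on a single number removes the ambiguity the ticket describes.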
[jira] [Updated] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
[ https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9436: --- Attachment: 9436.v2.patch hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one - Key: HBASE-9436 URL: https://issues.apache.org/jira/browse/HBASE-9436 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Attachments: 9436.v1.patch, 9436.v2.patch
[jira] [Updated] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
[ https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9436: --- Status: Open (was: Patch Available) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one - Key: HBASE-9436 URL: https://issues.apache.org/jira/browse/HBASE-9436 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Attachments: 9436.v1.patch, 9436.v2.patch
[jira] [Created] (HBASE-9463) Fix comments around alter tables
Nicolas Liochon created HBASE-9463: -- Summary: Fix comments around alter tables Key: HBASE-9463 URL: https://issues.apache.org/jira/browse/HBASE-9463 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Some are outdated.
[jira] [Updated] (HBASE-9463) Fix comments around alter tables
[ https://issues.apache.org/jira/browse/HBASE-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9463: --- Attachment: 9463.v1.patch Fix comments around alter tables Key: HBASE-9463 URL: https://issues.apache.org/jira/browse/HBASE-9463 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9463.v1.patch Some are outdated.
[jira] [Updated] (HBASE-9463) Fix comments around alter tables
[ https://issues.apache.org/jira/browse/HBASE-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9463: --- Status: Patch Available (was: Open) Fix comments around alter tables Key: HBASE-9463 URL: https://issues.apache.org/jira/browse/HBASE-9463 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9463.v1.patch Some are outdated.
[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase
[ https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761704#comment-13761704 ] haosdent commented on HBASE-5954: - "my tests above were run with write barriers enabled and data=ordered." [~lhofhansl] It seems very interesting. Did you use RAID? Allow proper fsync support for HBase Key: HBASE-5954 URL: https://issues.apache.org/jira/browse/HBASE-5954 Project: HBase Issue Type: Improvement Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Critical Fix For: 0.98.0 Attachments: 5954-trunk-hdfs-trunk.txt, 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 5954-trunk-hdfs-trunk-v6.txt, hbase-hdfs-744.txt At least get recommendation into 0.96 doc and some numbers running w/ this hdfs feature enabled.
[jira] [Updated] (HBASE-9347) Support for adding filters for client requests
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vandana Ayyalasomayajula updated HBASE-9347: Attachment: HBASE-9347_trunk.00.trunk Support for adding filters for client requests -- Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration.
[jira] [Updated] (HBASE-9347) Support for adding filters for client requests
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vandana Ayyalasomayajula updated HBASE-9347: Attachment: (was: HBASE-9347_trunk.00.trunk) Support for adding filters for client requests -- Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration.
[jira] [Updated] (HBASE-9347) Support for adding filters for client requests
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vandana Ayyalasomayajula updated HBASE-9347: Attachment: HBASE-9347_trunk.00.patch Patch for trunk. Support for adding filters for client requests -- Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration.
[jira] [Updated] (HBASE-7590) Add a costless notifications mechanism from master to regionservers clients
[ https://issues.apache.org/jira/browse/HBASE-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-7590: --- Release Note:

This allows setting up a multicast connection between the master and the HBase clients. With the feature on, when a regionserver is marked as dead by the master, the master also sends a multicast message that makes the HBase clients disconnect immediately from the dead server instead of waiting for a socket timeout. Specifically, this allows setting hbase.rpc.timeout to larger values (like 5 minutes) without impacting the MTTR: without this, even if the dead regionserver's data is now available on another server, the client stays on the dead one, waiting for an answer that will never come. It's a multicast message, hence cheap and scalable, but unreliable. For this reason, the master sends the information 5 times, to allow the HBase client to miss a message. This feature is NOT activated by default. To activate it, add to your hbase-site.xml:

<property>
  <name>hbase.status.published</name>
  <value>true</value>
</property>

You can also configure the IP address and port used with the following settings:

<property>
  <name>hbase.status.multicast.address.ip</name>
  <value>226.1.1.3</value>
</property>
<property>
  <name>hbase.status.multicast.address.port</name>
  <value>6100</value>
</property>

was: (same release note, except that activation was configured as follows:)

<property>
  <name>hbase.status.publisher.class</name>
  <value>org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher</value>
</property>
<property>
  <name>hbase.status.listener.class</name>
  <value>org.apache.hadoop.hbase.client.ClusterStatusListener$MultiCastListener</value>
</property>

Add a costless notifications mechanism from master to regionservers clients - Key: HBASE-7590 URL: https://issues.apache.org/jira/browse/HBASE-7590 Project: HBase Issue Type: Bug Components: Client, master, regionserver Affects Versions: 0.95.2 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.98.0, 0.95.0 Attachments: 7590.inprogress.patch, 7590.v12.patch, 7590.v12.patch, 7590.v13.patch, 7590.v1.patch, 7590.v1-rebased.patch, 7590.v2.patch, 7590.v3.patch, 7590.v5.patch, 7590.v5.patch

It would be very useful to add a mechanism to distribute some information to the clients and regionservers. Especially, it would be useful to know globally (regionservers + client apps) that some regionservers are dead. This would allow:
- lowering the load on the system, without clients using stale information and going to dead machines
- making the recovery faster from a client point of view. It's common to use large timeouts on the client side, so the client may need a lot of time before declaring a region server dead and trying another one. If the client receives the information separately about a region server's state, it can take the right decision, and continue/stop waiting accordingly.

We can also send more information, for example instructions like 'slow down' to instruct the client to increase the retry delay and so on. Technically, the master could send this information. To lower the load on the system, we should:
- have a multicast communication (i.e. the master does not have to connect to all servers by tcp), with one packet every 10 seconds or so
- not make receivers depend on this: if the information is available, great; if not, it should not break anything
- make it optional

So at the end we would have a thread in the master sending a protobuf message about the dead servers on a multicast socket. If the socket is not configured, it does not do anything. On the client side, when we receive the information that a node is dead, we refresh the cache about it.
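A minimal sketch of the publisher side described above, using only `java.net`. The comma-separated string payload is a deliberate stand-in for the real protobuf ClusterStatus message, and the class and method names here are hypothetical; only the default group address (226.1.1.3), port (6100), and 5-times resend come from this issue:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class DeadServerPublisherSketch {
    // Defaults taken from the HBASE-7590 release note.
    static final String MULTICAST_IP = "226.1.1.3";
    static final int MULTICAST_PORT = 6100;
    // The message is unreliable, so a real publisher resends it this many times.
    static final int REPEAT = 5;

    // Build the datagram a master thread would put on the multicast group.
    // The payload format here is a placeholder for the real protobuf message.
    static DatagramPacket buildPacket(List<String> deadServers) throws Exception {
        byte[] payload = String.join(",", deadServers).getBytes(StandardCharsets.UTF_8);
        InetAddress group = InetAddress.getByName(MULTICAST_IP);
        return new DatagramPacket(payload, payload.length, group, MULTICAST_PORT);
    }

    public static void main(String[] args) throws Exception {
        DatagramPacket p = buildPacket(
                Arrays.asList("rs1.example.com,60020,1378700000000"));
        // A real publisher would open a java.net.MulticastSocket and send p
        // REPEAT times; a client listening on the group evicts the dead
        // server from its cache on receipt instead of waiting out
        // hbase.rpc.timeout.
        System.out.println(p.getPort() + " " + p.getLength());
    }
}
```

Sending is one-way and connectionless, which is what makes the mechanism "costless": the master never opens a TCP connection per client, and clients that miss all five copies simply fall back to the existing timeout path.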
[jira] [Updated] (HBASE-9452) Simplify the configuration of the multicast notifier
[ https://issues.apache.org/jira/browse/HBASE-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9452: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks for the review, JD! Simplify the configuration of the multicast notifier Key: HBASE-9452 URL: https://issues.apache.org/jira/browse/HBASE-9452 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9452.v1.patch As JD pointed out, we are not consistent in the naming. Also, it could be simpler to make it run. Patch is for trunk, but I would like to put it in the next 0.96 RC as well.
[jira] [Commented] (HBASE-7590) Add a costless notifications mechanism from master to regionservers clients
[ https://issues.apache.org/jira/browse/HBASE-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761712#comment-13761712 ] Nicolas Liochon commented on HBASE-7590: Inconsistency fixed in HBASE-9452, release note updated. Add a costless notifications mechanism from master to regionservers clients - Key: HBASE-7590 URL: https://issues.apache.org/jira/browse/HBASE-7590 Project: HBase Issue Type: Bug Components: Client, master, regionserver Affects Versions: 0.95.2 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Fix For: 0.98.0, 0.95.0 Attachments: 7590.inprogress.patch, 7590.v12.patch, 7590.v12.patch, 7590.v13.patch, 7590.v1.patch, 7590.v1-rebased.patch, 7590.v2.patch, 7590.v3.patch, 7590.v5.patch, 7590.v5.patch
[jira] [Commented] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
[ https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761718#comment-13761718 ] Hadoop QA commented on HBASE-9436: --
{color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602104/9436.v2.patch against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7088//console
This message is automatically generated.
hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one - Key: HBASE-9436 URL: https://issues.apache.org/jira/browse/HBASE-9436 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Attachments: 9436.v1.patch, 9436.v2.patch Below is what we have today. I vote for 10.
configuration.xml: "The default of 10 is rather low"

common/hbase-site:
  <name>hbase.regionserver.handler.count</name>
  <value>30</value>

server/hbase-site:
  <value>5</value>
  <description>Count of RPC Server instances spun up on RegionServers. Same property is used by the HMaster for count of master handlers. Default is 10.</description>

===

HMaster.java:
  int numHandlers = conf.getInt("hbase.master.handler.count",
      conf.getInt("hbase.regionserver.handler.count", 25));

HRegionServer.java:
  "hbase.regionserver.handler.count": conf.getInt("hbase.regionserver.handler.count", 10),
[jira] [Commented] (HBASE-9463) Fix comments around alter tables
[ https://issues.apache.org/jira/browse/HBASE-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761722#comment-13761722 ] Hadoop QA commented on HBASE-9463: --
{color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602107/9463.v1.patch against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7089//console
This message is automatically generated.
Fix comments around alter tables Key: HBASE-9463 URL: https://issues.apache.org/jira/browse/HBASE-9463 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9463.v1.patch Some are outdated.
[jira] [Commented] (HBASE-9347) Support for adding filters for client requests
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761746#comment-13761746 ] Hadoop QA commented on HBASE-9347: --
{color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602110/HBASE-9347_trunk.00.patch against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 7 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7090//console
This message is automatically generated.
Support for adding filters for client requests -- Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration.
[jira] [Commented] (HBASE-9452) Simplify the configuration of the multicast notifier
[ https://issues.apache.org/jira/browse/HBASE-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761778#comment-13761778 ] Hudson commented on HBASE-9452: --- SUCCESS: Integrated in hbase-0.96 #23 (See [https://builds.apache.org/job/hbase-0.96/23/]) HBASE-9452 Simplify the configuration of the multicast notifier (nkeywal: rev 1521000)
* /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java
* /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* /hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* /hbase/branches/0.96/hbase-common/src/main/resources/hbase-default.xml
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java
* /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
Simplify the configuration of the multicast notifier Key: HBASE-9452 URL: https://issues.apache.org/jira/browse/HBASE-9452 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9452.v1.patch
[jira] [Created] (HBASE-9464) master failure during region-move can result in the region moved to a different RS rather than the destination one user specified
Feng Honghua created HBASE-9464: --- Summary: master failure during region-move can result in the region moved to a different RS rather than the destination one the user specified Key: HBASE-9464 URL: https://issues.apache.org/jira/browse/HBASE-9464 Project: HBase Issue Type: Bug Components: master Reporter: Feng Honghua Priority: Minor 1. user issues a region-move by specifying a destination RS 2. master finishes offlining the region 3. master fails before assigning it to the specified destination RS 4. new master assigns the region to a random RS, since it doesn't have the destination RS info
[jira] [Created] (HBASE-9465) HLog entries are not pushed to peer clusters serially when region-move or RS failure occurs in master cluster
Feng Honghua created HBASE-9465: --- Summary: HLog entries are not pushed to peer clusters serially when region-move or RS failure occurs in master cluster Key: HBASE-9465 URL: https://issues.apache.org/jira/browse/HBASE-9465 Project: HBase Issue Type: Bug Components: regionserver, Replication Reporter: Feng Honghua When a region-move or RS failure occurs in the master cluster, the hlog entries that were not pushed before the region-move or RS failure will be pushed by the original RS (for a region move) or by another RS which takes over the remaining hlog of the dead RS (for an RS failure), while the new entries for the same region(s) will be pushed by the RS which now serves the region(s); these sources push the hlog entries of the same region concurrently, without coordination. This can lead to data inconsistency between the master and peer clusters: 1. a put and then a delete are written to the master cluster 2. due to the region-move / RS failure, they are pushed to the peer cluster by different replication-source threads 3. if the delete is pushed to the peer cluster before the put, and a flush and major compaction occur in the peer cluster before the put arrives, the delete is collected while the put remains in the peer cluster In this scenario the put remains in the peer cluster, but in the master cluster the put is masked by the delete; hence data inconsistency between the master and peer clusters.
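The put/delete reordering hazard described above can be illustrated with a toy model (plain Java, not HBase code; the class and method names are invented for illustration, and "delete" here stands in for a delete marker that a later major compaction has already collected):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the ordering hazard: if the delete for a row reaches the
// peer before the put, and a flush + major compaction collects the delete
// marker in between, the late-arriving put survives on the peer forever.
public class ReplicationOrderDemo {
    static List<String> peer = new ArrayList<>();

    static void apply(String op, String row) {
        if (op.equals("put")) {
            peer.add(row);
        } else {
            // delete marker masks the row; compaction then drops both
            peer.remove(row);
        }
    }

    public static void main(String[] args) {
        // Correct order: put then delete -> row absent on the peer.
        apply("put", "r1");
        apply("delete", "r1");
        System.out.println("in-order: " + peer.contains("r1"));   // false

        // Reordered: the delete arrives first (a no-op, later compacted
        // away), then the put arrives -> row present on the peer only.
        apply("delete", "r1");
        apply("put", "r1");
        System.out.println("reordered: " + peer.contains("r1"));  // true
    }
}
```

The master cluster ends with the put masked by the delete, while the peer keeps the put: the inconsistency the issue describes.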
[jira] [Commented] (HBASE-9452) Simplify the configuration of the multicast notifier
[ https://issues.apache.org/jira/browse/HBASE-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761806#comment-13761806 ] Hudson commented on HBASE-9452: --- SUCCESS: Integrated in hbase-0.96-hadoop2 #13 (See [https://builds.apache.org/job/hbase-0.96-hadoop2/13/]) HBASE-9452 Simplify the configuration of the multicast notifier (nkeywal: rev 1521000) * /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java * /hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java * /hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/branches/0.96/hbase-common/src/main/resources/hbase-default.xml * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java * /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java Simplify the configuration of the multicast notifier Key: HBASE-9452 URL: https://issues.apache.org/jira/browse/HBASE-9452 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9452.v1.patch As JD pointed out, we are not consistent in the naming. It could also be simpler to make it run. The patch is for trunk, but I would like to put it in the next 0.96 RC as well.
[jira] [Commented] (HBASE-9452) Simplify the configuration of the multicast notifier
[ https://issues.apache.org/jira/browse/HBASE-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761816#comment-13761816 ] Hudson commented on HBASE-9452: --- SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #718 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/718/]) HBASE-9452 Simplify the configuration of the multicast notifier (nkeywal: rev 1520999) * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java Simplify the configuration of the multicast notifier Key: HBASE-9452 URL: https://issues.apache.org/jira/browse/HBASE-9452 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9452.v1.patch As JD pointed out, we are not consistent in the naming. It could also be simpler to make it run. The patch is for trunk, but I would like to put it in the next 0.96 RC as well.
[jira] [Created] (HBASE-9466) Read-only mode
Feng Honghua created HBASE-9466: --- Summary: Read-only mode Key: HBASE-9466 URL: https://issues.apache.org/jira/browse/HBASE-9466 Project: HBase Issue Type: New Feature Reporter: Feng Honghua Priority: Minor Can we provide a read-only mode for a table? Writes to a table in read-only mode would be rejected, but read-only mode differs from disable in that: 1. it doesn't offline the regions of the table (hence it is much more lightweight than disable) 2. it can still serve read requests Comments?
[jira] [Commented] (HBASE-9466) Read-only mode
[ https://issues.apache.org/jira/browse/HBASE-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761821#comment-13761821 ] Jean-Marc Spaggiari commented on HBASE-9466: I guess this can be achieved using Kerberos or something like that? But it might still be a good idea to have something easier to use than Kerberos. Read-only mode -- Key: HBASE-9466 URL: https://issues.apache.org/jira/browse/HBASE-9466 Project: HBase Issue Type: New Feature Reporter: Feng Honghua Priority: Minor Can we provide a read-only mode for a table? Writes to a table in read-only mode would be rejected, but read-only mode differs from disable in that: 1. it doesn't offline the regions of the table (hence it is much more lightweight than disable) 2. it can still serve read requests Comments?
[jira] [Created] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region
Feng Honghua created HBASE-9467: --- Summary: write can be totally blocked temporarily by a write-heavy region Key: HBASE-9467 URL: https://issues.apache.org/jira/browse/HBASE-9467 Project: HBase Issue Type: Improvement Reporter: Feng Honghua Priority: Minor Writes to a region can be blocked temporarily if the memstore of that region reaches the threshold (hbase.hregion.memstore.block.multiplier * hbase.hregion.flush.size), until the memstore of that region is flushed. For a write-heavy region, if its write requests saturate all the handler threads of the RS when write blocking occurs for that region, requests for other regions/tables on that RS also can't be served, since no handler threads are available, until the pending writes of the write-heavy region are served after the flush is done. During this time period the RS can't serve any request for any table/region, just because of a single write-heavy region. That does not seem very reasonable. Maybe write requests for a region should only be served by a subset of the handler threads, so that write blocking on any single region can't lead to the scenario above? Comments?
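The "subset of handler threads" idea floated above could be sketched roughly as follows (a hypothetical throttle, not HBase code; the class and method names are invented for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Sketch: cap how many RPC handler threads any single region's writes may
// occupy, so a blocked write-heavy region cannot starve requests for every
// other region served by the same RS.
public class PerRegionWriteThrottle {
    private final int maxHandlersPerRegion;
    private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();

    public PerRegionWriteThrottle(int maxHandlersPerRegion) {
        this.maxHandlersPerRegion = maxHandlersPerRegion;
    }

    /** Returns true if a handler may service a write for this region now. */
    public boolean tryAcquire(String regionName) {
        return permits
            .computeIfAbsent(regionName, r -> new Semaphore(maxHandlersPerRegion))
            .tryAcquire();
    }

    /** Called by the handler once the write (or its rejection) completes. */
    public void release(String regionName) {
        permits.get(regionName).release();
    }
}
```

With a cap of, say, 2 out of 30 handlers per region, a region whose memstore is blocked can pin at most 2 handlers while the other 28 keep serving the rest of the tables.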
[jira] [Created] (HBASE-9468) Previous active master can still serve RPC requests while it is trying to recover an expired zk session
Feng Honghua created HBASE-9468: --- Summary: Previous active master can still serve RPC requests while it is trying to recover an expired zk session Key: HBASE-9468 URL: https://issues.apache.org/jira/browse/HBASE-9468 Project: HBase Issue Type: Bug Reporter: Feng Honghua When the active master's zk session expires, it will try to recover the zk session, but without turning off its RpcServer. What if a previous backup master has already become the now-active master, and some client sends a request to the expired master using cached master info? Any problem here?
[jira] [Created] (HBASE-9469) Synchronous replication
Feng Honghua created HBASE-9469: --- Summary: Synchronous replication Key: HBASE-9469 URL: https://issues.apache.org/jira/browse/HBASE-9469 Project: HBase Issue Type: New Feature Reporter: Feng Honghua Priority: Minor Scenario: clusters A and B with master-master replication; the client writes to cluster A and A pushes all writes to cluster B, and when cluster A is down the client switches its writes to cluster B. But this write switch is unsafe because the replication between A and B is asynchronous: a delete sent to cluster B which aims to delete an earlier put can fail to take effect, because that put was written to cluster A and was not successfully pushed to B before A went down. It is worse if this delete is collected (a flush and then a major compaction occur) before cluster A comes back up and the put is eventually pushed to B: the put will never be deleted. Can we provide per-table/per-peer synchronous replication which ships the corresponding hlog entry of a write before responding success to the client? This would guarantee the client that every write for which it got a success response on cluster A is already in cluster B as well.
[jira] [Commented] (HBASE-9334) Convert KeyValue to Cell in hbase-client module - Filters
[ https://issues.apache.org/jira/browse/HBASE-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761892#comment-13761892 ] Nicolas Liochon commented on HBASE-9334: I may be missing something obvious, but I understand that the goal of this change is to require a recompile but not a modification of the client app (Recompile of client apps likely needed after this change.). But with this change: bq. public KeyValue[] raw() { === public Cell[] raw() { A client which was calling 'raw' must now be changed to use 'Cell', no? Incidentally, it seems impossible to write a client that would work with 2 versions of HBase (i.e. modifying the client, but being able to compile the modified client with a previous version of HBase). I'm having this issue because I'm porting the ycsb benchmark, which depends on raw. Lastly, the audience parameter for 'Cell' is private, but as it appears in a public interface I think it should be public... Convert KeyValue to Cell in hbase-client module - Filters - Key: HBASE-9334 URL: https://issues.apache.org/jira/browse/HBASE-9334 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9334.patch, hbase-9334.v2.patch, hbase-9334.v3.patch, hbase-9334.v4.patch, hbase-9334.v6.patch The goal is to remove KeyValue from the publicly exposed API and require clients to use the cleaner, more encapsulated Cell API instead. For filters, this affects #filterKeyValue, #transform, #filterRow, and #getNextKeyHint. Since Cell is a base interface for KeyValue, changing these means that 0.94 apps may need a recompile but probably no modifications.
[jira] [Commented] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
[ https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761919#comment-13761919 ] Jean-Daniel Cryans commented on HBASE-9436: --- +1 hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one - Key: HBASE-9436 URL: https://issues.apache.org/jira/browse/HBASE-9436 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Attachments: 9436.v1.patch, 9436.v2.patch Below what we have today. I vote for 10. configuration.xml: "The default of 10 is rather low" common/hbase-site: <name>hbase.regionserver.handler.count</name> <value>30</value> server/hbase-site: <value>5</value> <description>Count of RPC Server instances spun up on RegionServers. Same property is used by the HMaster for count of master handlers. Default is 10.</description> === HMaster.java: int numHandlers = conf.getInt("hbase.master.handler.count", conf.getInt("hbase.regionserver.handler.count", 25)); HRegionServer.java: conf.getInt("hbase.regionserver.handler.count", 10)
[jira] [Updated] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
[ https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon updated HBASE-9436: --- Resolution: Fixed Fix Version/s: 0.96.0 0.98.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one - Key: HBASE-9436 URL: https://issues.apache.org/jira/browse/HBASE-9436 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9436.v1.patch, 9436.v2.patch Below what we have today. I vote for 10. configuration.xml: "The default of 10 is rather low" common/hbase-site: <name>hbase.regionserver.handler.count</name> <value>30</value> server/hbase-site: <value>5</value> <description>Count of RPC Server instances spun up on RegionServers. Same property is used by the HMaster for count of master handlers. Default is 10.</description> === HMaster.java: int numHandlers = conf.getInt("hbase.master.handler.count", conf.getInt("hbase.regionserver.handler.count", 25)); HRegionServer.java: conf.getInt("hbase.regionserver.handler.count", 10)
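For reference, operators who want a specific handler count regardless of the code default can pin the property in hbase-site.xml. A sketch (the value 30 here is illustrative, not necessarily the default this issue settled on):

```xml
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>30</value>
  <description>Count of RPC handler instances spun up on RegionServers.
    The HMaster reads the same property for its handler count unless
    hbase.master.handler.count is set explicitly.</description>
</property>
```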
[jira] [Created] (HBASE-9470) Change MR jobs to use Cell interface
Elliott Clark created HBASE-9470: Summary: Change MR jobs to use Cell interface Key: HBASE-9470 URL: https://issues.apache.org/jira/browse/HBASE-9470 Project: HBase Issue Type: Bug Components: mapreduce Affects Versions: 0.96.0 Reporter: Elliott Clark Priority: Critical Map reduce jobs/input formats currently use the KeyValue class. They should be using the Cell interface.
[jira] [Commented] (HBASE-9334) Convert KeyValue to Cell in hbase-client module - Filters
[ https://issues.apache.org/jira/browse/HBASE-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761966#comment-13761966 ] Nicolas Liochon commented on HBASE-9334: I suppose a reasonable option would be to deprecate 'KeyValue[] raw()' and to add 'Cell[] rawCells()' (with the same logic for all methods returning a KeyValue today). Convert KeyValue to Cell in hbase-client module - Filters - Key: HBASE-9334 URL: https://issues.apache.org/jira/browse/HBASE-9334 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9334.patch, hbase-9334.v2.patch, hbase-9334.v3.patch, hbase-9334.v4.patch, hbase-9334.v6.patch The goal is to remove KeyValue from the publicly exposed API and require clients to use the cleaner, more encapsulated Cell API instead. For filters, this affects #filterKeyValue, #transform, #filterRow, and #getNextKeyHint. Since Cell is a base interface for KeyValue, changing these means that 0.94 apps may need a recompile but probably no modifications.
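The deprecation path suggested here might look roughly like the following sketch (Cell, KeyValue and Result are minimal stand-in stubs, not the real HBase classes): keep the old KeyValue-returning method for source compatibility, mark it deprecated, and add a Cell-returning sibling for new code.

```java
// Minimal stand-in types for illustration only.
interface Cell {}

class KeyValue implements Cell {}

class Result {
    private final KeyValue[] kvs;

    Result(KeyValue[] kvs) { this.kvs = kvs; }

    /** @deprecated use {@link #rawCells()} instead. */
    @Deprecated
    public KeyValue[] raw() { return kvs; }

    /** New accessor exposing only the Cell interface. */
    public Cell[] rawCells() {
        return kvs;  // Java array covariance: KeyValue[] is a Cell[]
    }
}
```

Existing callers of raw() keep compiling (with a deprecation warning), while new code depends only on Cell.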
[jira] [Commented] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
[ https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761944#comment-13761944 ] Elliott Clark commented on HBASE-9436: -- +1 lgtm hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one - Key: HBASE-9436 URL: https://issues.apache.org/jira/browse/HBASE-9436 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Attachments: 9436.v1.patch, 9436.v2.patch Below what we have today. I vote for 10. configuration.xml: "The default of 10 is rather low" common/hbase-site: <name>hbase.regionserver.handler.count</name> <value>30</value> server/hbase-site: <value>5</value> <description>Count of RPC Server instances spun up on RegionServers. Same property is used by the HMaster for count of master handlers. Default is 10.</description> === HMaster.java: int numHandlers = conf.getInt("hbase.master.handler.count", conf.getInt("hbase.regionserver.handler.count", 25)); HRegionServer.java: conf.getInt("hbase.regionserver.handler.count", 10)
[jira] [Updated] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9343: Status: Open (was: Patch Available) Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch The current scanner implementation stores state and hence is not very suitable for REST server failure scenarios. This JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters are specified as query params. The following are the scan parameters: startrow - The start row for the scan. endrow - The end row for the scan. columns - The columns to scan. starttime, endtime - To retrieve only columns within a specific range of version timestamps, both start and end time must be specified. maxversions - To limit the number of versions of each column to be returned. batchsize - To limit the maximum number of values returned for each call to next(). limit - The number of rows to return in the scan operation. More on the start row, end row and limit parameters: 1. If start row, end row and limit are not specified, the whole table will be scanned. 2. If start row and limit (say N) are specified, the scan will return N rows from the specified start row. 3. If only the limit parameter is specified, the scan will return N rows from the start of the table. 4. If limit and end row are specified, the scan will return N rows from the start of the table up to the end row.
If the end row is reached before N rows (say M, with M < N), then M rows will be returned to the user. 5. If start row, end row and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from the start row will be returned to the user. If N > the number of rows between start row and end row (say M), then M rows will be returned to the user.
[jira] [Updated] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9343: Attachment: HBASE-9343_trunk.01.patch poking jenkins. Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, HBASE-9343_trunk.01.patch The current scanner implementation stores state and hence is not very suitable for REST server failure scenarios. This JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters are specified as query params. The following are the scan parameters: startrow - The start row for the scan. endrow - The end row for the scan. columns - The columns to scan. starttime, endtime - To retrieve only columns within a specific range of version timestamps, both start and end time must be specified. maxversions - To limit the number of versions of each column to be returned. batchsize - To limit the maximum number of values returned for each call to next(). limit - The number of rows to return in the scan operation. More on the start row, end row and limit parameters: 1. If start row, end row and limit are not specified, the whole table will be scanned. 2. If start row and limit (say N) are specified, the scan will return N rows from the specified start row. 3. If only the limit parameter is specified, the scan will return N rows from the start of the table. 4. If limit and end row are specified, the scan will return N rows from the start of the table up to the end row.
If the end row is reached before N rows (say M, with M < N), then M rows will be returned to the user. 5. If start row, end row and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from the start row will be returned to the user. If N > the number of rows between start row and end row (say M), then M rows will be returned to the user.
[jira] [Updated] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9343: Status: Patch Available (was: Open) Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, HBASE-9343_trunk.01.patch The current scanner implementation stores state and hence is not very suitable for REST server failure scenarios. This JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters are specified as query params. The following are the scan parameters: startrow - The start row for the scan. endrow - The end row for the scan. columns - The columns to scan. starttime, endtime - To retrieve only columns within a specific range of version timestamps, both start and end time must be specified. maxversions - To limit the number of versions of each column to be returned. batchsize - To limit the maximum number of values returned for each call to next(). limit - The number of rows to return in the scan operation. More on the start row, end row and limit parameters: 1. If start row, end row and limit are not specified, the whole table will be scanned. 2. If start row and limit (say N) are specified, the scan will return N rows from the specified start row. 3. If only the limit parameter is specified, the scan will return N rows from the start of the table. 4. If limit and end row are specified, the scan will return N rows from the start of the table up to the end row.
If the end row is reached before N rows (say M, with M < N), then M rows will be returned to the user. 5. If start row, end row and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from the start row will be returned to the user. If N > the number of rows between start row and end row (say M), then M rows will be returned to the user.
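Rules 2, 4 and 5 above boil down to "return the smaller of the requested limit N and the number of rows M actually present in the scanned range". A trivial sketch of that invariant (a hypothetical helper, not Stargate code):

```java
// Toy summary of the limited-scan row-count rules: a scan bounded by a
// limit N over a range containing M rows returns min(N, M) rows.
public class ScanLimit {
    static int rowsReturned(int limitN, int rowsInRangeM) {
        return Math.min(limitN, rowsInRangeM);
    }
}
```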
[jira] [Assigned] (HBASE-9458) Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure
[ https://issues.apache.org/jira/browse/HBASE-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-9458: - Assignee: Ted Yu Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure --- Key: HBASE-9458 URL: https://issues.apache.org/jira/browse/HBASE-9458 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Attachments: 9458-v1.txt From https://builds.apache.org/job/HBase-0.96/20/testReport/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testTakeSnapshotAfterMerge/ : {code} org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. Procedure snapshotAfterMerge { waiting=[] done=[] } at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:210) at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:221) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:125) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3120) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2672) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2605) at 
org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2612) at org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient.testTakeSnapshotAfterMerge(TestFlushSnapshotFromClient.java:336) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. 
Procedure snapshotAfterMerge { waiting=[] done=[] } at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:365) at org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2947) at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32890) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2146) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1851) Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via Failed taking snapshot { ss=snapshotAfterMerge table=test type=FLUSH } due to exception:Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e:org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:85) at org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:318) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:355) ... 4 more Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for:
[jira] [Updated] (HBASE-9458) Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure
[ https://issues.apache.org/jira/browse/HBASE-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9458: -- Attachment: 9458-v1.txt Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure --- Key: HBASE-9458 URL: https://issues.apache.org/jira/browse/HBASE-9458 Project: HBase Issue Type: Test Reporter: Ted Yu Attachments: 9458-v1.txt From https://builds.apache.org/job/HBase-0.96/20/testReport/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testTakeSnapshotAfterMerge/ :
{code}
org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. Procedure snapshotAfterMerge { waiting=[] done=[] }
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:210)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:221)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:125)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3120)
	at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2672)
	at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2605)
	at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2612)
	at org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient.testTakeSnapshotAfterMerge(TestFlushSnapshotFromClient.java:336)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. Procedure snapshotAfterMerge { waiting=[] done=[] }
	at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:365)
	at org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2947)
	at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32890)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2146)
	at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1851)
Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via Failed taking snapshot { ss=snapshotAfterMerge table=test type=FLUSH } due to exception:Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e:org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e
	at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:85)
	at org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:318)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:355)
	... 4 more
Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for:
[jira] [Updated] (HBASE-9458) Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure
[ https://issues.apache.org/jira/browse/HBASE-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9458: -- Status: Patch Available (was: Open) Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure --- Key: HBASE-9458 URL: https://issues.apache.org/jira/browse/HBASE-9458 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Attachments: 9458-v1.txt From https://builds.apache.org/job/HBase-0.96/20/testReport/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testTakeSnapshotAfterMerge/ :
{code}
org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. Procedure snapshotAfterMerge { waiting=[] done=[] }
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:210)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:221)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:125)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3120)
	at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2672)
	at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2605)
	at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2612)
	at org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient.testTakeSnapshotAfterMerge(TestFlushSnapshotFromClient.java:336)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. Procedure snapshotAfterMerge { waiting=[] done=[] }
	at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:365)
	at org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2947)
	at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32890)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2146)
	at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1851)
Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via Failed taking snapshot { ss=snapshotAfterMerge table=test type=FLUSH } due to exception:Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e:org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e
	at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:85)
	at org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:318)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:355)
	... 4 more
Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for:
[jira] [Updated] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9347: Summary: Support for enabling servlet filters for REST service (was: Support for adding filters for client requests) Updating title. Previous title was confusing, considering HBASE-9345. Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761991#comment-13761991 ] Nick Dimiduk commented on HBASE-9347: - Since it's now entirely configurable, the GzipFilter should be moved out into {{hbase-default.xml}}, giving the user full flexibility in overriding it if they desire. Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
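The configuration-driven approach discussed in this issue boils down to reading a list of servlet filter class names from the configuration at REST server startup and registering each one. Below is a minimal standalone sketch of the parsing step only; the property name `hbase.rest.filter.classes` and the `FilterConfig` class are illustrative assumptions, not necessarily what the patch itself uses.

```java
// Hypothetical sketch: split a comma-separated filter-class property into
// the list of class names to register with the REST server's servlet context.
import java.util.ArrayList;
import java.util.List;

public class FilterConfig {
    /** Parse a comma-separated list of filter class names; null means none configured. */
    public static List<String> parseFilterClasses(String configured) {
        List<String> names = new ArrayList<>();
        if (configured == null) return names;
        for (String n : configured.split(",")) {
            String trimmed = n.trim();
            if (!trimmed.isEmpty()) names.add(trimmed);
        }
        return names;
    }

    public static void main(String[] args) {
        // e.g. value of the (assumed) "hbase.rest.filter.classes" property
        System.out.println(parseFilterClasses(
            "org.apache.hadoop.hbase.rest.filter.GzipFilter, com.example.AuthFilter"));
    }
}
```

With GzipFilter expressed this way in hbase-default.xml (as suggested above), a user overrides or extends the filter chain purely through configuration.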
[jira] [Commented] (HBASE-9440) Pass blocks of KVs from HFile scanner to the StoreFileScanner and up
[ https://issues.apache.org/jira/browse/HBASE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762033#comment-13762033 ] Nick Dimiduk commented on HBASE-9440: - Makes sense. How would you implement this -- provide new interfaces for {{BulkGet}}, {{BulkScan}} ? Pass blocks of KVs from HFile scanner to the StoreFileScanner and up Key: HBASE-9440 URL: https://issues.apache.org/jira/browse/HBASE-9440 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Currently we read KVs from an HFileScanner one-by-one and pass them up the scanner/heap tree. Many times the ranges of KVs retrieved from StoreFileScanner (by StoreScanners) and HFileScanner (by StoreFileScanner) will be non-overlapping. If chunks of KVs do not overlap, we can sort entire chunks just by comparing the start/end key of the chunk. Only if chunks are overlapping do we need to sort KV by KV as we do now. I have no patch, but I wanted to float this idea. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
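The chunk-wise idea can be sketched independently of HBase's KeyValue machinery: if the last key of one sorted chunk sorts before the first key of the other, the whole chunk can be passed up after a single boundary comparison, and only overlapping chunks need the key-by-key merge done today. A toy illustration with integers (my own sketch, not a patch):

```java
// Sketch: merge two ascending chunks. Disjoint ranges take a fast path that
// costs two boundary comparisons; overlapping ranges fall back to the usual
// element-by-element merge.
import java.util.ArrayList;
import java.util.List;

public class ChunkMerge {
    public static List<Integer> merge(List<Integer> a, List<Integer> b) {
        List<Integer> out = new ArrayList<>();
        if (a.isEmpty() || b.isEmpty()) { out.addAll(a); out.addAll(b); return out; }
        // Fast path: one comparison of chunk boundaries replaces per-key work.
        if (a.get(a.size() - 1) <= b.get(0)) { out.addAll(a); out.addAll(b); return out; }
        if (b.get(b.size() - 1) <= a.get(0)) { out.addAll(b); out.addAll(a); return out; }
        // Overlapping ranges: key-by-key merge, as the scanners do today.
        int i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            out.add(a.get(i) <= b.get(j) ? a.get(i++) : b.get(j++));
        }
        while (i < a.size()) out.add(a.get(i++));
        while (j < b.size()) out.add(b.get(j++));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(List.of(1, 2, 3), List.of(10, 11))); // disjoint: fast path
        System.out.println(merge(List.of(1, 5, 9), List.of(4, 6)));   // overlapping: fallback
    }
}
```

For real scanners the "chunk" would be a block of KVs and the boundary comparison would use the KV comparator, but the shape of the optimization is the same.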
[jira] [Commented] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region
[ https://issues.apache.org/jira/browse/HBASE-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762037#comment-13762037 ] Nicolas Liochon commented on HBASE-9467: It's not fully a duplicate, as this one is about ensuring that the workload of one region does not impact the others. HBASE-8836 concentrates the writers, freeing resources for the readers. But a heavy write load on one region would still block the others. In any case, if you have a fixed number of worker threads, the activity of someone in the system can block the others (vs. only slowing them). There is no definitive solution for this (if it's slow and you add queries, it will become slower: like it or not, the regionserver shares its resources between all clients, and the load of one region impacts the others). I tend to think that priorities are the best option. write can be totally blocked temporarily by a write-heavy region Key: HBASE-9467 URL: https://issues.apache.org/jira/browse/HBASE-9467 Project: HBase Issue Type: Improvement Reporter: Feng Honghua Priority: Minor Writes to a region can be blocked temporarily if the memstore of that region reaches the threshold (hbase.hregion.memstore.block.multiplier * hbase.hregion.flush.size), until the memstore of that region is flushed. For a write-heavy region, if its write requests saturate all the handler threads of that RS when write blocking for that region occurs, requests of other regions/tables to that RS also can't be served due to no available handler threads... until the pending writes of that write-heavy region are served after the flush is done. Hence during this time period, from the RS perspective, it can't serve any request for any table/region, just due to a single write-heavy region. This does not sound very reasonable, right? Maybe write requests for a region could be served by only a subset of the handler threads, so that write blocking of any single region can't lead to the scenario mentioned above? Comments? 
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
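One way to realize the "subset of the handler threads" idea from the description above is a per-region cap on how many handlers a region's writes may occupy at once, e.g. a semaphore per region, so a blocked write-heavy region cannot starve the whole pool. This is a hypothetical sketch (the class and method names are mine, not HBase's):

```java
// Hypothetical throttle: a region's writes may occupy at most
// maxHandlersPerRegion of the shared handler threads at any moment.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

public class RegionWriteThrottle {
    private final int maxHandlersPerRegion;
    private final Map<String, Semaphore> perRegion = new ConcurrentHashMap<>();

    public RegionWriteThrottle(int maxHandlersPerRegion) {
        this.maxHandlersPerRegion = maxHandlersPerRegion;
    }

    /** True if a handler may serve a write for this region right now. */
    public boolean tryEnter(String regionName) {
        return perRegion
            .computeIfAbsent(regionName, r -> new Semaphore(maxHandlersPerRegion))
            .tryAcquire();
    }

    /** Called by the handler when the write (or the wait on the flush) ends. */
    public void exit(String regionName) {
        perRegion.get(regionName).release();
    }
}
```

A handler that fails `tryEnter` could requeue the call instead of blocking, leaving the remaining handlers free for other regions; the trade-off is extra latency for the hot region, which is exactly the isolation being asked for.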
[jira] [Commented] (HBASE-9471) htrace synchronized on getInstance
[ https://issues.apache.org/jira/browse/HBASE-9471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762031#comment-13762031 ] Elliott Clark commented on HBASE-9471: -- Commented on github. There's also a test failure that we've seen in unit tests. Hopefully we can get these two issues fixed in time for the next rc. htrace synchronized on getInstance -- Key: HBASE-9471 URL: https://issues.apache.org/jira/browse/HBASE-9471 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor When doing tests on cached data, one of the bottlenecks is the getInstance() call on HTrace, made in RequestContext#set() -- Trace.isTracing(). When it's fixed, we see threads blocked in sendResponse and in the metrics (with hadoop 1). The difference is not huge (it's in the range 0-5%), but there is no reason to keep this. I'm sending a pull request to htrace. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
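The cost described in this issue is the classic synchronized-singleton bottleneck: every Trace.isTracing() check funnels through one monitor. Below is a generic illustration of the contended pattern and the standard lock-free fix (the initialization-on-demand holder idiom), not the actual HTrace code:

```java
// Generic singleton sketch. The synchronized accessor serializes every
// caller; the holder-based accessor is lock-free after class initialization,
// because the JVM guarantees Holder's static initializer runs exactly once.
public class Tracer {
    private Tracer() {}

    // Contended version: every call takes the class monitor.
    private static Tracer slowInstance;
    public static synchronized Tracer getInstanceSynchronized() {
        if (slowInstance == null) slowInstance = new Tracer();
        return slowInstance;
    }

    // Lock-free version: initialization-on-demand holder.
    private static class Holder { static final Tracer INSTANCE = new Tracer(); }
    public static Tracer getInstance() { return Holder.INSTANCE; }

    // Placeholder for the hot-path check the RPC layer performs per request.
    public boolean isTracing() { return false; }
}
```

On a hot path called once per RPC, removing the monitor turns a potential convoy into a plain volatile-free read, which matches the 0-5% improvement reported above.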
[jira] [Updated] (HBASE-7690) Improve metadata printing in HFilePrettyPrinter
[ https://issues.apache.org/jira/browse/HBASE-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-7690: Status: Open (was: Patch Available) Patch is long stale. Improve metadata printing in HFilePrettyPrinter --- Key: HBASE-7690 URL: https://issues.apache.org/jira/browse/HBASE-7690 Project: HBase Issue Type: Improvement Components: HFile Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Attachments: 0001-HBASE-7690-Improve-printing-of-HFile-metadata.patch, 0001-HBASE-7690-Improve-printing-of-HFile-metadata.patch The pretty printer could do a better job with metadata. For example:
{noformat}
...
Fileinfo:
    BULKLOAD_SOURCE_TASK = attempt_201301272014_0001_r_00_0
    BULKLOAD_TIMESTAMP = \x00\x00\x01\x7FcG\x8E
    DELETE_FAMILY_COUNT = \x00\x00\x00\x00\x00\x00\x00\x00
    EARLIEST_PUT_TS = \x00\x00\x01\x7FcF
    EXCLUDE_FROM_MINOR_COMPACTION = \x00
    KEY_VALUE_VERSION = \x00\x00\x00\x01
    MAJOR_COMPACTION_KEY = \xFF
    MAX_MEMSTORE_TS_KEY = \x00\x00\x00\x00\x00\x00\x00\x00
    TIMERANGE = 1359346869830...1359346869830
    hfile.AVG_KEY_LEN = 19
    hfile.AVG_VALUE_LEN = 2
    hfile.LASTKEY = \x00\x04row9\x01dc2\x00\x00\x01\x7FcF\x04
...
{noformat}
Many of these fields could be cleaned up to print in human-readable values. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
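As an example of the suggested cleanup, fixed-width file-info values such as BULKLOAD_TIMESTAMP or DELETE_FAMILY_COUNT are big-endian 8-byte longs and can be decoded before printing instead of dumped as escaped bytes. A standalone sketch of that decoding (not the HFilePrettyPrinter code itself):

```java
// Sketch: decode 8-byte big-endian file-info values into readable text.
import java.nio.ByteBuffer;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class MetaDecode {
    /** Big-endian 8-byte long, as Bytes.toLong would produce. */
    public static long toLong(byte[] b) {
        return ByteBuffer.wrap(b).getLong();
    }

    /** Render an epoch-millis timestamp value as a UTC date string. */
    public static String formatTimestamp(byte[] b) {
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z");
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(new Date(toLong(b)));
    }

    public static void main(String[] args) {
        byte[] deleteFamilyCount = {0, 0, 0, 0, 0, 0, 0, 0};
        // Prints "DELETE_FAMILY_COUNT = 0" instead of eight escaped NUL bytes.
        System.out.println("DELETE_FAMILY_COUNT = " + toLong(deleteFamilyCount));
    }
}
```

The same idea applies to the other fields: single-byte booleans (MAJOR_COMPACTION_KEY) become true/false, and TIMERANGE becomes a pair of formatted timestamps.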
[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase
[ https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762049#comment-13762049 ] Lars Hofhansl commented on HBASE-5954: -- Not sure on which machine I ran this now. I can redo. On my work machine I have 4 disks in RAID10. Allow proper fsync support for HBase Key: HBASE-5954 URL: https://issues.apache.org/jira/browse/HBASE-5954 Project: HBase Issue Type: Improvement Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Critical Fix For: 0.98.0 Attachments: 5954-trunk-hdfs-trunk.txt, 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 5954-trunk-hdfs-trunk-v6.txt, hbase-hdfs-744.txt At least get recommendation into 0.96 doc and some numbers running w/ this hdfs feature enabled. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9466) Read-only mode
[ https://issues.apache.org/jira/browse/HBASE-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762025#comment-13762025 ] Nick Dimiduk commented on HBASE-9466: - More than just filtering with Kerberos: if the RS knows a Region is in RO mode, it can skip initialization of write-path components to save resources. This would also allow the RO regions to be served by all three favored nodes. (cc [~enis] [~devaraj]) Read-only mode -- Key: HBASE-9466 URL: https://issues.apache.org/jira/browse/HBASE-9466 Project: HBase Issue Type: New Feature Reporter: Feng Honghua Priority: Minor Can we provide a read-only mode for a table? Writes to a table in read-only mode would be rejected, but read-only mode is different from disable in that: 1. it doesn't offline the regions of the table (hence it is much more lightweight than disable) 2. it can serve read requests Comments? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region
[ https://issues.apache.org/jira/browse/HBASE-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762027#comment-13762027 ] Nick Dimiduk commented on HBASE-9467: - See also HBASE-8836. Dupe? write can be totally blocked temporarily by a write-heavy region Key: HBASE-9467 URL: https://issues.apache.org/jira/browse/HBASE-9467 Project: HBase Issue Type: Improvement Reporter: Feng Honghua Priority: Minor Writes to a region can be blocked temporarily if the memstore of that region reaches the threshold (hbase.hregion.memstore.block.multiplier * hbase.hregion.flush.size), until the memstore of that region is flushed. For a write-heavy region, if its write requests saturate all the handler threads of that RS when write blocking for that region occurs, requests of other regions/tables to that RS also can't be served due to no available handler threads... until the pending writes of that write-heavy region are served after the flush is done. Hence during this time period, from the RS perspective, it can't serve any request for any table/region, just due to a single write-heavy region. This does not sound very reasonable, right? Maybe write requests for a region could be served by only a subset of the handler threads, so that write blocking of any single region can't lead to the scenario mentioned above? Comments? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9462) HBaseAdmin#isTableEnabled() should return false for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762099#comment-13762099 ] Jean-Daniel Cryans commented on HBASE-9462: --- I agree with JM that returning false is weird and gives the wrong impression. HBaseAdmin#isTableEnabled() should return false for non-existent table -- Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.96.0 Attachments: 9462.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9465) HLog entries are not pushed to peer clusters serially when region-move or RS failure in master cluster
[ https://issues.apache.org/jira/browse/HBASE-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762109#comment-13762109 ] Lars Hofhansl commented on HBASE-9465: -- That is a known issue and hard to fix. There is an option to set up a special TTL for Deletes, in order to keep them around longer (hbase.hstore.time.to.purge.deletes), but that's somewhat of a hack. HLog entries are not pushed to peer clusters serially when region-move or RS failure in master cluster -- Key: HBASE-9465 URL: https://issues.apache.org/jira/browse/HBASE-9465 Project: HBase Issue Type: Bug Components: regionserver, Replication Reporter: Feng Honghua When a region-move or RS failure occurs in the master cluster, the hlog entries that were not pushed before the region-move or RS failure will be pushed by the original RS (for a region move) or by another RS which takes over the remaining hlog of the dead RS (for an RS failure), and the new entries for the same region(s) will be pushed by the RS which now serves the region(s); but they push the hlog entries of the same region concurrently, without coordination. This treatment can possibly lead to data inconsistency between the master and peer clusters:
1. there are a put and then a delete written to the master cluster
2. due to region-move / RS-failure, they are pushed by different replication-source threads to the peer cluster
3. if the delete is pushed to the peer cluster before the put, and a flush and major-compact occur in the peer cluster before the put is pushed, the delete is collected and the put remains in the peer cluster
In this scenario, the put remains in the peer cluster, but in the master cluster the put is masked by the delete, hence data inconsistency between the master and peer clusters -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
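The workaround Lars mentions is set in hbase-site.xml; the value of hbase.hstore.time.to.purge.deletes is in milliseconds (by default 0, meaning delete markers become purgeable as soon as a major compaction processes them). For example, to keep delete markers around for one extra day so a late-arriving replicated put is still masked:

```xml
<!-- hbase-site.xml on the peer cluster: delay purging of delete markers
     during major compaction by 1 day (value is in milliseconds). -->
<property>
  <name>hbase.hstore.time.to.purge.deletes</name>
  <value>86400000</value>
</property>
```

As noted above this is a mitigation, not a fix: it only widens the window during which the out-of-order put is still covered by the delete.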
[jira] [Commented] (HBASE-9472) If the memstore size is under .1 or greater than .9 the memstore size defaults to the default memstore size
[ https://issues.apache.org/jira/browse/HBASE-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762108#comment-13762108 ] Anoop Sam John commented on HBASE-9472: --- Are you doing a bulk load of data, and is that why you almost don't need the memstore? If the memstore size is under .1 or greater than .9 the memstore size defaults to the default memstore size --- Key: HBASE-9472 URL: https://issues.apache.org/jira/browse/HBASE-9472 Project: HBase Issue Type: Bug Affects Versions: 0.94.5 Reporter: rahul gidwani In HBaseConfiguration.checkForClusterFreeMemoryLimit it does a check to see if blockCache + memstore > 0.8; this threshold ensures we do not run out of memory. But MemStoreFlusher.getMemStoreLimit does this check:
{code}
if (limit >= 0.9f || limit < 0.1f) {
  LOG.warn("Setting global memstore limit to default of " + defaultLimit
      + " because supplied value outside allowed range of 0.1 - 0.9");
  effectiveLimit = defaultLimit;
}
{code}
In our cluster we had the block cache set to an upper limit of 0.76 and the memstore upper limit was set to 0.04. We noticed the memstore size was exceeding the limit we had set, and after looking at the getMemStoreLimit code it seems that the memstore upper limit is sized to the default value if the configuration value is less than .1 or greater than .9. This now makes the block cache and memstore greater than our available heap. We can remove the check for greater than 90% of the heap, as this can never happen due to the check in HBaseConfiguration.checkForClusterFreeMemoryLimit(). This check doesn't seem necessary anymore, as we have the HBaseConfiguration class checking for the cluster free limit. Am I correct in this assumption? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
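The clamping behavior described in this issue can be reproduced in isolation (the quoted snippet lost its comparison operators in the mail archive). The sketch below assumes a 0.4 default upper limit for illustration; it shows why a configured 0.04 silently becomes the default and overcommits the heap:

```java
// Standalone re-creation of the range clamp being discussed: a configured
// global memstore limit outside [0.1, 0.9) falls back to the default.
public class MemStoreLimit {
    static final float DEFAULT_LIMIT = 0.4f; // assumed default for this sketch

    public static float effectiveLimit(float configured) {
        if (configured >= 0.9f || configured < 0.1f) {
            // the "Setting global memstore limit to default" branch
            return DEFAULT_LIMIT;
        }
        return configured;
    }

    public static void main(String[] args) {
        System.out.println(effectiveLimit(0.04f)); // clamped to the default
        System.out.println(effectiveLimit(0.25f)); // honored as configured
    }
}
```

With a 0.76 block cache, the clamped 0.04 -> default jump is exactly what pushes cache + memstore past the free-memory check that was done earlier at configuration time.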
[jira] [Commented] (HBASE-9465) HLog entries are not pushed to peer clusters serially when region-move or RS failure in master cluster
[ https://issues.apache.org/jira/browse/HBASE-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762111#comment-13762111 ] Lars Hofhansl commented on HBASE-9465: -- And what J-D said :) HLog entries are not pushed to peer clusters serially when region-move or RS failure in master cluster -- Key: HBASE-9465 URL: https://issues.apache.org/jira/browse/HBASE-9465 Project: HBase Issue Type: Bug Components: regionserver, Replication Reporter: Feng Honghua When region-move or RS failure occurs in master cluster, the hlog entries that are not pushed before region-move or RS-failure will be pushed by original RS(for region move) or another RS which takes over the remained hlog of dead RS(for RS failure), and the new entries for the same region(s) will be pushed by the RS which now serves the region(s), but they push the hlog entries of a same region concurrently without coordination. This treatment can possibly lead to data inconsistency between master and peer clusters: 1. there are put and then delete written to master cluster 2. due to region-move / RS-failure, they are pushed by different replication-source threads to peer cluster 3. if delete is pushed to peer cluster before put, and flush and major-compact occurs in peer cluster before put is pushed to peer cluster, the delete is collected and the put remains in peer cluster In this scenario, the put remains in peer cluster, but in master cluster the put is masked by the delete, hence data inconsistency between master and peer clusters -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns
[ https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-8930: - Fix Version/s: (was: 0.94.12) 0.94.13 Filter evaluates KVs outside requested columns -- Key: HBASE-8930 URL: https://issues.apache.org/jira/browse/HBASE-8930 Project: HBase Issue Type: Bug Components: Filters Affects Versions: 0.94.7 Reporter: Federico Gaule Assignee: Vasu Mariyala Priority: Critical Labels: filters, hbase, keyvalue Fix For: 0.98.0, 0.94.13, 0.96.1 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 0.96-HBASE-8930-rev2.patch, 8930-0.94.txt, HBASE-8930.patch, HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, HBASE-8930-rev4.patch, HBASE-8930-rev5.patch
1- Fill a row with some columns
2- Get the row with some columns (fewer than the universe), using a filter to print KVs
3- Filter prints columns that were not requested
Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL and prints each KV's qualifier. SUFFIX_0 = 0, SUFFIX_1 = 1, SUFFIX_4 = 4, SUFFIX_6 = 6. P = Persisted, R = Requested, E = Evaluated, X = Returned
| 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
|      | P    | P    |      |      | P    | P    |      |      | P    | P    |      | ...
|      | R    | R    | R    |      | R    | R    | R    |      |      |      |      | ...
|      | E    | E    |      |      | E    | E    |      |      | {color:red}E{color} | | | ...
|      | X    | X    |      |      | X    | X    |      |      |      |      |      |
{code:title=ExtraColumnTest.java|borderStyle=solid}
@Test
public void testFilter() throws Exception {
  Configuration config = HBaseConfiguration.create();
  config.set("hbase.zookeeper.quorum", myZK);
  HTable hTable = new HTable(config, testTable);
  byte[] cf = Bytes.toBytes("cf");
  byte[] row = Bytes.toBytes("row");
  byte[] col1 = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
  byte[] col2 = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
  byte[] col3 = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
  byte[] col4 = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
  byte[] col5 = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
  byte[] col6 = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
  byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
  byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
  byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
  byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
  byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
  byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
  byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
  byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_4));
  // ===== INSERTION =====
  Put put = new Put(row);
  put.add(cf, col1, Bytes.toBytes((short) 1));
  put.add(cf, col2, Bytes.toBytes((short) 1));
  put.add(cf, col3, Bytes.toBytes((short) 3));
  put.add(cf, col4, Bytes.toBytes((short) 3));
  put.add(cf, col5, Bytes.toBytes((short) 3));
  put.add(cf, col6, Bytes.toBytes((short) 3));
  hTable.put(put);
  put = new Put(row);
  put.add(cf, col1v, Bytes.toBytes((short) 10));
  put.add(cf, col2v, Bytes.toBytes((short) 10));
  put.add(cf, col3v, Bytes.toBytes((short) 10));
  put.add(cf, col4v, Bytes.toBytes((short) 10));
  put.add(cf, col5v, Bytes.toBytes((short) 10));
  put.add(cf, col6v, Bytes.toBytes((short) 10));
  hTable.put(put);
  hTable.flushCommits();
  // ===== READING =====
  Filter allwaysNextColFilter = new AllwaysNextColFilter();
  Get get = new Get(row);
  get.addColumn(cf, col1);  //5581
  get.addColumn(cf, col1v); //5584
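The repro above references AllwaysNextColFilter without showing its source. Below is a minimal, self-contained stand-in for the behavior the test depends on: every cell gets INCLUDE_AND_NEXT_COL and its qualifier is printed. The ReturnCode enum mimics a subset of org.apache.hadoop.hbase.filter.Filter.ReturnCode; a real 0.94 filter would extend FilterBase, so treat this as a sketch, not the reporter's actual class.

```java
// Stand-in for the user's AllwaysNextColFilter (name taken from the report).
// A real 0.94 filter would extend org.apache.hadoop.hbase.filter.FilterBase.
public class AllwaysNextColFilterSketch {
    // Mimics org.apache.hadoop.hbase.filter.Filter.ReturnCode (subset).
    enum ReturnCode { INCLUDE_AND_NEXT_COL, NEXT_COL, SKIP }

    // Includes every cell and jumps to the next column, printing the qualifier.
    // The printed qualifiers are how the reporter observed non-requested
    // columns (e.g. 5600) being evaluated.
    static ReturnCode filterKeyValue(String qualifier) {
        System.out.println("evaluated: " + qualifier);
        return ReturnCode.INCLUDE_AND_NEXT_COL;
    }

    public static void main(String[] args) {
        for (String q : new String[] {"5581", "5584", "5600"}) {
            filterKeyValue(q);
        }
    }
}
```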
[jira] [Commented] (HBASE-8534) Fix coverage for org.apache.hadoop.hbase.mapreduce
[ https://issues.apache.org/jira/browse/HBASE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762075#comment-13762075 ] Ivan A. Veselovsky commented on HBASE-8534: --- Hi Nick, I will be responsible for this in place of Alexey G. As far as I understand, a separate issue was created to backport this fix to the 0.94 branch. Initially all these coverage fixes were intended to go to 3 branches: trunk, 0.95, 0.94. So, what about backporting this change to 0.95 also? Currently I see it in trunk only:
{code}
$ cd ../hbase-0.94/
$ svn log . | grep -C 2 -- '-8534'
$ cd ../hbase-0.95/
$ svn log . | grep -C 2 -- '-8534'
$ cd ../hbase-trunk/
$ svn log . | grep -C 2 -- '-8534'
r1489878 | tedyu | 2013-06-05 17:56:49 +0400 (Wed, 05 Jun 2013) | 3 lines
HBASE-8534 addendum removes TestDriver (Nick Dimiduk)
--
r1488542 | tedyu | 2013-06-01 20:23:55 +0400 (Sat, 01 Jun 2013) | 3 lines
HBASE-8534 Fix coverage for org.apache.hadoop.hbase.mapreduce (Aleksey Gorshkov)
{code}
Fix coverage for org.apache.hadoop.hbase.mapreduce -- Key: HBASE-8534 URL: https://issues.apache.org/jira/browse/HBASE-8534 Project: HBase Issue Type: Test Components: mapreduce, test Affects Versions: 0.94.8, 0.95.2 Reporter: Aleksey Gorshkov Assignee: Aleksey Gorshkov Fix For: 0.98.0 Attachments: 0001-HBASE-8534-hadoop2-addendum.patch, 8534-trunk-h.patch, HBASE-8534-0.94-d.patch, HBASE-8534-0.94-e.patch, HBASE-8534-0.94-f.patch, HBASE-8534-0.94-g.patch, HBASE-8534-0.94.patch, HBASE-8534-trunk-a.patch, HBASE-8534-trunk-b.patch, HBASE-8534-trunk-c.patch, HBASE-8534-trunk-d.patch, HBASE-8534-trunk-e.patch, HBASE-8534-trunk-f.patch, HBASE-8534-trunk-g.patch, HBASE-8534-trunk.patch fix coverage org.apache.hadoop.hbase.mapreduce patch HBASE-8534-0.94.patch for branch-0.94 patch HBASE-8534-trunk.patch for branch-0.95 and trunk -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9334) Convert KeyValue to Cell in hbase-client module - Filters
[ https://issues.apache.org/jira/browse/HBASE-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762102#comment-13762102 ] Nicolas Liochon commented on HBASE-9334: Ok to discuss it in HBASE-9359 :-) (I came here because it was referenced in the commit). Convert KeyValue to Cell in hbase-client module - Filters - Key: HBASE-9334 URL: https://issues.apache.org/jira/browse/HBASE-9334 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9334.patch, hbase-9334.v2.patch, hbase-9334.v3.patch, hbase-9334.v4.patch, hbase-9334.v6.patch The goal is to remove KeyValue from the publicly exposed API and require clients to use the cleaner, more encapsulated Cell API instead. For filters, this affects #filterKeyValue, #transform, #filterRow, and #getNextKeyHint. Since Cell is a base interface for KeyValue, changing these means that 0.94 apps may need a recompile but probably no modifications.
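The recompile-but-no-modification claim can be illustrated with minimal stand-in types. Since KeyValue implements Cell, a filter body written against the old KeyValue signature usually recompiles unchanged against the new Cell one. Cell and KeyValue below are toy stand-ins for the hbase-client interfaces, not the real classes.

```java
// Toy stand-ins for org.apache.hadoop.hbase.Cell / KeyValue; illustration only.
public class CellFilterSketch {
    interface Cell {
        byte[] getQualifier();
    }

    static class KeyValue implements Cell {
        private final byte[] qualifier;
        KeyValue(byte[] qualifier) { this.qualifier = qualifier; }
        public byte[] getQualifier() { return qualifier; }
    }

    // Post-HBASE-9334 shape: the callback takes the Cell interface.
    // Placeholder body; a real filter returns a ReturnCode.
    static int filterKeyValue(Cell cell) {
        return cell.getQualifier().length;
    }

    public static void main(String[] args) {
        // 0.94-era client code that builds KeyValues still compiles: KeyValue is-a Cell.
        System.out.println(filterKeyValue(new KeyValue(new byte[] {1, 2})));
    }
}
```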
[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762113#comment-13762113 ] Jonathan Hsieh commented on HBASE-9359: --- Originally came from discussion on HBASE-9334 but is mostly relevant to the HBASE-9359 part of the commit. [~nkeywal]: {quote} I may miss something obvious, but I understand that the goal of this change is to require a recompile but not modification of the client app (Recompile of client apps likely needed after this change.). But with this change, public KeyValue[] raw() became public Cell[] raw(). A client which was calling 'raw' must now be changed to use 'Cell', no? Incidentally, it seems not possible to write a client that would work with 2 versions of HBase (i.e. modifying the client, but being able to compile the modified client with a previous version of HBase). I'm hitting this issue because I'm porting the ycsb benchmark. It depends on raw. Lastly, the audience parameter for 'Cell' is private, but as it appears in a public interface I think it should be public... {quote} Good catch on the InterfaceAudience for Cell -- that should be updated. I'll file and commit that. The release notes here highlight the minor changes that need to be done to applications -- change KeyValue to Cell, change List<KeyValue> to List<Cell>. [~sershe]: {quote} +1 on fixing to not break compat... this is not the first time recently something was broken like that (getFamilyMap, then HBASE_CLASSPATH), can we use the normal deprecation route to avoid breaking things. Jonathan Hsieh what do you think should be done? Would the above patch be easy to fix up? {quote} I think adding parts of the old api back is possible but it will incur some non-trivial performance cost -- generally we'll need to use KeyValueUtil.ensureKeyValue in many places and will also need to make copy conversions of List<KeyValue> to List<Cell> and KeyValue[] to Cell[]. See some of the gymnastics in place for Coprocs and Filters. 
We've been updating apps/systems dependent on hbase (some flume connectors, hive) and it has been annoying but straightforward. There are several cases already where we have broken compat where we are not going to be able to restore the old api (some were due to writable-protobuf conversion such as HBASE-7215). In this patch I've added some of the most popular convenience methods to Cell as deprecated to minimize pain (#getRow, #getQualifier, #getFamily, #getValue). I think adding some of the other more popular ones is reasonable but adding everything back is not. (a perf degraded #raw seems like a candidate now, as well as a rename of the interface to #rawCells()). Did ycsb encounter any other conversion pains? Other suggestions? Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter -- Key: HBASE-9359 URL: https://issues.apache.org/jira/browse/HBASE-9359 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.98.0, 0.96.0 Attachments: hbase-9334-9359.v4.patch, hbase-9359-9334.v5.patch, hbase-9359-9334.v6.patch, hbase-9359.patch, hbase-9359.v2.patch, hbase-9359.v3.patch, hbase-9359.v5.patch, hbase-9359.v6.patch This patch is the second half of eliminating KeyValue from the client interfaces. This percolated through quite a bit.
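A sketch of the client-side migration under discussion: code that consumed Result#raw() (KeyValue[]) moves to Cell[], and KeyValueUtil.ensureKeyValue is the copy-converting escape hatch Jonathan mentions. The types below are toy stand-ins so the shape of the conversion (and its copying cost) is visible; they are not the hbase-client classes.

```java
// Toy stand-ins for Cell / KeyValue; only the shape of the migration is the point.
public class RawCellsMigration {
    interface Cell {
        byte[] getValue();
    }

    static class KeyValue implements Cell {
        private final byte[] value;
        KeyValue(byte[] value) { this.value = value; }
        public byte[] getValue() { return value; }
    }

    // Mirrors the role of KeyValueUtil.ensureKeyValue: pass through when the
    // Cell already is a KeyValue, otherwise copy-convert (the perf cost
    // discussed above).
    static KeyValue ensureKeyValue(Cell cell) {
        return (cell instanceof KeyValue) ? (KeyValue) cell : new KeyValue(cell.getValue());
    }

    public static void main(String[] args) {
        Cell[] cells = { new KeyValue(new byte[] {7}) };  // what a rawCells()-style call would hand back
        // Pre-change code wanted KeyValue[]; this loop is the conversion clients may need.
        KeyValue[] kvs = new KeyValue[cells.length];
        for (int i = 0; i < cells.length; i++) {
            kvs[i] = ensureKeyValue(cells[i]);
        }
        System.out.println(kvs[0].getValue()[0]);
    }
}
```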
[jira] [Commented] (HBASE-9465) HLog entries are not pushed to peer clusters serially when region-move or RS failure in master cluster
[ https://issues.apache.org/jira/browse/HBASE-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762105#comment-13762105 ] Jean-Daniel Cryans commented on HBASE-9465: --- Same comment as on HBASE-9469: [~lhofhansl] has a blog post that covers this: http://hadoop-hbase.blogspot.com/2012/01/replication-for-ha-and-dr.html Basically you need to enable KEEP_DELETED_CELLS on your families. I have a draft for a new piece of documentation that we could add to the ref guide that I should probably contribute :) HLog entries are not pushed to peer clusters serially when region-move or RS failure in master cluster -- Key: HBASE-9465 URL: https://issues.apache.org/jira/browse/HBASE-9465 Project: HBase Issue Type: Bug Components: regionserver, Replication Reporter: Feng Honghua When region-move or RS failure occurs in the master cluster, the hlog entries that were not pushed before the region-move or RS-failure will be pushed by the original RS (for region move) or by another RS which takes over the remaining hlog of the dead RS (for RS failure), and the new entries for the same region(s) will be pushed by the RS which now serves the region(s); but they push the hlog entries of the same region concurrently, without coordination. This can lead to data inconsistency between the master and peer clusters:
1. there are a put and then a delete written to the master cluster
2. due to region-move / RS-failure, they are pushed by different replication-source threads to the peer cluster
3. if the delete is pushed to the peer cluster before the put, and a flush and major-compact occur in the peer cluster before the put is pushed, the delete is collected and the put remains in the peer cluster
In this scenario the put remains in the peer cluster, but in the master cluster the put is masked by the delete, hence data inconsistency between the master and peer clusters.
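The put/delete reordering hazard described above can be modeled with a toy store. This is only an illustration of the ordering problem (not HBase's MVCC, timestamp, or compaction logic): on the master the delete masks the put, but on the peer a major compaction between the out-of-order delete and put collects the tombstone, so the put survives. KEEP_DELETED_CELLS, which the linked post recommends, avoids exactly this premature tombstone collection.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of out-of-order replication; not HBase's actual semantics.
public class ReplicationOrderSketch {
    // Replays ops against an empty store. A put is masked if a tombstone for
    // its key is still present; "compact" models a major compaction that
    // collects tombstones (with KEEP_DELETED_CELLS off).
    static Map<String, String> replay(String[] ops) {
        Map<String, String> store = new HashMap<>();
        Set<String> tombstones = new HashSet<>();
        for (String op : ops) {
            String[] parts = op.split(":");
            if (parts[0].equals("put")) {
                if (!tombstones.contains(parts[1])) store.put(parts[1], "v");
            } else if (parts[0].equals("del")) {
                store.remove(parts[1]);
                tombstones.add(parts[1]);
            } else if (parts[0].equals("compact")) {
                tombstones.clear();
            }
        }
        return store;
    }

    public static void main(String[] args) {
        // Master applies put then delete: row r is gone.
        System.out.println(replay(new String[] {"put:r", "del:r"}).containsKey("r"));
        // Peer sees the delete first, major-compacts, then gets the put: r survives.
        System.out.println(replay(new String[] {"del:r", "compact", "put:r"}).containsKey("r"));
    }
}
```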
[jira] [Commented] (HBASE-9462) HBaseAdmin#isTableEnabled() should return false for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762083#comment-13762083 ] Ted Yu commented on HBASE-9462: --- From TestAdmin:
{code}
 * @throws IOException if a remote or network exception occurs
 */
public boolean isTableEnabled(TableName tableName) throws IOException {
{code}
I think returning false is better. HBaseAdmin#isTableEnabled() should return false for non-existent table -- Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.96.0 Attachments: 9462.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence.
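Whichever way the issue is resolved, callers can guard against the misleading true today by checking existence first. Admin below is a minimal stand-in interface for illustration, not the real HBaseAdmin:

```java
// Admin is a minimal stand-in for HBaseAdmin; sketch of the guard only.
public class TableStateCheck {
    interface Admin {
        boolean tableExists(String name);
        boolean isTableEnabled(String name);
    }

    // Avoids the misleading 'true' for a missing table by checking existence first.
    static boolean isEnabledSafely(Admin admin, String table) {
        return admin.tableExists(table) && admin.isTableEnabled(table);
    }

    public static void main(String[] args) {
        Admin admin = new Admin() {
            public boolean tableExists(String name) { return "t1".equals(name); }
            public boolean isTableEnabled(String name) { return true; }  // models the current answer for any table
        };
        System.out.println(isEnabledSafely(admin, "t1"));
        System.out.println(isEnabledSafely(admin, "missing"));
    }
}
```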
[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762081#comment-13762081 ] Hadoop QA commented on HBASE-9343: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602157/HBASE-9343_trunk.01.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 16 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//console
This message is automatically generated. Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, HBASE-9343_trunk.01.patch The current scanner implementation stores state and hence is not very suitable for REST server failure scenarios. 
The current JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters are specified as query params. The following are the scan parameters:
startrow - The start row for the scan.
endrow - The end row for the scan.
columns - The columns to scan.
starttime, endtime - To retrieve only columns within a specific range of version timestamps, both start and end time must be specified.
maxversions - To limit the number of versions of each column to be returned.
batchsize - To limit the maximum number of values returned for each call to next().
limit - The number of rows to return in the scan operation.
More on the start row, end row and limit parameters:
1. If start row, end row and limit are not specified, the whole table will be scanned.
2. If start row and limit (say N) are specified, the scan operation will return N rows from the start row specified.
3. If only the limit parameter is specified, the scan operation will return N rows from the start of the table.
4. If limit and end row are specified, the scan operation will return N rows from the start of the table till the end row. If the end row is
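A client-side sketch of assembling a stateless-scan request from the query params listed above. The base path, table placement, and parameter names follow this issue's description and should be treated as proposed, not final API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Builds a stateless-scan URL from the query params described in this issue.
// Path layout and parameter names are taken from the description and are
// proposed, not a committed API.
public class StatelessScanUrl {
    static String build(String base, String table, Map<String, String> params) {
        StringJoiner query = new StringJoiner("&");
        for (Map.Entry<String, String> e : params.entrySet()) {
            query.add(e.getKey() + "=" + e.getValue());
        }
        return base + "/" + table + "/*?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();  // keeps insertion order
        params.put("startrow", "row10");
        params.put("endrow", "row99");
        params.put("limit", "20");
        System.out.println(build("http://resthost:8080", "mytable", params));
    }
}
```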
[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns
[ https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762078#comment-13762078 ] Lars Hofhansl commented on HBASE-8930: -- Reverted from all branches. Sigh. Filter evaluates KVs outside requested columns -- Key: HBASE-8930 URL: https://issues.apache.org/jira/browse/HBASE-8930 Project: HBase Issue Type: Bug Components: Filters Affects Versions: 0.94.7 Reporter: Federico Gaule Assignee: Vasu Mariyala Priority: Critical Labels: filters, hbase, keyvalue Fix For: 0.98.0, 0.94.12, 0.96.1 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 0.96-HBASE-8930-rev2.patch, 8930-0.94.txt, HBASE-8930.patch, HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, HBASE-8930-rev4.patch, HBASE-8930-rev5.patch
[jira] [Commented] (HBASE-9472) If the memstore size is under .1 or greater than .9 the memstore size defaults to the default memstore size
[ https://issues.apache.org/jira/browse/HBASE-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762117#comment-13762117 ] rahul gidwani commented on HBASE-9472: -- We have a 20GB heap for these regionservers, which are used for a cluster primarily doing random reads. We are trying to hold everything in the block cache. We don't do a lot of writes in comparison to reads in the steady state. When we ran a job to write quite a few entries to this cluster, we noticed the memstore was greater than the expected size of .8GB. With respect to the configuration, our expected block cache upper limit is ~15GB and our memstore is ~.8GB. Because the memstore is not allowed to be less than .1, the default value of .4 is used; combined, our blockcache + memstore upper limits are 23GB, which is larger than our total heap of 20GB. If the memstore size is under .1 or greater than .9 the memstore size defaults to the default memstore size --- Key: HBASE-9472 URL: https://issues.apache.org/jira/browse/HBASE-9472 Project: HBase Issue Type: Bug Affects Versions: 0.94.5 Reporter: rahul gidwani In HBaseConfiguration.checkForClusterFreeMemoryLimit there is a check to see if blockCache + memstore > .8; this threshold ensures we do not run out of memory. But MemStoreFlusher.getMemStoreLimit does this check:
{code}
if (limit >= 0.9f || limit < 0.1f) {
  LOG.warn("Setting global memstore limit to default of " + defaultLimit +
      " because supplied value outside allowed range of 0.1 - 0.9");
  effectiveLimit = defaultLimit;
}
{code}
In our cluster we had the block cache set to an upper limit of 0.76 and the memstore upper limit set to 0.04. We noticed the memstore size was exceeding the limit we had set, and after looking at the getMemStoreLimit code it seems that the memstore upper limit is sized to the default value if the configured value is less than .1 or greater than .9. This now makes the block cache and memstore greater than our available heap. 
We can remove the check for greater than 90% of the heap, as this can never happen due to the check in HBaseConfiguration.checkForClusterFreeMemoryLimit(). This check doesn't seem necessary anymore as we have the HBaseConfiguration class checking for the cluster free limit. Am I correct in this assumption?
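The interaction the reporter describes reduces to the clamping rule itself. This sketch mirrors the range check quoted above (it is not the actual MemStoreFlusher source): a configured 0.04 silently becomes the 0.4 default, which together with a 0.76 block cache oversubscribes the heap even though the configured values summed to a safe 0.80.

```java
// Mirrors the clamping quoted from MemStoreFlusher.getMemStoreLimit above;
// a sketch of the behavior, not the actual HBase source.
public class MemStoreLimitCheck {
    static final float DEFAULT_LIMIT = 0.4f;  // the default global memstore fraction

    // Values outside [0.1, 0.9) silently fall back to the default.
    static float effectiveLimit(float configured) {
        if (configured >= 0.9f || configured < 0.1f) {
            return DEFAULT_LIMIT;
        }
        return configured;
    }

    public static void main(String[] args) {
        // The reporter's cluster: block cache 0.76, memstore configured at 0.04.
        float memstore = effectiveLimit(0.04f);  // clamped up to 0.4
        float blockCache = 0.76f;
        // Combined fraction now exceeds the whole heap, even though the
        // configured values summed to 0.80.
        System.out.println(memstore + blockCache);
    }
}
```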
[jira] [Commented] (HBASE-9472) If the memstore size is under .1 or greater than .9 the memstore size defaults to the default memstore size
[ https://issues.apache.org/jira/browse/HBASE-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762084#comment-13762084 ] Vladimir Rodionov commented on HBASE-9472: -- Just a note. Memstore size of 0.04 is not a good idea. You will end up creating a lot of tiny flush files and compaction will be running around the clock. If the memstore size is under .1 or greater than .9 the memstore size defaults to the default memstore size --- Key: HBASE-9472 URL: https://issues.apache.org/jira/browse/HBASE-9472 Project: HBase Issue Type: Bug Affects Versions: 0.94.5 Reporter: rahul gidwani
[jira] [Commented] (HBASE-9458) Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure
[ https://issues.apache.org/jira/browse/HBASE-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761992#comment-13761992 ] Matteo Bertozzi commented on HBASE-9458: +1, looks good to me... could you just add the exception to the LOG.warn("Got CorruptedSnapshotException"), just to be able to verify the output log even if the test succeeds. (no need for me to post another patch, since it is just a ", e"; you can probably do that on commit) Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure --- Key: HBASE-9458 URL: https://issues.apache.org/jira/browse/HBASE-9458 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Attachments: 9458-v1.txt From https://builds.apache.org/job/HBase-0.96/20/testReport/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testTakeSnapshotAfterMerge/ : {code} org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. 
Procedure snapshotAfterMerge { waiting=[] done=[] }
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:210)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:221)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:125)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96)
	at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3120)
	at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2672)
	at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2605)
	at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2612)
	at org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient.testTakeSnapshotAfterMerge(TestFlushSnapshotFromClient.java:336)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. Procedure snapshotAfterMerge { waiting=[] done=[] }
	at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:365)
	at org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2947)
	at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32890)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2146)
	at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1851)
Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via Failed taking snapshot { ss=snapshotAfterMerge table=test type=FLUSH } due to exception:Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e:org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e
	at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:85)
	at
[jira] [Updated] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vandana Ayyalasomayajula updated HBASE-9347: Attachment: HBASE-9347_trunk.01.patch Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch, HBASE-9347_trunk.01.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration.
[jira] [Created] (HBASE-9471) htrace synchronized on getInstance
Nicolas Liochon created HBASE-9471: -- Summary: htrace synchronized on getInstance Key: HBASE-9471 URL: https://issues.apache.org/jira/browse/HBASE-9471 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor When doing tests on cached data, one of the bottlenecks is the getInstance() on HTrace, called in RequestContext#set() -> Trace.isTracing(). When it's fixed, we see threads blocked in sendResponse and in the metrics (with hadoop 1). The difference is not huge (it's in the range 0-5%), but there is no reason to keep this. I'm sending a pull request to htrace.
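One lock-free alternative to a synchronized getInstance() is the initialization-on-demand holder idiom, sketched below with illustrative names (this is not HTrace's code, and the actual pull request may take a different approach): the JLS guarantees the holder's static initializer runs once and safely, so the hot isTracing() path takes no lock.

```java
// Illustrative names only; not HTrace's actual implementation.
public class Tracer {
    private Tracer() {}

    // Initialization-on-demand holder: INSTANCE is created once, when Holder
    // is first loaded, with thread safety guaranteed by class initialization.
    private static final class Holder {
        static final Tracer INSTANCE = new Tracer();
    }

    // No synchronized keyword: the hot path takes no lock.
    public static Tracer getInstance() {
        return Holder.INSTANCE;
    }

    public boolean isTracing() {
        return false;  // placeholder; the real check consults the active span
    }
}
```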
[jira] [Commented] (HBASE-9462) HBaseAdmin#isTableEnabled() should return false for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762088#comment-13762088 ] Jean-Marc Spaggiari commented on HBASE-9462: But that might give the wrong information, no? Like: if isTableEnabled returns false, then table.enable() will try to enable a table that doesn't exist? I agree that it's a corner case. We should not even call isTableEnabled on a table which doesn't exist, except if one process is removing it while another one is trying to access it... HBaseAdmin#isTableEnabled() should return false for non-existent table -- Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.96.0 Attachments: 9462.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence.
[jira] [Commented] (HBASE-9466) Read-only mode
[ https://issues.apache.org/jira/browse/HBASE-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762095#comment-13762095 ] Jean-Daniel Cryans commented on HBASE-9466: --- How would that compare to the current feature where you can set a table to be read-only? https://github.com/apache/hbase/blob/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java#L566 Read-only mode -- Key: HBASE-9466 URL: https://issues.apache.org/jira/browse/HBASE-9466 Project: HBase Issue Type: New Feature Reporter: Feng Honghua Priority: Minor Can we provide a read-only mode for a table? Writes to a table in read-only mode will be rejected, but read-only mode is different from disable in that: 1. it doesn't offline the regions of the table (hence much more lightweight than disable) 2. it can serve read requests Comments? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
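The existing flag JD links to lives on HTableDescriptor; as a toy model of the semantics Feng describes (names hypothetical, not HBase internals), a read-only table keeps serving reads and rejects writes without taking regions offline:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the read-only semantics under discussion; not HBase internals.
public class ReadOnlyTableDemo {
    private final Map<String, String> rows = new HashMap<>();
    private boolean readOnly;

    public void setReadOnly(boolean ro) { this.readOnly = ro; }

    public void put(String row, String value) {
        if (readOnly) {
            // Writes are rejected, unlike disable which offlines regions.
            throw new IllegalStateException("table is read-only");
        }
        rows.put(row, value);
    }

    // Reads keep working in read-only mode; regions would stay online.
    public String get(String row) { return rows.get(row); }
}
```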
[jira] [Commented] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762006#comment-13762006 ] Andrew Purtell commented on HBASE-9347: --- bq. Since it's now entirely configurable, the GzipFilter should be moved out into {{hbase-default.xml}}, giving the user full flexibility in overriding it if they desire. +1 Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
[ https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762093#comment-13762093 ] Hudson commented on HBASE-9436: --- SUCCESS: Integrated in HBase-TRUNK #4479 (See [https://builds.apache.org/job/HBase-TRUNK/4479/]) HBASE-9436 hbase.regionserver.handler.count default (nkeywal: rev 1521166) * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/trunk/hbase-server/src/test/resources/hbase-site.xml * /hbase/trunk/src/main/docbkx/configuration.xml hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one - Key: HBASE-9436 URL: https://issues.apache.org/jira/browse/HBASE-9436 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9436.v1.patch, 9436.v2.patch Below what we have today. I vote for 10. configuration.xml The default of 10 is rather low common/hbase-site namehbase.regionserver.handler.count/name value30/value server/hbase-site value5/value descriptionCount of RPC Server instances spun up on RegionServers Same property is used by the HMaster for count of master handlers. Default is 10. === HMaster.java int numHandlers = conf.getInt(hbase.master.handler.count, conf.getInt(hbase.regionserver.handler.count, 25)); HRegionServer.java hbase.regionserver.handler.count: conf.getInt(hbase.regionserver.handler.count, 10), -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
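The HMaster.java and HRegionServer.java lines quoted above are a layered-default lookup; with a minimal stand-in for Hadoop's Configuration (the real class is org.apache.hadoop.conf.Configuration), the fallback chain can be sketched as:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for Hadoop's Configuration#getInt fallback pattern.
public class ConfDemo {
    private final Map<String, Integer> props = new HashMap<>();

    public void setInt(String key, int value) { props.put(key, value); }

    public int getInt(String key, int defaultValue) {
        Integer v = props.get(key);
        return v == null ? defaultValue : v;
    }

    // Mirrors the HMaster.java line quoted above: the master handler count
    // falls back to the regionserver handler count, which falls back to a
    // literal default when neither property is set.
    public int masterHandlers(int literalDefault) {
        return getInt("hbase.master.handler.count",
            getInt("hbase.regionserver.handler.count", literalDefault));
    }
}
```

The nesting is the source of the confusion in the title: the effective default depends on which of three places (hbase-default.xml value, the regionserver literal, the master literal) wins.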
[jira] [Commented] (HBASE-9334) Convert KeyValue to Cell in hbase-client module - Filters
[ https://issues.apache.org/jira/browse/HBASE-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762096#comment-13762096 ] Jonathan Hsieh commented on HBASE-9334: --- The main goal here is to do the onerous but necessary task of narrowing the API. Today we expose too many internals that will hinder our ability to do optimizations (like pushing encoded data out to the client for it to interpret, instead of having the rs interpret and re-encode and ship, and then having the client interpret again). [~nkeywal] The concerns here seem to be about the HBASE-9359 parts? I think the release notes for the filter part were accurate when the filter part was a standalone patch (I ended up committing it together with HBASE-9359 because it had a compile problem.). Let's move this to HBASE-9359? Convert KeyValue to Cell in hbase-client module - Filters - Key: HBASE-9334 URL: https://issues.apache.org/jira/browse/HBASE-9334 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Attachments: hbase-9334.patch, hbase-9334.v2.patch, hbase-9334.v3.patch, hbase-9334.v4.patch, hbase-9334.v6.patch The goal is to remove KeyValue from the publicly exposed API and require clients to use the cleaner, more encapsulated Cell API instead. For filters, this affects #filterKeyValue, #transform, #filterRow, and #getNextKeyHint. Since Cell is a base interface for KeyValue, changing these means that 0.94 apps may need a recompile but probably no modifications. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7690) Improve metadata printing in HFilePrettyPrinter
[ https://issues.apache.org/jira/browse/HBASE-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762097#comment-13762097 ] Elliott Clark commented on HBASE-7690: -- +1 Improve metadata printing in HFilePrettyPrinter --- Key: HBASE-7690 URL: https://issues.apache.org/jira/browse/HBASE-7690 Project: HBase Issue Type: Improvement Components: HFile Reporter: Nick Dimiduk Assignee: Nick Dimiduk Priority: Minor Attachments: 0001-HBASE-7690-Improve-printing-of-HFile-metadata.patch, 0001-HBASE-7690-Improve-printing-of-HFile-metadata.patch The pretty printer could do a better job with metadata. For example: {noformat} ... Fileinfo: BULKLOAD_SOURCE_TASK = attempt_201301272014_0001_r_00_0 BULKLOAD_TIMESTAMP = \x00\x00\x01\x7FcG\x8E DELETE_FAMILY_COUNT = \x00\x00\x00\x00\x00\x00\x00\x00 EARLIEST_PUT_TS = \x00\x00\x01\x7FcF EXCLUDE_FROM_MINOR_COMPACTION = \x00 KEY_VALUE_VERSION = \x00\x00\x00\x01 MAJOR_COMPACTION_KEY = \xFF MAX_MEMSTORE_TS_KEY = \x00\x00\x00\x00\x00\x00\x00\x00 TIMERANGE = 13593468698301359346869830 hfile.AVG_KEY_LEN = 19 hfile.AVG_VALUE_LEN = 2 hfile.LASTKEY = \x00\x04row9\x01dc2\x00\x00\x01\x7FcF\x04 ... {noformat} Many of these fields could be cleaned up to print in human-readable values. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762123#comment-13762123 ] Hadoop QA commented on HBASE-9347: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602177/HBASE-9347_trunk.01.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7094//console This message is automatically generated. Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch, HBASE-9347_trunk.01.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9458) Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure
[ https://issues.apache.org/jira/browse/HBASE-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9458: -- Attachment: 9458-v2.txt Intermittent TestFlushSnapshotFromClient#testTakeSnapshotAfterMerge failure --- Key: HBASE-9458 URL: https://issues.apache.org/jira/browse/HBASE-9458 Project: HBase Issue Type: Test Reporter: Ted Yu Assignee: Ted Yu Attachments: 9458-v1.txt, 9458-v2.txt From https://builds.apache.org/job/HBase-0.96/20/testReport/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testTakeSnapshotAfterMerge/ : {code} org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. Procedure snapshotAfterMerge { waiting=[] done=[] } at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79) at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:210) at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:221) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:125) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3120) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2672) at org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2605) at 
org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2612) at org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient.testTakeSnapshotAfterMerge(TestFlushSnapshotFromClient.java:336) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=snapshotAfterMerge table=test type=FLUSH } had an error. 
Procedure snapshotAfterMerge { waiting=[] done=[] } at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:365) at org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2947) at org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32890) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2146) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1851) Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via Failed taking snapshot { ss=snapshotAfterMerge table=test type=FLUSH } due to exception:Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e:org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for: 30b951996ef34885a2f5d64e4acb2467.6d1ed72bf95759cb606e8a6efdc6908e at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:85) at org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:318) at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:355) ... 4 more Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for:
[jira] [Updated] (HBASE-9338) Test Big Linked List fails on Hadoop 2.1.0
[ https://issues.apache.org/jira/browse/HBASE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-9338: - Attachment: (was: HBASE-HBASE-9338-TESTING.patch) Test Big Linked List fails on Hadoop 2.1.0 -- Key: HBASE-9338 URL: https://issues.apache.org/jira/browse/HBASE-9338 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Blocker Fix For: 0.96.0 Attachments: HBASE-9338-TESTING-2.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (HBASE-9338) Test Big Linked List fails on Hadoop 2.1.0
[ https://issues.apache.org/jira/browse/HBASE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762125#comment-13762125 ] Elliott Clark edited comment on HBASE-9338 at 9/9/13 6:25 PM: -- Here's the patch that I used to search for missed row keys. The row keys in the RowLogSearchJob are the ones that were referenced but not found. I found the keys in 5 different wal logs. (so it doesn't seem like it's an issue with reading the wal log) This was then checked like this: {code} hbase@a1805:/home/eclark$ hbase org.apache.hadoop.hbase.regionserver.wal.HLogPrettyPrinter -w \\x8E\\xF3\\xE1f\\x1Al\\x89d\\xD3\\xC1w5\\x9B\\x8FN hdfs://a1805.halxg.cloudera.com:8020/hbase/oldWALs/a1809.halxg.cloudera.com%2C60020%2C1378540278223.1378540795093 Row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/hbase/jenkins-hbase-custom-branch-34/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.1.0-beta/share/hadoop/common/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 2013-09-09 11:19:43,435 WARN [main] conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS Sequence 2846225 from region 8b3a9c2a567f1eca40d25859e9e56fa2 in table IntegrationTestBigLinkedList at write timestamp: Sat Sep 07 00:59:58 PDT 2013 Action: row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN column: meta:prev timestamp: Sat Sep 07 00:59:58 PDT 2013 Action: row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN column: meta:count timestamp: Sat Sep 07 00:59:58 PDT 2013 Action: row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN column: meta:client timestamp: Sat Sep 07 00:59:58 PDT 2013 {code} was (Author: eclark): Here's the patch that I used to search for missed row keys. 
The row keys in the RowLogSearchJob are the ones that were referenced but not found. I found the keys in 5 different wal logs. (os it doesn't seem like it's an issue with reading the wal log) This was then checked like this: {code} hbase@a1805:/home/eclark$ hbase org.apache.hadoop.hbase.regionserver.wal.HLogPrettyPrinter -w \\x8E\\xF3\\xE1f\\x1Al\\x89d\\xD3\\xC1w5\\x9B\\x8FN hdfs://a1805.halxg.cloudera.com:8020/hbase/oldWALs/a1809.halxg.cloudera.com%2C60020%2C1378540278223.1378540795093 Row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/hbase/jenkins-hbase-custom-branch-34/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.1.0-beta/share/hadoop/common/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 2013-09-09 11:19:43,435 WARN [main] conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS Sequence 2846225 from region 8b3a9c2a567f1eca40d25859e9e56fa2 in table IntegrationTestBigLinkedList at write timestamp: Sat Sep 07 00:59:58 PDT 2013 Action: row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN column: meta:prev timestamp: Sat Sep 07 00:59:58 PDT 2013 Action: row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN column: meta:count timestamp: Sat Sep 07 00:59:58 PDT 2013 Action: row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN column: meta:client timestamp: Sat Sep 07 00:59:58 PDT 2013 {code} Test Big Linked List fails on Hadoop 2.1.0 -- Key: HBASE-9338 URL: https://issues.apache.org/jira/browse/HBASE-9338 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Blocker Fix For: 0.96.0 Attachments: HBASE-9338-TESTING-2.patch -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9440) Pass blocks of KVs from HFile scanner to the StoreFileScanner and up
[ https://issues.apache.org/jira/browse/HBASE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762058#comment-13762058 ] Lars Hofhansl commented on HBASE-9440: -- I have not thought about this yet. Ideally all the next(...) methods on the scanners (at least StoreScanner and StoreFileScanner) would have a version that returns a sorted KeyValue[]. HFileScanner is a bit weird in that you have to call next() and then call getKeyValue to get the current KV, but if StoreFileScanner could just call this repeatedly and pass a block up, that would be good enough. Next: Test HFileScanner.next followed by getKeyValue() directly, to see what the expected maximum throughput should be. Pass blocks of KVs from HFile scanner to the StoreFileScanner and up Key: HBASE-9440 URL: https://issues.apache.org/jira/browse/HBASE-9440 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Currently we read KVs from an HFileScanner one-by-one and pass them up the scanner/heap tree. Many times the ranges of KVs retrieved from StoreFileScanner (by StoreScanners) and HFileScanner (by StoreFileScanner) will be non-overlapping. If chunks of KVs do not overlap, we can sort entire chunks just by comparing the start/end key of the chunk. Only if chunks are overlapping do we need to sort KV by KV as we do now. I have no patch, but I wanted to float this idea. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
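Lars's chunk idea can be sketched independently of the scanner classes: if two sorted chunks don't overlap, a single start/end comparison replaces the element-wise merge (illustrative code, not HBase's scanner heap):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the idea in HBASE-9440: when two sorted chunks of keys do not
// overlap, they can be concatenated after one start/end comparison instead
// of being merged key by key. Names are illustrative, not HBase code.
public class ChunkMergeDemo {

    static boolean nonOverlapping(List<String> a, List<String> b) {
        // a entirely precedes b when a's last key sorts before b's first key
        return a.get(a.size() - 1).compareTo(b.get(0)) < 0;
    }

    static List<String> merge(List<String> a, List<String> b) {
        List<String> out = new ArrayList<>();
        if (nonOverlapping(a, b)) {          // one comparison, then bulk copy
            out.addAll(a);
            out.addAll(b);
            return out;
        }
        if (nonOverlapping(b, a)) {
            out.addAll(b);
            out.addAll(a);
            return out;
        }
        // Overlapping ranges: fall back to the usual element-wise merge.
        int i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            out.add(a.get(i).compareTo(b.get(j)) <= 0 ? a.get(i++) : b.get(j++));
        }
        while (i < a.size()) out.add(a.get(i++));
        while (j < b.size()) out.add(b.get(j++));
        return out;
    }
}
```

The win is that the common case (non-overlapping ranges from different store files) costs one comparison per chunk rather than one per KV.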
[jira] [Updated] (HBASE-9462) HBaseAdmin#isTableEnabled() should return false for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9462: -- Attachment: 9462-v2.patch HBaseAdmin#isTableEnabled() should return false for non-existent table -- Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.96.0 Attachments: 9462.patch, 9462-v2.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9338) Test Big Linked List fails on Hadoop 2.1.0
[ https://issues.apache.org/jira/browse/HBASE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-9338: - Attachment: HBASE-9338-TESTING-2.patch Here's the patch that I used to search for missed row keys. The row keys in the RowLogSearchJob are the ones that were referenced but not found. I found the keys in 5 different wal logs. (so it doesn't seem like it's an issue with reading the wal log) This was then checked like this: {code} hbase@a1805:/home/eclark$ hbase org.apache.hadoop.hbase.regionserver.wal.HLogPrettyPrinter -w \\x8E\\xF3\\xE1f\\x1Al\\x89d\\xD3\\xC1w5\\x9B\\x8FN hdfs://a1805.halxg.cloudera.com:8020/hbase/oldWALs/a1809.halxg.cloudera.com%2C60020%2C1378540278223.1378540795093 Row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/hbase/jenkins-hbase-custom-branch-34/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.1.0-beta/share/hadoop/common/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 2013-09-09 11:19:43,435 WARN [main] conf.Configuration: fs.default.name is deprecated. 
Instead, use fs.defaultFS Sequence 2846225 from region 8b3a9c2a567f1eca40d25859e9e56fa2 in table IntegrationTestBigLinkedList at write timestamp: Sat Sep 07 00:59:58 PDT 2013 Action: row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN column: meta:prev timestamp: Sat Sep 07 00:59:58 PDT 2013 Action: row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN column: meta:count timestamp: Sat Sep 07 00:59:58 PDT 2013 Action: row: \x8E\xF3\xE1f\x1Al\x89d\xD3\xC1w5\x9B\x8FN column: meta:client timestamp: Sat Sep 07 00:59:58 PDT 2013 {code} Test Big Linked List fails on Hadoop 2.1.0 -- Key: HBASE-9338 URL: https://issues.apache.org/jira/browse/HBASE-9338 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Blocker Fix For: 0.96.0 Attachments: HBASE-9338-TESTING-2.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9462) HBaseAdmin#isTableEnabled() should return false for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9462: -- Attachment: 9462.patch HBaseAdmin#isTableEnabled() should return false for non-existent table -- Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.96.0 Attachments: 9462.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HBASE-9462) HBaseAdmin#isTableEnabled() should return false for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-9462: - Assignee: Ted Yu HBaseAdmin#isTableEnabled() should return false for non-existent table -- Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.96.0 Attachments: 9462.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9452) Simplify the configuration of the multicast notifier
[ https://issues.apache.org/jira/browse/HBASE-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762092#comment-13762092 ] Hudson commented on HBASE-9452: --- SUCCESS: Integrated in HBase-TRUNK #4479 (See [https://builds.apache.org/job/HBase-TRUNK/4479/]) HBASE-9452 Simplify the configuration of the multicast notifier (nkeywal: rev 1520999) * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java Simplify the configuration of the multicast notifier Key: HBASE-9452 URL: https://issues.apache.org/jira/browse/HBASE-9452 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.98.0, 0.96.0 Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9452.v1.patch As JD pointed it out, we not consistent in the naming. As well, it could be simpler to make it run. patch for trunk, but I would like to put it in the next 0.96 rc as well. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9469) Synchronous replication
[ https://issues.apache.org/jira/browse/HBASE-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762106#comment-13762106 ] Jean-Daniel Cryans commented on HBASE-9469: --- [~lhofhansl] has a blog post that covers this: http://hadoop-hbase.blogspot.com/2012/01/replication-for-ha-and-dr.html Basically you need to enable KEEP_DELETED_CELLS on your families. I have a draft for a new piece of documentation that we could add to the ref guide that I should probably contribute :) Synchronous replication is still a feature we could add though. Synchronous replication --- Key: HBASE-9469 URL: https://issues.apache.org/jira/browse/HBASE-9469 Project: HBase Issue Type: New Feature Reporter: Feng Honghua Priority: Minor Scenario: A/B clusters with master-master replication, client writes to A cluster and A pushes all writes to B cluster, and when A cluster is down, client switches writing to B cluster. But the client's write switch is unsafe because the replication between A/B is asynchronous: a delete to B cluster which aims to delete a put written earlier can fail because that put was written to A cluster and wasn't successfully pushed to B before A went down. It can be worse: if this delete is collected (a flush and then a major compact occurs) before A cluster is up and that put is eventually pushed to B, the put won't ever be deleted. Can we provide per-table/per-peer synchronous replication which ships the corresponding hlog entry of a write before responding with write success to the client? By this we can guarantee to the client that all write requests for which it got a success response from A cluster are already in B cluster as well. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9328) Table web UI is corrupted sometime
[ https://issues.apache.org/jira/browse/HBASE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-9328: --- Status: Open (was: Patch Available) Table web UI is corrupted sometime -- Key: HBASE-9328 URL: https://issues.apache.org/jira/browse/HBASE-9328 Project: HBase Issue Type: Bug Affects Versions: 0.94.11, 0.95.2, 0.98.0 Reporter: Jimmy Xiang Assignee: Jean-Marc Spaggiari Labels: web-ui Attachments: HBASE-9328-v0-trunk.patch, HBASE-9328-v1-trunk.patch, HBASE-9328-v2-0.94.patch, HBASE-9328-v2-trunk.patch, HBASE-9328-v3-trunk.patch, HBASE-9328-v4-trunk.patch, table.png The web UI page source is like below: {noformat} h2Table Attributes/h2 table class=table table-striped tr thAttribute Name/th thValue/th thDescription/th /tr tr tdEnabled/td tdtrue/td tdIs the table enabled/td /tr tr tdCompaction/td td phr//p {noformat} Not sure if it is an HBase issue, or a network/browser issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9328) Table web UI is corrupted sometime
[ https://issues.apache.org/jira/browse/HBASE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762165#comment-13762165 ] Jean-Marc Spaggiari commented on HBASE-9328: Thanks for looking at it Jimmy. I was also wondering where this error was coming from... I will trigger Hadoop QA again and see. Table web UI is corrupted sometime -- Key: HBASE-9328 URL: https://issues.apache.org/jira/browse/HBASE-9328 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.95.2, 0.94.11 Reporter: Jimmy Xiang Assignee: Jean-Marc Spaggiari Labels: web-ui Attachments: HBASE-9328-v0-trunk.patch, HBASE-9328-v1-trunk.patch, HBASE-9328-v2-0.94.patch, HBASE-9328-v2-trunk.patch, HBASE-9328-v3-trunk.patch, HBASE-9328-v4-trunk.patch, table.png The web UI page source is like below: {noformat} h2Table Attributes/h2 table class=table table-striped tr thAttribute Name/th thValue/th thDescription/th /tr tr tdEnabled/td tdtrue/td tdIs the table enabled/td /tr tr tdCompaction/td td phr//p {noformat} Not sure if it is an HBase issue, or a network/browser issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9245) Remove dead or deprecated code from hbase 0.96
[ https://issues.apache.org/jira/browse/HBASE-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762140#comment-13762140 ] Jonathan Hsieh commented on HBASE-9245: --- Here's motivation for 0.96 vs 0.98: 1) There would be perf degradation due to api shimming at multiple locations that actually affects the common case (see HBASE-9359, and some of the grossness in the filter api shim). 2) There are many other apis that have been removed already, that break existing apps, so it seemed better to get them all at this time instead of guaranteeing that it will happen again. 3) I believe the changes are fairly minimal (a type change) and efforts are being made to minimize the impact of this type change. 4) Not doing it now blocks a whole class of optimizations from coming in as minor feature additions until we have another chance to break the api. Here's motivation for the overall removal of KV from the client/common api: In 0.94, KV is a concrete implementation that is present on the serverside and the client side. The internal structure and layout are exposed such that all kvs are locked into being a single contiguous array/base pointer (for the entire KV) with offsets and lengths into this base pointer for each of the major fields (row, fam, qual, value, ts). This means each kv has a full copy of all these fields. In 0.96 we've introduced encodings that break the single base pointer assumption. The Cell interface exposes what is essentially multiple base pointers (one for row, fam, qual, val, just returning long for ts). This will allow us to use more efficient encodings that can share rows/fams/quals from multiple KVs within a single array. Currently in 0.96 there are two implementations of Cell -- KeyValue (a cell backed by a flat contiguous array), and PrefixTreeCell (a cell that uses multiple base pointers to share prefixes). 
Currently the PrefixTreeCell is only on the RS side (actually only at the store file level, I believe) and we have to do a bunch of interpreting on the RS side to convert to KVs, and then ship those to clients. By changing the client/common API to use only the Cell interface, we decouple the interface from the implementation. This opens opportunities to push KV encodings up from the HFile level into the scanners, and gives us the flexibility to send encoded kvs to the client. It is important to do the client api first since these interfaces will be the longest living; other changes from here on will likely be internal to RSs or only additions to the rpc protocol, which should not break compatibility as future 0.96 api+wire compatible hbases come around. Remove dead or deprecated code from hbase 0.96 -- Key: HBASE-9245 URL: https://issues.apache.org/jira/browse/HBASE-9245 Project: HBase Issue Type: Bug Reporter: Jonathan Hsieh This is an umbrella issue that will cover the removal or refactoring of dangling dead code and cruft. Some can make it into 0.96, some may have to wait for 0.98. The great culling of code will be grouped into patches that are logically related. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
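The single-base-pointer vs. multiple-base-pointer distinction above can be illustrated with a minimal self-contained sketch. This is not HBase's actual Cell API; the interface below is cut down to two fields, and both implementations (a KeyValue-style flat cell and a prefix-sharing cell) are hypothetical stand-ins showing why callers coded against the interface don't care about the backing layout.

```java
// Minimal sketch (not HBase's real API): a cell interface exposing
// per-field base pointers, with two hypothetical implementations.
interface SketchCell {
    byte[] getRowArray();   int getRowOffset();   int getRowLength();
    byte[] getValueArray(); int getValueOffset(); int getValueLength();
}

// KeyValue-style: one contiguous buffer holds row then value, so every
// cell carries a full copy of all its fields.
class FlatCell implements SketchCell {
    private final byte[] buf;
    private final int rowLen;
    FlatCell(byte[] row, byte[] value) {
        buf = new byte[row.length + value.length];
        System.arraycopy(row, 0, buf, 0, row.length);
        System.arraycopy(value, 0, buf, row.length, value.length);
        rowLen = row.length;
    }
    public byte[] getRowArray()   { return buf; }
    public int getRowOffset()     { return 0; }
    public int getRowLength()     { return rowLen; }
    public byte[] getValueArray() { return buf; }
    public int getValueOffset()   { return rowLen; }
    public int getValueLength()   { return buf.length - rowLen; }
}

// Prefix-tree-style: the row bytes live in a shared array that many
// cells can point at, breaking the single-base-pointer assumption.
class SharedRowCell implements SketchCell {
    private final byte[] sharedRow;
    private final byte[] value;
    SharedRowCell(byte[] sharedRow, byte[] value) {
        this.sharedRow = sharedRow;
        this.value = value;
    }
    public byte[] getRowArray()   { return sharedRow; }
    public int getRowOffset()     { return 0; }
    public int getRowLength()     { return sharedRow.length; }
    public byte[] getValueArray() { return value; }
    public int getValueOffset()   { return 0; }
    public int getValueLength()   { return value.length; }
}

public class CellSketch {
    // Callers that code against the interface work with either layout.
    public static int valueLen(SketchCell c) { return c.getValueLength(); }

    public static void main(String[] args) {
        byte[] row = {1, 2};
        byte[] v = {9, 9, 9};
        System.out.println(valueLen(new FlatCell(row, v)));
        System.out.println(valueLen(new SharedRowCell(row, v)));
    }
}
```

The point of the sketch is the decoupling argued for in the comment: once clients consume only the interface, a region server is free to hand back prefix-shared cells without converting each one to a flat KeyValue first.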
[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762161#comment-13762161 ] Jonathan Hsieh commented on HBASE-9359: --- So we should rename the Cell raw() to Cell rawCells() and provide a deprecated and inefficient KeyValue[] raw() method. For one dependent system we were talking about backporting parts of the cell api back to 0.94's kv (#getValueArray, #getRowArray, #getFamilyArray, #getColumArray) so that a conversion could be done in 0.94 in a way that is mostly ready to work in 0.96. If [~lhofhansl] is amenable to that I could get that done, hopefully in the next week. Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter -- Key: HBASE-9359 URL: https://issues.apache.org/jira/browse/HBASE-9359 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.98.0, 0.96.0 Attachments: hbase-9334-9359.v4.patch, hbase-9359-9334.v5.patch, hbase-9359-9334.v6.patch, hbase-9359.patch, hbase-9359.v2.patch, hbase-9359.v3.patch, hbase-9359.v5.patch, hbase-9359.v6.patch This patch is the second half of eliminating KeyValue from the client interfaces. This percolated through quite a bit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
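The rename-plus-deprecate shim proposed in the comment can be sketched as follows. The names `rawCells()`/`raw()` come from the discussion above, but everything else here (the nested Cell/KeyValue stand-ins, the Result constructor) is hypothetical and deliberately tiny; the real HBase types carry far more state.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed shim: Result keeps an efficient
// Cell-based accessor and a deprecated KeyValue-based one for old clients.
public class ResultShimSketch {
    public interface Cell { byte[] getValueArray(); }

    public static class KeyValue implements Cell {
        private final byte[] value;
        public KeyValue(byte[] value) { this.value = value; }
        public byte[] getValueArray() { return value; }
    }

    public static class Result {
        private final Cell[] cells;
        public Result(Cell[] cells) { this.cells = cells; }

        /** Preferred accessor: no copying, returns the backing cells. */
        public Cell[] rawCells() { return cells; }

        /** Deprecated shim: materializes a KeyValue copy per cell, which
         *  is why the comment calls it inefficient. */
        @Deprecated
        public KeyValue[] raw() {
            List<KeyValue> kvs = new ArrayList<>();
            for (Cell c : cells) {
                kvs.add(new KeyValue(c.getValueArray().clone()));
            }
            return kvs.toArray(new KeyValue[0]);
        }
    }

    public static void main(String[] args) {
        Result r = new Result(new Cell[] { new KeyValue(new byte[] {7}) });
        System.out.println(r.rawCells().length);
        System.out.println(r.raw()[0].getValueArray()[0]);
    }
}
```

Old client code calling `raw()` keeps compiling (with a deprecation warning and a copy cost), while new code migrates to `rawCells()` and the Cell interface.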
[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns
[ https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762168#comment-13762168 ] Hudson commented on HBASE-8930: --- SUCCESS: Integrated in HBase-0.94-security #287 (See [https://builds.apache.org/job/HBase-0.94-security/287/]) HBASE-8930 REVERT due to test issues (larsh: rev 1521219) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java Filter evaluates KVs outside requested columns -- Key: HBASE-8930 URL: https://issues.apache.org/jira/browse/HBASE-8930 Project: HBase Issue Type: Bug Components: Filters Affects Versions: 0.94.7 Reporter: Federico Gaule Assignee: Vasu Mariyala Priority: Critical Labels: filters, hbase, keyvalue Fix For: 0.98.0, 0.94.13, 0.96.1 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 0.96-HBASE-8930-rev2.patch, 8930-0.94.txt, HBASE-8930.patch, HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, HBASE-8930-rev4.patch, HBASE-8930-rev5.patch 1- Fill row with some columns 2- Get row with some columns less than universe - Use filter to print kvs 3- Filter prints not requested columns Filter (AllwaysNextColFilter) always return ReturnCode.INCLUDE_AND_NEXT_COL and prints KV's qualifier SUFFIX_0 = 0 SUFFIX_1 = 1 SUFFIX_4 = 4 SUFFIX_6 = 6 P= 
Persisted R= Requested E= Evaluated X= Returned
| 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
| | P | P | | | P | P | | | P | P | |...
| | R | R | R | | R | R | R | | | | |...
| | E | E | | | E | E | | | {color:red}E{color} | | |...
| | X | X | | | X | X | | | | | |
{code:title=ExtraColumnTest.java|borderStyle=solid}
@Test
public void testFilter() throws Exception {
  Configuration config = HBaseConfiguration.create();
  config.set(hbase.zookeeper.quorum, myZK);
  HTable hTable = new HTable(config, testTable);
  byte[] cf = Bytes.toBytes(cf);
  byte[] row = Bytes.toBytes(row);
  byte[] col1 = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
  byte[] col2 = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
  byte[] col3 = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
  byte[] col4 = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
  byte[] col5 = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
  byte[] col6 = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
  byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
  byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
  byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
  byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
  byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
  byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
  byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
  byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_4));
  // === INSERTION =//
  Put put = new Put(row);
  put.add(cf, col1, Bytes.toBytes((short) 1));
  put.add(cf, col2, Bytes.toBytes((short) 1));
  put.add(cf, col3,
[jira] [Commented] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
[ https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762171#comment-13762171 ] Hudson commented on HBASE-9436: --- SUCCESS: Integrated in hbase-0.96 #24 (See [https://builds.apache.org/job/hbase-0.96/24/]) HBASE-9436 hbase.regionserver.handler.count default (nkeywal: rev 1521169) * /hbase/branches/0.96/hbase-server/src/test/resources/hbase-site.xml * /hbase/branches/0.96/src/main/docbkx/configuration.xml hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one - Key: HBASE-9436 URL: https://issues.apache.org/jira/browse/HBASE-9436 Project: HBase Issue Type: Bug Reporter: Nicolas Liochon Assignee: Nicolas Liochon Priority: Minor Fix For: 0.98.0, 0.96.0 Attachments: 9436.v1.patch, 9436.v2.patch Below is what we have today. I vote for 10.
configuration.xml: "The default of 10 is rather low"
common/hbase-site: <name>hbase.regionserver.handler.count</name> <value>30</value>
server/hbase-site: <value>5</value> <description>Count of RPC Server instances spun up on RegionServers. Same property is used by the HMaster for count of master handlers. Default is 10.</description>
===
HMaster.java: int numHandlers = conf.getInt("hbase.master.handler.count", conf.getInt("hbase.regionserver.handler.count", 25));
HRegionServer.java: "hbase.regionserver.handler.count": conf.getInt("hbase.regionserver.handler.count", 10),
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
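The inconsistency this issue fixes comes from the nested-default pattern in the HMaster.java snippet quoted above: the master's handler count falls back to the region server setting, which in turn falls back to a hard-coded default. A self-contained sketch (the `Conf` class below is a map-backed stand-in for Hadoop's `Configuration`, not the real class) shows how the layering resolves:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the nested-default lookup quoted from HMaster.java, using a
// HashMap-backed stand-in for Hadoop's Configuration.
public class HandlerCountSketch {
    public static class Conf {
        private final Map<String, Integer> vals = new HashMap<>();
        public void setInt(String key, int v) { vals.put(key, v); }
        public int getInt(String key, int dflt) { return vals.getOrDefault(key, dflt); }
    }

    public static int masterHandlers(Conf conf) {
        // Mirrors: conf.getInt("hbase.master.handler.count",
        //            conf.getInt("hbase.regionserver.handler.count", 25));
        return conf.getInt("hbase.master.handler.count",
                conf.getInt("hbase.regionserver.handler.count", 25));
    }

    public static void main(String[] args) {
        Conf c = new Conf();
        System.out.println(masterHandlers(c)); // nothing set: hard-coded 25
        c.setInt("hbase.regionserver.handler.count", 30);
        System.out.println(masterHandlers(c)); // RS value used as fallback
        c.setInt("hbase.master.handler.count", 10);
        System.out.println(masterHandlers(c)); // explicit master value wins
    }
}
```

With four different literals in play (5 in the test site file, 10 in the code and docs, 25 in HMaster, 30 in the shipped site file), the effective value depended on which file happened to be on the classpath; picking one documented default removes that ambiguity.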
[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split
[ https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762145#comment-13762145 ] Hadoop QA commented on HBASE-9249: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12602181/HBASE-9249_v7.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7095//console This message is automatically generated. Add cp hook before setting PONR in split Key: HBASE-9249 URL: https://issues.apache.org/jira/browse/HBASE-9249 Project: HBase Issue Type: Sub-task Affects Versions: 0.98.0 Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0 Attachments: HBASE-9249.patch, HBASE-9249_v2.patch, HBASE-9249_v3.patch, HBASE-9249_v4.patch, HBASE-9249_v5.patch, HBASE-9249_v6.patch, HBASE-9249_v7.patch, HBASE-9249_v7.patch This hook helps to perform split on user region and corresponding index region such that both will be split or none. 
With this hook, the split for the user and index regions proceeds as follows.
user region ===
1) Create splitting znode for user region split
2) Close parent user region
3) Split user region storefiles
4) Instantiate child regions of user region
Through the new hook we can drive the index region transitions as below.
index region ===
5) Create splitting znode for index region split
6) Close parent index region
7) Split storefiles of index region
8) Instantiate child regions of the index region
If any of steps 5-8 fail, roll back those steps and return null; on a null return, throw an exception to roll back steps 1-4.
9) Set PONR
10) Do batch put of offline and split entries for user and index regions
index region ===
11) Open daughters of index regions and transition znode to split. This step we will do through the preSplitAfterPONR hook. Opening index regions before opening user regions helps to avoid put failures if there is a colocation mismatch (this can happen if the user regions have finished opening but the index regions are still opening).
user region ===
12) Open daughters of user regions and transition znode to split.
Even if the region server crashes, in the end both user and index regions will be split, or neither will be. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
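The rollback discipline described above (pre-PONR steps are individually reversible; a failure among the index-region steps undoes them and signals the caller to undo its own earlier steps; after the PONR nothing is rolled back) can be sketched generically. All names here are illustrative, not HBase's split transaction classes:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Schematic sketch of the pre-PONR rollback pattern: run steps in order,
// and on failure undo the completed ones latest-first.
public class SplitPonrSketch {
    public interface Step { void run() throws Exception; void rollback(); }

    /** Returns true if all steps ran; on failure rolls back completed
     *  steps and returns false so the caller can undo its own steps. */
    public static boolean runOrRollback(Deque<Step> done, Step... steps) {
        for (Step s : steps) {
            try {
                s.run();
                done.push(s);
            } catch (Exception e) {
                while (!done.isEmpty()) done.pop().rollback();
                return false; // caller rolls back its earlier steps (1-4)
            }
        }
        return true; // safe to proceed to the PONR; no rollback after that
    }

    /** Demo: two index-region steps succeed, the third fails, so both
     *  completed steps are undone. Returns "R:" + the run/undo log. */
    public static String demo() {
        StringBuilder log = new StringBuilder();
        Step ok = new Step() {
            public void run() { log.append("r"); }
            public void rollback() { log.append("u"); }
        };
        Step boom = new Step() {
            public void run() throws Exception { throw new Exception("fail"); }
            public void rollback() { }
        };
        boolean committed = runOrRollback(new ArrayDeque<Step>(), ok, ok, boom);
        return (committed ? "C:" : "R:") + log;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "R:rruu"
    }
}
```

This mirrors why the hook must fire before the PONR is set: once step 9 writes the point of no return, neither the user nor the index region steps may be unwound, so any failure in steps 5-8 has to surface earlier.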
[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter
[ https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762139#comment-13762139 ] Nicolas Liochon commented on HBASE-9359: The issue I have with ycsb is that I would like to keep compatibility with the previous hbase versions. I can do with two connectors, but it's not great. As well, I think it's representative of a simple client (while Writables are a little bit less simple, as they mean you're writing a filter). KeyValue is not deprecated in 0.94: we're going directly from standard to non-existent. Note that I don't advocate so much for doing a deprecate in 0.96 and a delete in 0.98: as we want to have two close releases, it's more work without much benefit. It would be interesting if someone could write his code in 0.94 in a way that won't break in 0.96. But even for something as simple as raw(), I'm not sure it's possible. I don't know how many applications out there are *not* using the API impacted by protobuf but *are* impacted by the KeyValue removal. If ycsb is the only one, it's an easy decision... Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter -- Key: HBASE-9359 URL: https://issues.apache.org/jira/browse/HBASE-9359 Project: HBase Issue Type: Sub-task Components: Client Affects Versions: 0.95.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.98.0, 0.96.0 Attachments: hbase-9334-9359.v4.patch, hbase-9359-9334.v5.patch, hbase-9359-9334.v6.patch, hbase-9359.patch, hbase-9359.v2.patch, hbase-9359.v3.patch, hbase-9359.v5.patch, hbase-9359.v6.patch This patch is the second half of eliminating KeyValue from the client interfaces. This percolated through quite a bit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns
[ https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762061#comment-13762061 ] Lars Hofhansl commented on HBASE-8930: -- A'right. Reverting from all branches in the next 10 mins unless I hear objections. Better to keep the codelines releasable and then regroup. Filter evaluates KVs outside requested columns -- Key: HBASE-8930 URL: https://issues.apache.org/jira/browse/HBASE-8930 Project: HBase Issue Type: Bug Components: Filters Affects Versions: 0.94.7 Reporter: Federico Gaule Assignee: Vasu Mariyala Priority: Critical Labels: filters, hbase, keyvalue Fix For: 0.98.0, 0.94.12, 0.96.1 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 0.96-HBASE-8930-rev2.patch, 8930-0.94.txt, HBASE-8930.patch, HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, HBASE-8930-rev4.patch, HBASE-8930-rev5.patch 1- Fill row with some columns 2- Get row with some columns less than universe - Use filter to print kvs 3- Filter prints not requested columns Filter (AllwaysNextColFilter) always return ReturnCode.INCLUDE_AND_NEXT_COL and prints KV's qualifier SUFFIX_0 = 0 SUFFIX_1 = 1 SUFFIX_4 = 4 SUFFIX_6 = 6 P= Persisted R= Requested E= Evaluated X= Returned | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... | | P | P | | | P | P | | | P | P | |... | | R | R | R | | R | R | R | | | | |... | | E | E | | | E | E | | | {color:red}E{color} | | |... 
| | X | X | | | X | X | | | | | |
{code:title=ExtraColumnTest.java|borderStyle=solid}
@Test
public void testFilter() throws Exception {
  Configuration config = HBaseConfiguration.create();
  config.set(hbase.zookeeper.quorum, myZK);
  HTable hTable = new HTable(config, testTable);
  byte[] cf = Bytes.toBytes(cf);
  byte[] row = Bytes.toBytes(row);
  byte[] col1 = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
  byte[] col2 = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
  byte[] col3 = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
  byte[] col4 = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
  byte[] col5 = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
  byte[] col6 = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
  byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
  byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
  byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
  byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
  byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
  byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
  byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
  byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_4));
  // === INSERTION =//
  Put put = new Put(row);
  put.add(cf, col1, Bytes.toBytes((short) 1));
  put.add(cf, col2, Bytes.toBytes((short) 1));
  put.add(cf, col3, Bytes.toBytes((short) 3));
  put.add(cf, col4, Bytes.toBytes((short) 3));
  put.add(cf, col5, Bytes.toBytes((short) 3));
  put.add(cf, col6, Bytes.toBytes((short) 3));
  hTable.put(put);
  put = new Put(row);
  put.add(cf, col1v, Bytes.toBytes((short) 10));
  put.add(cf, col2v, Bytes.toBytes((short) 10));
  put.add(cf, col3v, Bytes.toBytes((short) 10));
  put.add(cf, col4v, Bytes.toBytes((short) 10));
  put.add(cf, col5v, Bytes.toBytes((short) 10));
  put.add(cf, col6v, Bytes.toBytes((short) 10));
  hTable.put(put);
  hTable.flushCommits();
  //==READING=//
  Filter allwaysNextColFilter = new
[jira] [Updated] (HBASE-9328) Table web UI is corrupted sometime
[ https://issues.apache.org/jira/browse/HBASE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-9328: --- Status: Patch Available (was: Open) Table web UI is corrupted sometime -- Key: HBASE-9328 URL: https://issues.apache.org/jira/browse/HBASE-9328 Project: HBase Issue Type: Bug Affects Versions: 0.94.11, 0.95.2, 0.98.0 Reporter: Jimmy Xiang Assignee: Jean-Marc Spaggiari Labels: web-ui Attachments: HBASE-9328-v0-trunk.patch, HBASE-9328-v1-trunk.patch, HBASE-9328-v2-0.94.patch, HBASE-9328-v2-trunk.patch, HBASE-9328-v3-trunk.patch, HBASE-9328-v4-trunk.patch, table.png The web UI page source is like below:
{noformat}
<h2>Table Attributes</h2>
<table class="table table-striped">
<tr>
<th>Attribute Name</th>
<th>Value</th>
<th>Description</th>
</tr>
<tr>
<td>Enabled</td>
<td>true</td>
<td>Is the table enabled</td>
</tr>
<tr>
<td>Compaction</td>
<td>
<p><hr/></p>
{noformat}
Not sure if it is an HBase issue, or a network/browser issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9375) [REST] Querying row data gives all the available versions of a column
[ https://issues.apache.org/jira/browse/HBASE-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vandana Ayyalasomayajula updated HBASE-9375: Attachment: HBASE-9375_trunk.00.patch [REST] Querying row data gives all the available versions of a column - Key: HBASE-9375 URL: https://issues.apache.org/jira/browse/HBASE-9375 Project: HBase Issue Type: Bug Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9375.00.patch, HBASE-9375_trunk.00.patch In the hbase shell, when a user tries to get a value related to a column, hbase returns only the latest value. But using the REST API returns HColumnDescriptor.DEFAULT_VERSIONS versions by default. The behavior should be consistent with the hbase shell. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9462) HBaseAdmin#isTableEnabled() should return false for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762064#comment-13762064 ] Jean-Marc Spaggiari commented on HBASE-9462: Should it not throw an exception instead? Like TableDoesntExistException? HBaseAdmin#isTableEnabled() should return false for non-existent table -- Key: HBASE-9462 URL: https://issues.apache.org/jira/browse/HBASE-9462 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.96.0 Attachments: 9462.patch HBaseAdmin#isTableEnabled() returns true for a table which doesn't exist. We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
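The two behaviors under discussion (the patch's "return false for a missing table" versus the comment's "throw instead") can be sketched side by side. Everything here is hypothetical stand-in code, not HBaseAdmin; in both variants the existence check happens before the enabled-state lookup, which is the actual fix:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the two proposed semantics for isTableEnabled on a table
// that doesn't exist: lenient (return false) vs strict (throw).
public class IsTableEnabledSketch {
    public static class TableNotFound extends RuntimeException {
        public TableNotFound(String table) { super(table); }
    }

    private final Map<String, Boolean> enabled = new HashMap<>();

    public void create(String table) { enabled.put(table, true); }

    /** Patch's proposal: a non-existent table is simply "not enabled". */
    public boolean isTableEnabledLenient(String table) {
        Boolean e = enabled.get(table);
        return e != null && e;
    }

    /** Alternative raised in the comment: fail loudly instead. */
    public boolean isTableEnabledStrict(String table) {
        Boolean e = enabled.get(table);
        if (e == null) throw new TableNotFound(table);
        return e;
    }

    public static void main(String[] args) {
        IsTableEnabledSketch admin = new IsTableEnabledSketch();
        admin.create("t1");
        System.out.println(admin.isTableEnabledLenient("t1"));
        System.out.println(admin.isTableEnabledLenient("missing"));
        try {
            admin.isTableEnabledStrict("missing");
        } catch (TableNotFound e) {
            System.out.println("strict threw for: " + e.getMessage());
        }
    }
}
```

The trade-off is the usual one: the lenient form keeps `isTableEnabled` usable as a cheap predicate, while the strict form stops callers from silently conflating "disabled" with "doesn't exist".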
[jira] [Created] (HBASE-9473) Change UI to list 'system tables' rather than 'catalog tables'.
stack created HBASE-9473: Summary: Change UI to list 'system tables' rather than 'catalog tables'. Key: HBASE-9473 URL: https://issues.apache.org/jira/browse/HBASE-9473 Project: HBase Issue Type: Bug Components: UI Reporter: stack Assignee: stack Fix For: 0.96.0 Attachments: 9473.txt Minor, one-line, bit of polishing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9473) Change UI to list 'system tables' rather than 'catalog tables'.
[ https://issues.apache.org/jira/browse/HBASE-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9473: - Attachment: 9473.txt Change UI to list 'system tables' rather than 'catalog tables'. --- Key: HBASE-9473 URL: https://issues.apache.org/jira/browse/HBASE-9473 Project: HBase Issue Type: Bug Components: UI Reporter: stack Assignee: stack Fix For: 0.96.0 Attachments: 9473.txt Minor, one-line, bit of polishing. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9347) Support for enabling servlet filters for REST service
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vandana Ayyalasomayajula updated HBASE-9347: Attachment: HBASE-9347_trunk.02.patch Support for enabling servlet filters for REST service - Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch, HBASE-9347_trunk.01.patch, HBASE-9347_trunk.02.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9474) Cleanup of hbase script usage
stack created HBASE-9474: Summary: Cleanup of hbase script usage Key: HBASE-9474 URL: https://issues.apache.org/jira/browse/HBASE-9474 Project: HBase Issue Type: Bug Components: scripts Reporter: stack Assignee: stack Fix For: 0.96.0 Add in missing options. A little reformatting... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9474) Cleanup of hbase script usage
[ https://issues.apache.org/jira/browse/HBASE-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9474: - Attachment: 9474.txt Just cleans up usage. Cleanup of hbase script usage - Key: HBASE-9474 URL: https://issues.apache.org/jira/browse/HBASE-9474 Project: HBase Issue Type: Bug Components: scripts Reporter: stack Assignee: stack Fix For: 0.96.0 Attachments: 9474.txt Add in missing options. A little reformatting... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira