[jira] [Updated] (HBASE-4039) Users should be able to choose custom TableInputFormats without modifying TableMapReduceUtil.initTableMapperJob().
[ https://issues.apache.org/jira/browse/HBASE-4039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-4039: - Resolution: Fixed Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) Applied to trunk. Thanks for the patch, Brock (and the review, Ted). Users should be able to choose custom TableInputFormats without modifying TableMapReduceUtil.initTableMapperJob(). -- Key: HBASE-4039 URL: https://issues.apache.org/jira/browse/HBASE-4039 Project: HBase Issue Type: New Feature Components: mapreduce Affects Versions: 0.90.3 Environment: HBase-0.90.3. OS, hardware specs not relevant. Reporter: Edward Choi Assignee: Brock Noland Priority: Minor Fix For: 0.90.4 Attachments: HBASE-4039.1.patch Currently, org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob() forces any HBase job to use the default TableInputFormat.class as the job's input format class. The following line is hard-coded in initTableMapperJob():

job.setInputFormatClass(TableInputFormat.class);

This restriction forces users to modify initTableMapperJob() in addition to implementing their own TableInputFormat. It would be nicer if users could use custom TableInputFormats without tampering with HBase source code. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
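The fix pattern the issue asks for is a delegating overload: keep the existing entry point, but let callers pass the input-format class. A minimal, self-contained sketch of that pattern (Job and TableInputFormat below are stand-ins for the Hadoop/HBase types, not the real classes):

```java
// Sketch of the overload pattern; Job and TableInputFormat are stand-ins.
class InitOverloadSketch {
    static class Job {
        Class<?> inputFormatClass;
        void setInputFormatClass(Class<?> c) { inputFormatClass = c; }
    }

    static class TableInputFormat { }

    // Existing signature keeps its old behavior: the default format.
    static void initTableMapperJob(Job job) {
        initTableMapperJob(job, TableInputFormat.class);
    }

    // New overload: callers supply their own input format class,
    // so no HBase source needs to change.
    static void initTableMapperJob(Job job, Class<?> inputFormatClass) {
        job.setInputFormatClass(inputFormatClass);
    }
}
```

Existing callers compile unchanged against the one-argument form, while a custom format is just one extra argument.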
[jira] [Updated] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liu Jia updated HBASE-4181: --- Attachment: HBASE-4181.patch Some modifications. HConnectionManager can't find cached HRegionInterface makes client very slow Key: HBASE-4181 URL: https://issues.apache.org/jira/browse/HBASE-4181 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.4, 0.92.0 Reporter: Liu Jia Priority: Critical Labels: HConnectionManager Attachments: HBASE-4181.patch, HConnectionManager.patch

In HRegionInterface getHRegionConnection(final String hostname, final int port, final InetSocketAddress isa, final boolean master) throws IOException:

String rsName = isa != null ? isa.toString() : Addressing.createHostAndPortStr(hostname, port);

Here, if isa is null, Addressing creates an address string like node41:60010. It should use isa != null ? isa.toString() : new InetSocketAddress(hostname, port).toString() instead of Addressing.createHostAndPortStr(hostname, port).

server = this.servers.get(rsName);
if (server == null) {
  // create a unique lock for this RS (if necessary)
  this.connectionLock.putIfAbsent(rsName, rsName);
  // get the RS lock
  synchronized (this.connectionLock.get(rsName)) {
    // do one more lookup in case we were stalled above
    server = this.servers.get(rsName);
    if (server == null) {
      try {
        if (clusterId.hasId()) {
          conf.set(HConstants.CLUSTER_ID, clusterId.getId());
        }
        // Only create isa when we need to.
        InetSocketAddress address = isa != null ? isa : new InetSocketAddress(hostname, port);
        // definitely a cache miss. establish an RPC for this RS
        server = (HRegionInterface) HBaseRPC.waitForProxy(
            serverInterfaceClass, HRegionInterface.VERSION,
            address, this.conf,
            this.maxRPCAttempts, this.rpcTimeout, this.rpcTimeout);
        this.servers.put(address.toString(), server);
      } catch (RemoteException e) {
        LOG.warn("RemoteException connecting to RS", e);
        // Throw what the RemoteException was carrying.
        throw RemoteExceptionHandler.decodeRemoteException(e);
      }
    }
  }
}

But here address.toString() produces an address like node41/10.61.21.171:60010, so this method can never find the cached connection, which makes client requests very slow because the lookup path is synchronized. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
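The bug boils down to keying one cache with two different string forms of the same server address: the lookup uses a bare host:port string while the insert uses InetSocketAddress.toString(), which renders as host/ip:port once resolved. A small sketch of the fix idea, in plain Java with Object standing in for HRegionInterface: derive one canonical key and use it on both paths.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch only; Object stands in for the real HRegionInterface proxy.
class ConnectionCacheSketch {
    static final ConcurrentHashMap<String, Object> servers = new ConcurrentHashMap<>();

    // One canonical key form, used for both lookup and insert. Mixing this
    // with InetSocketAddress.toString() ("host/ip:port" when resolved) is
    // what made every lookup a permanent cache miss.
    static String rsKey(String hostname, int port) {
        return hostname + ":" + port;
    }

    static Object getConnection(String hostname, int port) {
        // computeIfAbsent creates the connection at most once per key
        return servers.computeIfAbsent(rsKey(hostname, port), k -> new Object());
    }
}
```

With a single key form, the second caller for the same region server gets the cached object instead of re-entering the synchronized proxy-creation path.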
[jira] [Commented] (HBASE-4169) FSUtils LeaseRecovery for non HDFS FileSystems.
[ https://issues.apache.org/jira/browse/HBASE-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081470#comment-13081470 ] Hudson commented on HBASE-4169: --- Integrated in HBase-TRUNK #2097 (See [https://builds.apache.org/job/HBase-TRUNK/2097/]) HBASE-4169 FSUtils LeaseRecovery for non HDFS FileSystems stack : Files : * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLog.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/FSMapRUtils.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/FSHDFSUtils.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java FSUtils LeaseRecovery for non HDFS FileSystems. --- Key: HBASE-4169 URL: https://issues.apache.org/jira/browse/HBASE-4169 Project: HBase Issue Type: Bug Components: util Affects Versions: 0.90.3, 0.90.4 Reporter: Lohit Vijayarenu Assignee: Lohit Vijayarenu Fix For: 0.92.0 Attachments: 4169-v4.txt, 4169-v5.txt, HBASE-4169.1.patch, HBASE-4169.2.patch, HBASE-4196.3.patch, HBASE-4196.3.v2.patch FSUtils.recoverFileLease uses HDFS's recoverLease method to get lease before splitting hlog file. This might not work for other filesystem implementations. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4179) Failed to run RowCounter on top of Hadoop branch-0.22
[ https://issues.apache.org/jira/browse/HBASE-4179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081469#comment-13081469 ] Hudson commented on HBASE-4179: --- Integrated in HBase-TRUNK #2097 (See [https://builds.apache.org/job/HBase-TRUNK/2097/]) HBASE-4179 Failed to run RowCounter on top of Hadoop branch-0.22 stack : Files : * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapred/Driver.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/Driver.java Failed to run RowCounter on top of Hadoop branch-0.22 - Key: HBASE-4179 URL: https://issues.apache.org/jira/browse/HBASE-4179 Project: HBase Issue Type: Bug Components: mapreduce Affects Versions: 0.92.0 Environment: Running hbase on top of hadoop branch-0.22 Reporter: Michael Weng Assignee: Michael Weng Fix For: 0.92.0 Attachments: HBASE-4179-trunk.patch

:~/hadoop$ HADOOP_CLASSPATH=`~/hbase/bin/hbase classpath` bin/hadoop jar ~/hbase/hbase-0.91.0-SNAPSHOT.jar rowcounter usertable
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.util.ProgramDriver.driver([Ljava/lang/String;)V
    at org.apache.hadoop.hbase.mapreduce.Driver.main(Driver.java:51)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:192)

-- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
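The NoSuchMethodError here is a binary-compatibility break: a compiled call site encodes the full method descriptor, return type included (the `...)V` in the error is a void return), so a driver() whose return type changed between Hadoop releases cannot be linked even though the source compiles against either version. One way to tolerate such a change, sketched below with a stand-in ProgramDriver class and not necessarily how the committed patch does it, is to dispatch reflectively, since Method lookup matches on name and parameter types only:

```java
import java.lang.reflect.Method;

class ReflectiveDriverSketch {
    // Stand-in for org.apache.hadoop.util.ProgramDriver; the real
    // driver(String[]) changed its return type between releases.
    static class ProgramDriver {
        public int driver(String[] args) { return args.length; }
    }

    // getMethod ignores the return type, so this call site keeps working
    // whether driver() returns void or int.
    static Object callDriver(Object pgd, String[] args) throws Exception {
        Method m = pgd.getClass().getMethod("driver", String[].class);
        // the cast to Object prevents String[] from being spread as varargs
        return m.invoke(pgd, (Object) args);
    }
}
```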
[jira] [Commented] (HBASE-4152) Rename o.a.h.h.regionserver.wal.WALObserver to o.a.h.h.regionserver.wal.WALActionsListener
[ https://issues.apache.org/jira/browse/HBASE-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081468#comment-13081468 ] Hudson commented on HBASE-4152: --- Integrated in HBase-TRUNK #2097 (See [https://builds.apache.org/job/HBase-TRUNK/2097/]) HBASE-4152 Rename o.a.h.h.regionserver.wal.WALObserver to o.a.h.h.regionserver.wal.WALActionsListener tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALObserver.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLog.java * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALObserver.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java Rename o.a.h.h.regionserver.wal.WALObserver to o.a.h.h.regionserver.wal.WALActionsListener --- Key: HBASE-4152 URL: https://issues.apache.org/jira/browse/HBASE-4152 Project: HBase Issue Type: Sub-task Components: regionserver Reporter: Andrew Purtell Assignee: Ted Yu Fix For: 0.92.0 Attachments: 4152.txt Rename o.a.h.h.regionserver.wal.WALObserver to o.a.h.h.regionserver.wal.WALActionsListener -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081471#comment-13081471 ] Liu Jia commented on HBASE-4181: @Ted With this line {noformat} String rsName = isa != null ? isa.toString() : Addressing.createHostAndPortStr(hostname, port); {noformat} if isa is not null, rsName will be initialized by isa.toString(). So this part and the following part should both be changed to use InetSocketAddress.createUnresolved(host, port), to avoid a DNS query: {noformat} this.servers.put(address.toString(), server); {noformat}
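The createUnresolved suggestion works because InetSocketAddress.createUnresolved builds the address object without any DNS lookup, so deriving the cache key is cheap and the key always has the bare host:port shape on every code path. A small sketch of a key helper built that way:

```java
import java.net.InetSocketAddress;

class UnresolvedAddressSketch {
    // createUnresolved never touches DNS; getHostString returns the literal
    // hostname that was passed in, so the key is stable and cheap to build.
    static String rsKey(String hostname, int port) {
        InetSocketAddress isa = InetSocketAddress.createUnresolved(hostname, port);
        return isa.getHostString() + ":" + isa.getPort();
    }
}
```

Note that relying on InetSocketAddress.toString() for the key is fragile in the other direction too: its output differs between resolved and unresolved instances.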
[jira] [Commented] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081472#comment-13081472 ] jirapos...@reviews.apache.org commented on HBASE-4181: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1402/ --- (Updated 2011-08-09 06:41:43.389550) Review request for hbase. Changes --- To avoid DNS query. Summary --- HConnectionManager can't find cached HRegionInterface makes client very slow

Addressing.createHostAndPortStr(hostname, port); // Addressing creates an address like node41:60010
...
this.servers.put(address.toString(), server); // but here address.toString() produces an address like node41/10.61.21.171:60010

This addresses bug HBASE-4181. https://issues.apache.org/jira/browse/HBASE-4181 Diffs (updated) - http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java 1155226 Diff: https://reviews.apache.org/r/1402/diff Testing --- Tests passed locally. Thanks, Jia
[jira] [Updated] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liu Jia updated HBASE-4181: --- Attachment: HBASE-4181-trunk-v2.patch HConnectionManager can't find cached HRegionInterface makes client very slow Key: HBASE-4181 URL: https://issues.apache.org/jira/browse/HBASE-4181 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.4, 0.92.0 Reporter: Liu Jia Priority: Critical Labels: HConnectionManager Attachments: HBASE-4181-trunk-v2.patch, HBASE-4181.patch, HConnectionManager.patch
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081476#comment-13081476 ] jirapos...@reviews.apache.org commented on HBASE-4176: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1326/ --- (Updated 2011-08-09 06:46:26.124460) Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray. Changes --- 1. Added author and date to the javadocs and put in a reference to the relevant JIRA 2. Fixed some minor formatting and spelling issues Summary --- https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. With this patch, I am trying to add support for all the filters in a clean way. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language This patch is trying to extend and further the progress made by the patches in HBASE-1744 There is document attached to the HBASE-4176 JIRA that describes this patch in further detail This addresses bug HBASE-4176. 
https://issues.apache.org/jira/browse/HBASE-4176 Diffs (updated) - /src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java PRE-CREATION /src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java PRE-CREATION /src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java 1155098 /src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java PRE-CREATION Diff: 
https://reviews.apache.org/r/1326/diff Testing --- patch includes one test: TestParseFilter.java Thanks, Anirudh Exposing HBase Filters to the Thrift API Key: HBASE-4176 URL: https://issues.apache.org/jira/browse/HBASE-4176 Project: HBase Issue Type: Improvement Components: thrift Reporter: Anirudh Todi Assignee: Anirudh Todi Priority: Minor Attachments: Filter Language.docx, HBASE-4176.patch Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. With this patch, I am trying to add support for all the filters in a clean way. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language This patch is trying to extend and further the progress made by the patches in the HBASE-1744 JIRA (https://issues.apache.org/jira/browse/HBASE-1744) -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081478#comment-13081478 ] Anirudh Todi commented on HBASE-4176: - @Stack and @Ted - thanks for reviewing the patch! The patch does not depend on HBASE-1744. @Stack - I am not changing the meaning of LESS_OR_EQUAL, GREATER, etc. - if you look at the existing version of CompareFilter, LESS_OR_EQUAL seems to correspond to < instead of <=, and vice versa for LESS. The same goes for GREATER and GREATER_OR_EQUAL, and for EQUAL and NOT_EQUAL. @Stack - when you say you should be able to specify a filter using the full package name, does that mean the filter string could look like: {noformat} org.apache.hadoop.hbase.filter.KeyOnlyFilter() {noformat} @Stack - I thought a great deal about how to implement this without hard-coding the names of the filters I am supporting, but I was unable to come up with anything. If you or anyone else has an idea, I'd love to hear it. @Stack - I would have to look into how we could extend this mini-language to allow one to construct Filter objects from the shell. It's an interesting idea, though!
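The core of the proposal is turning a user-supplied string such as PrefixFilter('row1') into a filter name plus arguments on the server side. The real parser in the patch (ParseFilter.java) handles operators, combinators, and multiple arguments; the toy sketch below only splits a single call-style token, to show the shape of the idea (the grammar here is hypothetical, not the patch's actual one):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class FilterStringSketch {
    // Hypothetical single-argument grammar: Name('arg'). The patch's real
    // grammar is richer (AND/OR combinators, multiple arguments, escaping).
    private static final Pattern CALL = Pattern.compile("(\\w+)\\('([^']*)'\\)");

    // Returns {filterName, argument} or throws on malformed input.
    static String[] parse(String s) {
        Matcher m = CALL.matcher(s.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("unparseable filter string: " + s);
        }
        return new String[] { m.group(1), m.group(2) };
    }
}
```

A name-to-constructor registry (or Class.forName against a known package, as the full-package-name question above suggests) could then map the parsed name to a Filter instance.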
[jira] [Reopened] (HBASE-4169) FSUtils LeaseRecovery for non HDFS FileSystems.
[ https://issues.apache.org/jira/browse/HBASE-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu reopened HBASE-4169: - Sorry guys. In a hurry, I ended up adding the wrong FSMapRUtils. I will update the patch against trunk soon. Could we please include this?
[jira] [Updated] (HBASE-4169) FSUtils LeaseRecovery for non HDFS FileSystems.
[ https://issues.apache.org/jira/browse/HBASE-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lohit Vijayarenu updated HBASE-4169: Attachment: 4169-correction.txt Could we please apply this? I have tested this internally.
[jira] [Updated] (HBASE-4027) Enable direct byte buffers LruBlockCache
[ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Li Pi updated HBASE-4027: - Attachment: hbase-4027-v10.diff Enable direct byte buffers LruBlockCache Key: HBASE-4027 URL: https://issues.apache.org/jira/browse/HBASE-4027 Project: HBase Issue Type: Improvement Reporter: Jason Rutherglen Assignee: Li Pi Priority: Minor Attachments: 4027-v5.diff, 4027v7.diff, HBase-4027 (1).pdf, HBase-4027.pdf, HBase4027v8.diff, HBase4027v9.diff, hbase-4027-v10.diff, hbase-4027v6.diff, slabcachepatch.diff, slabcachepatchv2.diff, slabcachepatchv3.1.diff, slabcachepatchv3.2.diff, slabcachepatchv3.diff, slabcachepatchv4.5.diff, slabcachepatchv4.diff Java offers the creation of direct byte buffers, which are allocated outside of the heap. They need to be manually freed, which can be accomplished using an undocumented {{clean}} method. The feature will be optional. After implementing, we can benchmark for differences in speed and garbage collection behavior. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
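The off-heap allocation the issue describes is plain NIO; only the eager release is exotic. A minimal sketch of allocating and using a direct buffer (the eager clean step is deliberately omitted here because it goes through undocumented, JDK-specific internals; this sketch simply lets the GC reclaim the buffer):

```java
import java.nio.ByteBuffer;

class DirectBufferSketch {
    // allocateDirect places the buffer's storage outside the Java heap, so
    // blocks cached this way do not add to GC pressure. Freeing it eagerly
    // requires undocumented internals; otherwise it is reclaimed when the
    // ByteBuffer object itself is garbage collected.
    static ByteBuffer allocateBlock(int size) {
        return ByteBuffer.allocateDirect(size);
    }
}
```

Reads and writes through the buffer work exactly as with heap buffers; only the backing storage differs.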
[jira] [Commented] (HBASE-4027) Enable direct byte buffers LruBlockCache
[ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081498#comment-13081498 ] jirapos...@reviews.apache.org commented on HBASE-4027: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1214/ --- (Updated 2011-08-09 07:52:09.666709) Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, Jonathan Gray, and Li Pi. Changes --- Rebased onto 3857. Configuration added as well. Summary --- Review request - I apparently can't edit tlipcon's earlier posting of my diff, so creating a new one. This addresses bug HBase-4027. https://issues.apache.org/jira/browse/HBase-4027 Diffs (updated) - CHANGES.txt 7c6f592 conf/hbase-env.sh 2d55d27 pom.xml 0f24681 src/docbkx/book.xml 2c19cef src/docbkx/configuration.xml 3595e76 src/docbkx/developer.xml a3e22ea src/docbkx/performance.xml d8e104f src/docbkx/troubleshooting.xml 9f93cd9 src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java 2d4002c src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java 097dc50 src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockCacheTestUtils.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java 88aa652 src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java 886c31d src/main/java/org/apache/hadoop/hbase/io/hfile/SingleSizeCacheTestUtils.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabItemEvictionWatcher.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/ipc/Delayable.java 56f0650 
src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java f223e46 src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java 6fb1da7 src/main/java/org/apache/hadoop/hbase/mapred/Driver.java d38956c src/main/java/org/apache/hadoop/hbase/mapreduce/Driver.java dda4241 src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java ff05df8 src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java 86652c0 src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java 8e87c83 src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java 94c8bb4 src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java 887f736 src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java 9d87fa5 src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java 151c90b src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALObserver.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java 8e1f6e0 src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/util/FSHDFSUtils.java dcd0937 src/main/java/org/apache/hadoop/hbase/util/FSMapRUtils.java edca7f4 src/main/java/org/apache/hadoop/hbase/util/FSUtils.java 0729410 src/main/java/org/apache/hadoop/hbase/util/PoolMap.java ebf3078 src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java a0bef34 src/main/ruby/hbase/admin.rb 4460d6e src/site/xdoc/index.xml 8e1b531 src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSingleSizeCache.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSlab.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSlabCache.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/ipc/TestDelayedRpc.java 0b21c6c src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java 6d83b00 src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java 4387170 
src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLog.java b4c407b src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java dc43eb2 src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALObserver.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java 381ac90 src/test/java/org/apache/hadoop/hbase/util/TestPoolMap.java 2c565d7 Diff: https://reviews.apache.org/r/1214/diff Testing --- Ran benchmarks against it in HBase standalone mode. Wrote test cases for all classes; multithreaded test cases exist for the cache. Thanks, Li
[jira] [Commented] (HBASE-4039) Users should be able to choose custom TableInputFormats without modifying TableMapReduceUtil.initTableMapperJob().
[ https://issues.apache.org/jira/browse/HBASE-4039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081511#comment-13081511 ] Hudson commented on HBASE-4039: --- Integrated in HBase-TRUNK #2098 (See [https://builds.apache.org/job/HBase-TRUNK/2098/]) HBASE-4039 Users should be able to choose custom TableInputFormats without modifying TableMapReduceUtil.initTableMapperJob() stack : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java * /hbase/trunk/CHANGES.txt Users should be able to choose custom TableInputFormats without modifying TableMapReduceUtil.initTableMapperJob(). -- Key: HBASE-4039 URL: https://issues.apache.org/jira/browse/HBASE-4039 Project: HBase Issue Type: New Feature Components: mapreduce Affects Versions: 0.90.3 Environment: HBase-0.90.3. OS, hardware specs not relevant. Reporter: Edward Choi Assignee: Brock Noland Priority: Minor Fix For: 0.90.4 Attachments: HBASE-4039.1.patch Currently, org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob() forces any Hbase job to use the default TableInputFormat.class as the job's input format class. job.setInputFormatClass(TableInputFormat.class); == This line is included in initTableMapperJob(). This restriction causes users to modify initTableMapperJob() in addition to implementing their own TableInputFormat. It would be nicer if users can use custom TableInputFormats without additionally tampering with HBase source code. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
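The committed change lifts the hardcoded `job.setInputFormatClass(TableInputFormat.class)` call out of `initTableMapperJob()`. The shape of that change can be sketched without Hadoop on the classpath; below, a plain map stands in for the real `Job` configuration, and the overload's signature mirrors, but is not guaranteed to match, the committed patch:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative only: models letting the caller pick the input format
 * instead of hardcoding TableInputFormat. A plain map stands in for the
 * Hadoop Job configuration; the exact committed API may differ.
 */
public class InitTableMapperSketch {
    static final Map<String, String> jobConf = new HashMap<>();

    // Before HBASE-4039: input format pinned to one class.
    static void initTableMapperJobOld(String table) {
        jobConf.put("inputformat.class", "TableInputFormat");
        jobConf.put("inputtable", table);
    }

    // After (assumed shape): the caller supplies any input format class,
    // so a custom TableInputFormat needs no change to HBase source.
    static void initTableMapperJob(String table, Class<?> inputFormatClass) {
        jobConf.put("inputformat.class", inputFormatClass.getSimpleName());
        jobConf.put("inputtable", table);
    }
}
```

In the real API the extra parameter would be a `Class<? extends InputFormat>` passed straight through to `job.setInputFormatClass(...)`.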
[jira] [Commented] (HBASE-4027) Enable direct byte buffers LruBlockCache
[ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081519#comment-13081519 ] jirapos...@reviews.apache.org commented on HBASE-4027: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1214/ --- (Updated 2011-08-09 08:36:02.175711) Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, Jonathan Gray, and Li Pi. Changes --- Fixed diff. Rebase on 3857 works! Summary --- Review request - I apparently can't edit tlipcon's earlier posting of my diff, so creating a new one. This addresses bug HBase-4027. https://issues.apache.org/jira/browse/HBase-4027 Diffs (updated) - conf/hbase-env.sh 2d55d27 src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java 2d4002c src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java 097dc50 src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockCacheTestUtils.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java 886c31d src/main/java/org/apache/hadoop/hbase/io/hfile/SingleSizeCacheTestUtils.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabItemEvictionWatcher.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java 86652c0 src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java 94c8bb4 src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSingleSizeCache.java PRE-CREATION 
src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSlab.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSlabCache.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java 4387170 Diff: https://reviews.apache.org/r/1214/diff Testing --- Ran benchmarks against it in HBase standalone mode. Wrote test cases for all classes, multithreaded test cases exist for the cache. Thanks, Li Enable direct byte buffers LruBlockCache Key: HBASE-4027 URL: https://issues.apache.org/jira/browse/HBASE-4027 Project: HBase Issue Type: Improvement Reporter: Jason Rutherglen Assignee: Li Pi Priority: Minor Attachments: 4027-v5.diff, 4027v7.diff, HBase-4027 (1).pdf, HBase-4027.pdf, HBase4027v8.diff, HBase4027v9.diff, hbase-4027-v10.diff, hbase-4027v6.diff, slabcachepatch.diff, slabcachepatchv2.diff, slabcachepatchv3.1.diff, slabcachepatchv3.2.diff, slabcachepatchv3.diff, slabcachepatchv4.5.diff, slabcachepatchv4.diff Java offers the creation of direct byte buffers which are allocated outside of the heap. They need to be manually free'd, which can be accomplished using an documented {{clean}} method. The feature will be optional. After implementing, we can benchmark for differences in speed and garbage collection observances. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
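The issue's premise, that direct byte buffers live outside the Java heap and need manual freeing, can be seen with nothing but the standard `java.nio` API. This is an illustrative sketch, not code from the patch; the eager-release `clean` hook the description mentions is an internal, undocumented JDK detail (`sun.misc.Cleaner`) and is deliberately not invoked here:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // 1 MiB allocated outside the Java heap: the GC tracks only the
        // small ByteBuffer wrapper object, not the off-heap payload.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);
        buf.putLong(0, 42L);                // reads and writes work as usual
        System.out.println(buf.isDirect());
        System.out.println(buf.getLong(0));
        // The off-heap memory is released only when the wrapper object is
        // eventually collected; the "clean" method the issue refers to is
        // an internal JDK mechanism for freeing it eagerly instead.
    }
}
```

This is why a slab cache built on direct buffers can hold gigabytes of blocks without inflating GC pause times: the payload never enters the collector's working set.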
[jira] [Updated] (HBASE-4027) Enable direct byte buffers LruBlockCache
[ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Li Pi updated HBASE-4027: - Attachment: hbase-4027-v10.5.diff Ignore the v10 diff; that one is broken. This is the fixed one. Enable direct byte buffers LruBlockCache Key: HBASE-4027 URL: https://issues.apache.org/jira/browse/HBASE-4027 Project: HBase Issue Type: Improvement Reporter: Jason Rutherglen Assignee: Li Pi Priority: Minor Attachments: 4027-v5.diff, 4027v7.diff, HBase-4027 (1).pdf, HBase-4027.pdf, HBase4027v8.diff, HBase4027v9.diff, hbase-4027-v10.5.diff, hbase-4027-v10.diff, hbase-4027v6.diff, slabcachepatch.diff, slabcachepatchv2.diff, slabcachepatchv3.1.diff, slabcachepatchv3.2.diff, slabcachepatchv3.diff, slabcachepatchv4.5.diff, slabcachepatchv4.diff Java offers the creation of direct byte buffers which are allocated outside of the heap. They need to be manually freed, which can be accomplished using an undocumented {{clean}} method. The feature will be optional. After implementing, we can benchmark for differences in speed and garbage collection behavior. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081529#comment-13081529 ] jirapos...@reviews.apache.org commented on HBASE-4181: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1402/#review1339 --- Version 2 looks good. http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java https://reviews.apache.org/r/1402/#comment3028 Line is too long. Wrap so that it is under 80 chars long. - Ted On 2011-08-09 06:41:43, Jia Liu wrote: bq. bq. --- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/1402/ bq. --- bq. bq. (Updated 2011-08-09 06:41:43) bq. bq. bq. Review request for hbase. bq. bq. bq. Summary bq. --- bq. bq. HConnectionManager can't find cached HRegionInterface makes client very slow bq. Addressing.createHostAndPortStr(hostname, port); //the Addressing created a address like node41:60010 bq. .. bq. this.servers.put(address.toString(), server); bq. bq. //but here address.toString() send an address like node41/10.61.2l.171:60010 bq. bq. bq. This addresses bug HBase-4181. bq. https://issues.apache.org/jira/browse/HBase-4181 bq. bq. bq. Diffs bq. - bq. bq. http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java 1155226 bq. bq. Diff: https://reviews.apache.org/r/1402/diff bq. bq. bq. Testing bq. --- bq. bq. Tests passed locally. bq. bq. bq. Thanks, bq. bq. Jia bq. bq. 
HConnectionManager can't find cached HRegionInterface makes client very slow Key: HBASE-4181 URL: https://issues.apache.org/jira/browse/HBASE-4181 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.4, 0.92.0 Reporter: Liu Jia Priority: Critical Labels: HConnectionManager Attachments: HBASE-4181-trunk-v2.patch, HBASE-4181.patch, HConnectionManager.patch HRegionInterface getHRegionConnection(final String hostname, final int port, final InetSocketAddress isa, final boolean master) throws IOException / String rsName = isa != null ? isa.toString() : Addressing .createHostAndPortStr(hostname, port); here,if isa is null, the Addressing created a address like node41:60010 should use isa.toString():new InetSocketAddress(hostname, port).toString(); instead of Addressing.createHostAndPortStr(hostname, port); server = this.servers.get(rsName); if (server == null) { // create a unique lock for this RS (if necessary) this.connectionLock.putIfAbsent(rsName, rsName); // get the RS lock synchronized (this.connectionLock.get(rsName)) { // do one more lookup in case we were stalled above server = this.servers.get(rsName); if (server == null) { try { if (clusterId.hasId()) { conf.set(HConstants.CLUSTER_ID, clusterId.getId()); } // Only create isa when we need to. InetSocketAddress address = isa != null ? isa : new InetSocketAddress(hostname, port); // definitely a cache miss. establish an RPC for this RS server = (HRegionInterface) HBaseRPC.waitForProxy( serverInterfaceClass, HRegionInterface.VERSION, address, this.conf, this.maxRPCAttempts, this.rpcTimeout, this.rpcTimeout); this.servers.put(address.toString(), server); but here address.toString() send an address like node41/10.61.2l.171:60010 so this method can never get cached address and make client request very slow due to it's synchronized. } catch (RemoteException e) { LOG.warn(RemoteException connecting to RS, e); // Throw what the RemoteException was carrying. 
throw RemoteExceptionHandler.decodeRemoteException(e); } } } /// -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081544#comment-13081544 ] jirapos...@reviews.apache.org commented on HBASE-4181: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1402/ --- (Updated 2011-08-09 09:41:48.573628) Review request for hbase. Changes --- Corrected formatting. Summary --- HConnectionManager can't find cached HRegionInterface makes client very slow Addressing.createHostAndPortStr(hostname, port); //the Addressing created a address like node41:60010 .. this.servers.put(address.toString(), server); //but here address.toString() send an address like node41/10.61.2l.171:60010 This addresses bug HBase-4181. https://issues.apache.org/jira/browse/HBase-4181 Diffs (updated) - http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java 1155226 Diff: https://reviews.apache.org/r/1402/diff Testing --- Tests passed locally. Thanks, Jia HConnectionManager can't find cached HRegionInterface makes client very slow Key: HBASE-4181 URL: https://issues.apache.org/jira/browse/HBASE-4181 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.4, 0.92.0 Reporter: Liu Jia Priority: Critical Labels: HConnectionManager Attachments: HBASE-4181-trunk-v2.patch, HBASE-4181.patch, HConnectionManager.patch HRegionInterface getHRegionConnection(final String hostname, final int port, final InetSocketAddress isa, final boolean master) throws IOException / String rsName = isa != null ? 
isa.toString() : Addressing .createHostAndPortStr(hostname, port); here,if isa is null, the Addressing created a address like node41:60010 should use isa.toString():new InetSocketAddress(hostname, port).toString(); instead of Addressing.createHostAndPortStr(hostname, port); server = this.servers.get(rsName); if (server == null) { // create a unique lock for this RS (if necessary) this.connectionLock.putIfAbsent(rsName, rsName); // get the RS lock synchronized (this.connectionLock.get(rsName)) { // do one more lookup in case we were stalled above server = this.servers.get(rsName); if (server == null) { try { if (clusterId.hasId()) { conf.set(HConstants.CLUSTER_ID, clusterId.getId()); } // Only create isa when we need to. InetSocketAddress address = isa != null ? isa : new InetSocketAddress(hostname, port); // definitely a cache miss. establish an RPC for this RS server = (HRegionInterface) HBaseRPC.waitForProxy( serverInterfaceClass, HRegionInterface.VERSION, address, this.conf, this.maxRPCAttempts, this.rpcTimeout, this.rpcTimeout); this.servers.put(address.toString(), server); but here address.toString() send an address like node41/10.61.2l.171:60010 so this method can never get cached address and make client request very slow due to it's synchronized. } catch (RemoteException e) { LOG.warn(RemoteException connecting to RS, e); // Throw what the RemoteException was carrying. throw RemoteExceptionHandler.decodeRemoteException(e); } } } /// -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
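The defect described above is a cache-key mismatch: lookups use `Addressing.createHostAndPortStr(hostname, port)` (yielding `node41:60010`) while inserts use `address.toString()` (yielding the `node41/10.61.2l.171:60010` form), so `this.servers.get(rsName)` never hits and every call falls into the synchronized slow path. The fix amounts to deriving one canonical key and using it on both sides. A minimal stand-alone sketch of that pattern (the `cacheKey` helper and the plain `Object` placeholder are hypothetical, not HBase API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ServerCacheSketch {
    static final Map<String, Object> servers = new ConcurrentHashMap<>();

    // One canonical "host:port" key, used for BOTH the lookup and the
    // insert, so a second call for the same server finds the cached entry.
    static String cacheKey(String hostname, int port) {
        return hostname + ":" + port;
    }

    // Stand-in for getHRegionConnection(): the Object value represents
    // the RPC proxy that is expensive to create.
    static Object getOrCreate(String hostname, int port) {
        // computeIfAbsent gives the same check-lock-recheck effect as the
        // original putIfAbsent/synchronized dance, in one atomic call.
        return servers.computeIfAbsent(cacheKey(hostname, port),
                k -> new Object());
    }
}
```

With a single key derivation, the second request for `node41:60010` reuses the cached proxy instead of re-establishing the RPC under the lock.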
[jira] [Commented] (HBASE-4182) NullPointerException when loadbalancer tries to close the region for reassigning to new RS.
[ https://issues.apache.org/jira/browse/HBASE-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081556#comment-13081556 ] ramkrishna.s.vasudevan commented on HBASE-4182: --- {noformat} 2011-08-09 16:04:57,429 INFO org.apache.hadoop.hbase.master.LoadBalancer: Calculated a load balance in 1ms. Moving 3 regions off of 1 overloaded servers onto 1 less loaded servers 2011-08-09 16:04:57,429 INFO org.apache.hadoop.hbase.master.HMaster: balance hri=t5,,1312885663262.e59e66fa2535da2a2feccdffbf22063c., src=linux76,60020,1312885513666, dest=linux146,60020,1312885914318 2011-08-09 16:05:06,832 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of region t5,,1312885663262.e59e66fa2535da2a2feccdffbf22063c. (offlining) 2011-08-09 16:05:49,343 ERROR org.apache.hadoop.hbase.master.HMaster$1: Caught exception java.lang.NullPointerException at org.apache.hadoop.hbase.master.AssignmentManager.unassign(AssignmentManager.java:1563) at org.apache.hadoop.hbase.master.AssignmentManager.unassign(AssignmentManager.java:1527) at org.apache.hadoop.hbase.master.AssignmentManager.balance(AssignmentManager.java:2535) at org.apache.hadoop.hbase.master.HMaster.balance(HMaster.java:832) at org.apache.hadoop.hbase.master.HMaster$1.chore(HMaster.java:704) at org.apache.hadoop.hbase.Chore.run(Chore.java:66) {noformat} NullPointerException when loadbalancer tries to close the region for reassigning to new RS. --- Key: HBASE-4182 URL: https://issues.apache.org/jira/browse/HBASE-4182 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan 1. Start 2 RS. Create some regions so that is is balanced. 2. Stop RS2. Now all the Regions from RS2 are assigned to RS1. 3. Again start RS2. 4. Load Balancing is calculated and few regions from RS1 are assigned to RS2. As part of this step Master tries to unassign the regions from RS1. 
{noformat} RegionTransitionData data = ZKAssign.getDataNoWatch(zkw, ZKAssign .getNodeName(zkw, region.getEncodedName()), null); if (data.equals(EventType.RS_ZK_REGION_CLOSING)) { ZKAssign.createNodeClosing(zkw, region, master.getServerName()); } {noformat} Now there is no data present in the unassigned node. We are directly comparing the data. Here data is null. Hence nullpointer exception is thrown. Hence load balancing fails. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
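The reported NPE comes from calling `.equals` on data read from an empty unassigned znode; note also that the quoted check compares a `RegionTransitionData` against an `EventType`, which could never be equal even when data is present. A null-safe form of the check, with simplified stand-in types rather than the real HBase classes:

```java
public class NullSafeCheckSketch {
    enum EventType { RS_ZK_REGION_CLOSING, RS_ZK_REGION_CLOSED }

    // Simplified stand-in for RegionTransitionData.
    static class TransitionData {
        final EventType type;
        TransitionData(EventType type) { this.type = type; }
    }

    // Null-safe version of the quoted check: an empty znode yields null
    // data, which must short-circuit instead of throwing NPE, and the
    // comparison must be against the event type, not the data object.
    static boolean isClosing(TransitionData data) {
        return data != null && data.type == EventType.RS_ZK_REGION_CLOSING;
    }
}
```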
[jira] [Created] (HBASE-4182) NullPointerException when loadbalancer tries to close the region for reassigning to new RS.
NullPointerException when loadbalancer tries to close the region for reassigning to new RS. --- Key: HBASE-4182 URL: https://issues.apache.org/jira/browse/HBASE-4182 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan 1. Start 2 RS. Create some regions so that is is balanced. 2. Stop RS2. Now all the Regions from RS2 are assigned to RS1. 3. Again start RS2. 4. Load Balancing is calculated and few regions from RS1 are assigned to RS2. As part of this step Master tries to unassign the regions from RS1. {noformat} RegionTransitionData data = ZKAssign.getDataNoWatch(zkw, ZKAssign .getNodeName(zkw, region.getEncodedName()), null); if (data.equals(EventType.RS_ZK_REGION_CLOSING)) { ZKAssign.createNodeClosing(zkw, region, master.getServerName()); } {noformat} Now there is no data present in the unassigned node. We are directly comparing the data. Here data is null. Hence nullpointer exception is thrown. Hence load balancing fails. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-4182) NullPointerException when loadbalancer tries to close the region for reassigning to new RS.
[ https://issues.apache.org/jira/browse/HBASE-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-4182: -- Fix Version/s: 0.92.0 NullPointerException when loadbalancer tries to close the region for reassigning to new RS. --- Key: HBASE-4182 URL: https://issues.apache.org/jira/browse/HBASE-4182 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.92.0 1. Start 2 RS. Create some regions so that is is balanced. 2. Stop RS2. Now all the Regions from RS2 are assigned to RS1. 3. Again start RS2. 4. Load Balancing is calculated and few regions from RS1 are assigned to RS2. As part of this step Master tries to unassign the regions from RS1. {noformat} RegionTransitionData data = ZKAssign.getDataNoWatch(zkw, ZKAssign .getNodeName(zkw, region.getEncodedName()), null); if (data.equals(EventType.RS_ZK_REGION_CLOSING)) { ZKAssign.createNodeClosing(zkw, region, master.getServerName()); } {noformat} Now there is no data present in the unassigned node. We are directly comparing the data. Here data is null. Hence nullpointer exception is thrown. Hence load balancing fails. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HBASE-4182) NullPointerException when loadbalancer tries to close the region for reassigning to new RS.
[ https://issues.apache.org/jira/browse/HBASE-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan resolved HBASE-4182. --- Resolution: Invalid NullPointerException when loadbalancer tries to close the region for reassigning to new RS. --- Key: HBASE-4182 URL: https://issues.apache.org/jira/browse/HBASE-4182 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.92.0 1. Start 2 RS. Create some regions so that is is balanced. 2. Stop RS2. Now all the Regions from RS2 are assigned to RS1. 3. Again start RS2. 4. Load Balancing is calculated and few regions from RS1 are assigned to RS2. As part of this step Master tries to unassign the regions from RS1. {noformat} RegionTransitionData data = ZKAssign.getDataNoWatch(zkw, ZKAssign .getNodeName(zkw, region.getEncodedName()), null); if (data.equals(EventType.RS_ZK_REGION_CLOSING)) { ZKAssign.createNodeClosing(zkw, region, master.getServerName()); } {noformat} Now there is no data present in the unassigned node. We are directly comparing the data. Here data is null. Hence nullpointer exception is thrown. Hence load balancing fails. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4182) NullPointerException when loadbalancer tries to close the region for reassigning to new RS.
[ https://issues.apache.org/jira/browse/HBASE-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081572#comment-13081572 ] ramkrishna.s.vasudevan commented on HBASE-4182: --- In my code base i had that piece of code. but after investigation found that the piece of code no longer existed. so invalidating. Sorry for the inconvenience caused. NullPointerException when loadbalancer tries to close the region for reassigning to new RS. --- Key: HBASE-4182 URL: https://issues.apache.org/jira/browse/HBASE-4182 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.92.0 1. Start 2 RS. Create some regions so that is is balanced. 2. Stop RS2. Now all the Regions from RS2 are assigned to RS1. 3. Again start RS2. 4. Load Balancing is calculated and few regions from RS1 are assigned to RS2. As part of this step Master tries to unassign the regions from RS1. {noformat} RegionTransitionData data = ZKAssign.getDataNoWatch(zkw, ZKAssign .getNodeName(zkw, region.getEncodedName()), null); if (data.equals(EventType.RS_ZK_REGION_CLOSING)) { ZKAssign.createNodeClosing(zkw, region, master.getServerName()); } {noformat} Now there is no data present in the unassigned node. We are directly comparing the data. Here data is null. Hence nullpointer exception is thrown. Hence load balancing fails. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-3807) Fix units in RS UI metrics
[ https://issues.apache.org/jira/browse/HBASE-3807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081586#comment-13081586 ] subramanian raghunathan commented on HBASE-3807: Proposing to change from the following formats request=0.0, regions=8, stores=8, storefiles=8, storefileIndexSize=0, rootIndexSizeKB=3, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, memstoreSize=0, readRequestsCount=56, writeRequestsCount=14, compactionQueueSize=0, flushQueueSize=0, usedHeap=41, maxHeap=995, blockCacheSize=1767336, blockCacheFree=207082792, blockCacheCount=9, blockCacheHitCount=50, blockCacheMissCount=9, blockCacheEvictedCount=0, blockCacheHitRatio=84, blockCacheHitCachingRatio=84 stores=1, storefiles=2, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=8, writeRequestsCount=2, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0 to {color:red}requestsPerSecond=0.0, numberOfOnlineRegions=8, numberOfStores=8, numberOfStorefiles=10, storefileIndexSizeMB=0{color}, rootIndexSizeKB=4, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, {color:red}memstoreSizeMB=0{color}, readRequestsCount=48, writeRequestsCount=8, compactionQueueSize=0, flushQueueSize=0,{color:red}usedHeapMB=34, maxHeapMB=995, blockCacheSizeMB=1.6885986, blockCacheFreeMB=197.4864{color}, blockCacheCount=11, blockCacheHitCount=53, blockCacheMissCount=11, blockCacheEvictedCount=0, {color:red}blockCacheHitRatio=82%, blockCacheHitCachingRatio=82%{color} {color:red}numberOfStores=1, numberOfStorefiles=3{color}, storefileUncompressedSizeMB=0, storefileSizeMB=0,memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=8, writeRequestsCount=2, rootIndexSizeKB=1, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0 Also proposing the following metric in bytes blockCacheSize ,blockCacheFree planning to MB with full precision like {color:green}blockCacheSizeMB=1.6885986, blockCacheFreeMB=197.4864{color} If 
this looks good, I can upload the patch for trunk. If needed, I can prepare the patch for 0.90.x as well. Fix units in RS UI metrics -- Key: HBASE-3807 URL: https://issues.apache.org/jira/browse/HBASE-3807 Project: HBase Issue Type: Bug Reporter: stack Fix For: 0.94.0 Currently the metrics are a mix of MB and bytes. It's confusing. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
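The proposed `blockCacheSizeMB`/`blockCacheFreeMB` reporting replaces raw byte counts with MB values kept at full float precision. The conversion itself is a one-liner; a sketch (the helper name is assumed, not from the patch):

```java
public class MetricsUnits {
    // Bytes -> MB (MiB) with float precision, matching the proposed
    // blockCacheSizeMB / blockCacheFreeMB reporting style.
    static float bytesToMB(long bytes) {
        return (float) bytes / (1024 * 1024);
    }

    public static void main(String[] args) {
        // e.g. the raw blockCacheSize=1767336 value from the old format:
        System.out.println("blockCacheSizeMB=" + bytesToMB(1767336L));
    }
}
```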
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081613#comment-13081613 ] jirapos...@reviews.apache.org commented on HBASE-4120: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1421/ --- Review request for hbase. Summary --- Patch used for table priority alone,In this patch, not only tables can have different priorities but also the different actions like get,scan,put and delete can have priorities. This addresses bug HBase-4120. https://issues.apache.org/jira/browse/HBase-4120 Diffs - http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java 1155226 http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java 1155226 http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityHBaseServer.java PRE-CREATION http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java PRE-CREATION Diff: https://reviews.apache.org/r/1421/diff Testing --- Tested with test cases in TestCase_For_TablePriority_trunk_v1.patch please apply the patch of HBASE-4181 first,in some circumstances this bug will affect the performance of client. Thanks, Jia isolation and allocation Key: HBASE-4120 URL: https://issues.apache.org/jira/browse/HBASE-4120 Project: HBase Issue Type: New Feature Components: master, regionserver Affects Versions: 0.90.2, 0.90.3, 0.90.4, 0.92.0 Reporter: Liu Jia Fix For: 0.90.3 Attachments: Design_document_for_HBase_isolation_and_allocation.pdf, Design_document_for_HBase_isolation_and_allocation_Revised.pdf, HBase_isolation_and_allocation_user_guide.pdf, Performance_of_Table_priority.pdf, System Structure.jpg, TablePriority.patch The HBase isolation and allocation tool is designed to help users manage cluster resource among different application and tables. 
When we have a large scale of HBase cluster with many applications running on it, there will be lots of problems. In Taobao there is a cluster for many departments to test their applications performance, these applications are based on HBase. With one cluster which has 12 servers, there will be only one application running exclusively on this server, and many other applications must wait until the previous test finished. After we add allocation manage function to the cluster, applications can share the cluster and run concurrently. Also if the Test Engineer wants to make sure there is no interference, he/she can move out other tables from this group. In groups we use table priority to allocate resource, when system is busy; we can make sure high-priority tables are not affected lower-priority tables Different groups can have different region server configurations, some groups optimized for reading can have large block cache size, and others optimized for writing can have large memstore size. Tables and region servers can be moved easily between groups; after changing the configuration, a group can be restarted alone instead of restarting the whole cluster. git entry : https://github.com/ICT-Ope/HBase_allocation . We hope our work is helpful. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
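The patch's `PriorityJobQueue` itself is not reproduced here, but its core idea, handlers draining queued calls in table-priority order instead of FIFO, can be sketched with a plain `java.util.PriorityQueue`. The `Call` type and the lower-number-is-more-urgent scale are hypothetical:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityJobQueueSketch {
    // Hypothetical model of a queued RPC call carrying the priority of
    // the table it targets; lower number = served first.
    static class Call {
        final String table;
        final int priority;
        Call(String table, int priority) {
            this.table = table;
            this.priority = priority;
        }
    }

    static final PriorityQueue<Call> queue =
            new PriorityQueue<Call>(Comparator.comparingInt((Call c) -> c.priority));

    public static void main(String[] args) {
        queue.add(new Call("logs", 5));
        queue.add(new Call("orders", 1));   // high-priority table jumps ahead
        queue.add(new Call("metrics", 3));
        // Handlers drain in priority order: orders, metrics, logs.
        while (!queue.isEmpty()) {
            System.out.println(queue.poll().table);
        }
    }
}
```

Under load, this is what keeps a busy low-priority table from starving requests against a high-priority one; when the system is idle the ordering costs almost nothing.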
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081618#comment-13081618 ] jirapos...@reviews.apache.org commented on HBASE-4120: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1422/ --- Review request for hbase. Summary --- Test cases used for table priority. This addresses bug HBase-4120. https://issues.apache.org/jira/browse/HBase-4120 Diffs - http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/allocation/test/TestForActionPriority.java PRE-CREATION http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/allocation/test/TestForPriorityJobQueue.java PRE-CREATION http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/allocation/test/TestForTablePriority.java PRE-CREATION Diff: https://reviews.apache.org/r/1422/diff Testing --- Thanks, Jia isolation and allocation Key: HBASE-4120 URL: https://issues.apache.org/jira/browse/HBASE-4120 Project: HBase Issue Type: New Feature Components: master, regionserver Affects Versions: 0.90.2, 0.90.3, 0.90.4, 0.92.0 Reporter: Liu Jia Fix For: 0.90.3 Attachments: Design_document_for_HBase_isolation_and_allocation.pdf, Design_document_for_HBase_isolation_and_allocation_Revised.pdf, HBase_isolation_and_allocation_user_guide.pdf, Performance_of_Table_priority.pdf, System Structure.jpg, TablePriority.patch The HBase isolation and allocation tool is designed to help users manage cluster resource among different application and tables. When we have a large scale of HBase cluster with many applications running on it, there will be lots of problems. In Taobao there is a cluster for many departments to test their applications performance, these applications are based on HBase. 
With one cluster which has 12 servers, there will be only one application running exclusively on this server, and many other applications must wait until the previous test finished. After we add allocation manage function to the cluster, applications can share the cluster and run concurrently. Also if the Test Engineer wants to make sure there is no interference, he/she can move out other tables from this group. In groups we use table priority to allocate resource, when system is busy; we can make sure high-priority tables are not affected lower-priority tables Different groups can have different region server configurations, some groups optimized for reading can have large block cache size, and others optimized for writing can have large memstore size. Tables and region servers can be moved easily between groups; after changing the configuration, a group can be restarted alone instead of restarting the whole cluster. git entry : https://github.com/ICT-Ope/HBase_allocation . We hope our work is helpful. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081623#comment-13081623 ] jirapos...@reviews.apache.org commented on HBASE-4120: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1421/ --- (Updated 2011-08-09 13:38:32.587600) Review request for hbase. Changes --- dos2unix formatted Summary --- Patch used for table priority alone,In this patch, not only tables can have different priorities but also the different actions like get,scan,put and delete can have priorities. This addresses bug HBase-4120. https://issues.apache.org/jira/browse/HBase-4120 Diffs (updated) - http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java 1155226 http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java 1155226 http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityHBaseServer.java PRE-CREATION http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java PRE-CREATION Diff: https://reviews.apache.org/r/1421/diff Testing --- Tested with test cases in TestCase_For_TablePriority_trunk_v1.patch please apply the patch of HBASE-4181 first,in some circumstances this bug will affect the performance of client. 
Thanks, Jia

isolation and allocation

Key: HBASE-4120
URL: https://issues.apache.org/jira/browse/HBASE-4120
Project: HBase
Issue Type: New Feature
Components: master, regionserver
Affects Versions: 0.90.2, 0.90.3, 0.90.4, 0.92.0
Reporter: Liu Jia
Fix For: 0.90.3
Attachments: Design_document_for_HBase_isolation_and_allocation.pdf, Design_document_for_HBase_isolation_and_allocation_Revised.pdf, HBase_isolation_and_allocation_user_guide.pdf, Performance_of_Table_priority.pdf, System Structure.jpg, TablePriority.patch
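As a rough illustration of the idea in the table-priority review request above (a table priority combined with a per-action weight for get, scan, put, and delete deciding dispatch order), here is a minimal, hypothetical sketch. The names (Call, tablePriority, actionWeight) are invented for illustration and are not the classes in TablePriority.patch.

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityDispatchSketch {
    static final class Call implements Comparable<Call> {
        final String table;
        final String action;   // "get", "scan", "put", "delete"
        final int priority;    // lower value = served first

        Call(String table, String action, int tablePriority, int actionWeight) {
            this.table = table;
            this.action = action;
            // combined priority: table priority plus a per-action weight
            this.priority = tablePriority + actionWeight;
        }

        @Override
        public int compareTo(Call other) {
            return Integer.compare(this.priority, other.priority);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Call> queue = new PriorityBlockingQueue<>();
        queue.put(new Call("logs", "scan", 10, 2));  // low-priority table
        queue.put(new Call("orders", "get", 1, 0));  // high-priority table
        queue.put(new Call("orders", "put", 1, 1));
        // Handlers take() calls in priority order:
        // orders/get, then orders/put, then logs/scan.
        while (!queue.isEmpty()) {
            Call c = queue.take();
            System.out.println(c.table + "/" + c.action);
        }
    }
}
```

When the system is busy, the queue drains high-priority tables' calls first, which is the isolation property the description claims for busy clusters.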
[jira] [Commented] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081643#comment-13081643 ] jirapos...@reviews.apache.org commented on HBASE-4181:

---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1402/#review1343
---

Ship it!

- Ted

On 2011-08-09 09:41:48, Jia Liu wrote:
bq. ---
bq. This is an automatically generated e-mail. To reply, visit:
bq. https://reviews.apache.org/r/1402/
bq. ---
bq.
bq. (Updated 2011-08-09 09:41:48)
bq.
bq. Review request for hbase.
bq.
bq. Summary
bq. ---
bq.
bq. HConnectionManager can't find cached HRegionInterface makes client very slow
bq. Addressing.createHostAndPortStr(hostname, port); // Addressing creates an address like node41:60010
bq. ..
bq. this.servers.put(address.toString(), server);
bq. // but here address.toString() stores an address like node41/10.61.21.171:60010
bq.
bq. This addresses bug HBASE-4181.
bq. https://issues.apache.org/jira/browse/HBASE-4181
bq.
bq. Diffs
bq. ---
bq. http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java 1155226
bq.
bq. Diff: https://reviews.apache.org/r/1402/diff
bq.
bq. Testing
bq. ---
bq. Tests passed locally.
bq.
bq. Thanks,
bq. Jia

HConnectionManager can't find cached HRegionInterface makes client very slow

Key: HBASE-4181
URL: https://issues.apache.org/jira/browse/HBASE-4181
Project: HBase
Issue Type: Bug
Components: client
Affects Versions: 0.90.4, 0.92.0
Reporter: Liu Jia
Priority: Critical
Labels: HConnectionManager
Attachments: HBASE-4181-trunk-v2.patch, HBASE-4181.patch, HConnectionManager.patch

HRegionInterface getHRegionConnection(final String hostname, final int port,
    final InetSocketAddress isa, final boolean master) throws IOException

  String rsName = isa != null ? isa.toString() :
      Addressing.createHostAndPortStr(hostname, port);
  // Here, if isa is null, Addressing creates an address like node41:60010.
  // Should use isa.toString() : new InetSocketAddress(hostname, port).toString()
  // instead of Addressing.createHostAndPortStr(hostname, port).
  server = this.servers.get(rsName);
  if (server == null) {
    // create a unique lock for this RS (if necessary)
    this.connectionLock.putIfAbsent(rsName, rsName);
    // get the RS lock
    synchronized (this.connectionLock.get(rsName)) {
      // do one more lookup in case we were stalled above
      server = this.servers.get(rsName);
      if (server == null) {
        try {
          if (clusterId.hasId()) {
            conf.set(HConstants.CLUSTER_ID, clusterId.getId());
          }
          // Only create isa when we need to.
          InetSocketAddress address = isa != null ? isa :
              new InetSocketAddress(hostname, port);
          // definitely a cache miss. establish an RPC for this RS
          server = (HRegionInterface) HBaseRPC.waitForProxy(
              serverInterfaceClass, HRegionInterface.VERSION, address,
              this.conf, this.maxRPCAttempts, this.rpcTimeout, this.rpcTimeout);
          this.servers.put(address.toString(), server);
          // But here address.toString() stores an address like
          // node41/10.61.21.171:60010, so this method can never find the
          // cached connection, making client requests very slow because the
          // connection-setup path is synchronized.
        } catch (RemoteException e) {
          LOG.warn("RemoteException connecting to RS", e);
          // Throw what the RemoteException was carrying.
          throw RemoteExceptionHandler.decodeRemoteException(e);
        }
      }
    }
  }

-- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
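The miss pattern described above can be reproduced with plain collections. This is an assumed simplification (a bare Map standing in for HConnectionManager's servers cache, an Object for the proxy): it only demonstrates why a value stored under a resolved InetSocketAddress.toString() key is never found by a host:port lookup, and how keying both sides with the same host-and-port string (the approach in the release note) restores cache hits.

```java
import java.util.concurrent.ConcurrentHashMap;

public class StaleCacheKeyDemo {
    static final ConcurrentHashMap<String, Object> servers = new ConcurrentHashMap<>();

    // Canonical key, as in the fix: always "host:port" on both put and get.
    static String hostAndPort(String hostname, int port) {
        return hostname + ":" + port;
    }

    public static void main(String[] args) {
        // Buggy pattern: stored under the resolved address's toString() ...
        servers.put("node41/10.61.21.171:60010", new Object());
        // ... but looked up under "host:port", so the cache never hits and
        // every call re-enters the synchronized connection-setup path.
        System.out.println(servers.containsKey(hostAndPort("node41", 60010))); // false

        // Fixed pattern: the same key function on both sides.
        servers.clear();
        servers.put(hostAndPort("node41", 60010), new Object());
        System.out.println(servers.containsKey(hostAndPort("node41", 60010))); // true
    }
}
```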
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081666#comment-13081666 ] jirapos...@reviews.apache.org commented on HBASE-4120:

---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1421/#review1345
---

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java
https://reviews.apache.org/r/1421/#comment3036
Wrap long line please.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityHBaseServer.java
https://reviews.apache.org/r/1421/#comment3034
This should be in the same case as put.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityHBaseServer.java
https://reviews.apache.org/r/1421/#comment3035
The variable should be named actionPriorities.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3037
Year should be 2011.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3038
White space.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3033
Why not compare size against this.capacity?

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3032
InterruptedException shouldn't be ignored. You can wrap it in InterruptedIOException and rethrow.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3039
Explanation for parameter should be on the same line.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3041
PriorityAddTimes should start with lowercase p.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3042
Please name this method increasePriority.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3043
Please name this method increasePriority.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3040
This is not needed. add() can be declared to throw InterruptedException.

http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java
https://reviews.apache.org/r/1421/#comment3044
Return value should be boolean.

- Ted

On 2011-08-09 13:38:32, Jia Liu wrote:
bq. ---
bq. This is an automatically generated e-mail. To reply, visit:
bq. https://reviews.apache.org/r/1421/
bq. ---
bq.
bq. (Updated 2011-08-09 13:38:32)
bq.
bq. Review request for hbase.
bq.
bq. Summary
bq. ---
bq.
bq. Patch for table priority alone. In this patch, not only can tables have different priorities, but different actions like get, scan, put, and delete can have priorities as well.
bq.
bq. This addresses bug HBASE-4120.
bq. https://issues.apache.org/jira/browse/HBASE-4120
bq.
bq. Diffs
bq. ---
bq. http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java 1155226
bq. http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java 1155226
bq. http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityHBaseServer.java PRE-CREATION
bq.
bq. http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java PRE-CREATION
bq.
bq. Diff: https://reviews.apache.org/r/1421/diff
bq.
bq. Testing
bq. ---
bq.
bq. Tested with test cases in TestCase_For_TablePriority_trunk_v1.patch.
bq. Please apply the patch of HBASE-4181 first; in some circumstances that bug will affect client performance.
bq.
bq. Thanks,
bq. Jia
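Comment 3032 above recommends wrapping InterruptedException in InterruptedIOException and rethrowing rather than swallowing it. A minimal sketch of that pattern follows; the queue class and its names are invented here, not the patch's PriorityJobQueue.

```java
import java.io.InterruptedIOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class InterruptToIOE {
    private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<>(16);

    // Convert an InterruptedException into an InterruptedIOException (an
    // IOException subclass), keeping the original exception as the cause so
    // IOException-based callers still see the full failure.
    static InterruptedIOException wrap(InterruptedException ie) {
        InterruptedIOException iioe =
            new InterruptedIOException("interrupted while queueing job");
        iioe.initCause(ie);
        return iioe;
    }

    public void add(Runnable job) throws InterruptedIOException {
        try {
            jobs.put(job);                      // may block when at capacity
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            throw wrap(ie);
        }
    }
}
```

The alternative the reviewer also mentions, declaring add() to throw InterruptedException directly, avoids the wrapper entirely when callers are not constrained to IOException.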
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081667#comment-13081667 ] jirapos...@reviews.apache.org commented on HBASE-4120:

---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1421/
---

(Updated 2011-08-09 14:41:11.616392)

Review request for hbase.

Changes
---
Add test cases.

Summary (updated)
---
Patch for table priority alone. In this patch, not only can tables have different priorities, but different actions like get, scan, put, and delete can have priorities as well.

This addresses bug HBASE-4120.
https://issues.apache.org/jira/browse/HBASE-4120

Diffs (updated)
---
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java 1155226
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java 1155226
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityHBaseServer.java PRE-CREATION
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java PRE-CREATION
http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/allocation/test/TestForActionPriority.java PRE-CREATION
http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/allocation/test/TestForPriorityJobQueue.java PRE-CREATION
http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/allocation/test/TestForTablePriority.java PRE-CREATION

Diff: https://reviews.apache.org/r/1421/diff

Testing
---
Tested with test cases in TestCase_For_TablePriority_trunk_v1.patch.
Please apply the patch of HBASE-4181 first; in some circumstances that bug will affect client performance.
Thanks, Jia

isolation and allocation

Key: HBASE-4120
URL: https://issues.apache.org/jira/browse/HBASE-4120
Project: HBase
Issue Type: New Feature
Components: master, regionserver
Affects Versions: 0.90.2, 0.90.3, 0.90.4, 0.92.0
Reporter: Liu Jia
Fix For: 0.90.3
Attachments: Design_document_for_HBase_isolation_and_allocation.pdf, Design_document_for_HBase_isolation_and_allocation_Revised.pdf, HBase_isolation_and_allocation_user_guide.pdf, Performance_of_Table_priority.pdf, System Structure.jpg, TablePriority.patch
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081670#comment-13081670 ] jirapos...@reviews.apache.org commented on HBASE-4120:

---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1421/
---

(Updated 2011-08-09 14:43:34.859289)

Review request for hbase.

Summary (updated)
---
Patch for table priority alone. In this patch, not only can tables have different priorities, but different actions like get, scan, put, and delete can have priorities as well.

This addresses bug HBASE-4120.
https://issues.apache.org/jira/browse/HBASE-4120

Diffs
---
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java 1155226
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java 1155226
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityHBaseServer.java PRE-CREATION
http://svn.apache.org/repos/asf/hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/PriorityJobQueue.java PRE-CREATION
http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/allocation/test/TestForActionPriority.java PRE-CREATION
http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/allocation/test/TestForPriorityJobQueue.java PRE-CREATION
http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/allocation/test/TestForTablePriority.java PRE-CREATION

Diff: https://reviews.apache.org/r/1421/diff

Testing
---
Tested with test cases in TestCase_For_TablePriority_trunk_v1.patch.
Please apply the patch of HBASE-4181 first; in some circumstances that bug will affect client performance.
Thanks, Jia

isolation and allocation

Key: HBASE-4120
URL: https://issues.apache.org/jira/browse/HBASE-4120
Project: HBase
Issue Type: New Feature
Components: master, regionserver
Affects Versions: 0.90.2, 0.90.3, 0.90.4, 0.92.0
Reporter: Liu Jia
Fix For: 0.90.3
Attachments: Design_document_for_HBase_isolation_and_allocation.pdf, Design_document_for_HBase_isolation_and_allocation_Revised.pdf, HBase_isolation_and_allocation_user_guide.pdf, Performance_of_Table_priority.pdf, System Structure.jpg, TablePriority.patch
[jira] [Updated] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liu Jia updated HBASE-4181:
---
Attachment: HBASE-4181-trunk-v3.patch
[jira] [Updated] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liu Jia updated HBASE-4181:
---
Fix Version/s: 0.90.4
Release Note: remove DNS query part from getHRegionConnection()
Hadoop Flags: [Reviewed]
Status: Patch Available (was: Open)
[jira] [Updated] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liu Jia updated HBASE-4181:
---
Release Note: Use the host and port string as the key of the Map<String, HRegionInterface> servers to avoid cache misses caused by DNS resolution.
[jira] [Updated] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-4181:
---
Release Note: Use the host and port string as the key of the Map<String, HRegionInterface> servers to avoid cache misses caused by the different format of the String returned by InetSocketAddress. (was: Use the host and port string as the key of the Map<String, HRegionInterface> servers to avoid cache misses caused by DNS resolution.)
[jira] [Commented] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081732#comment-13081732 ] jirapos...@reviews.apache.org commented on HBASE-4181:

---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1402/#review1348
---

Ship it! Nice fix. Thanks for doing it.

- Michael
[jira] [Updated] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-4181:
-------------------------
    Resolution: Fixed
    Fix Version/s: (was: 0.90.4)
                   0.92.0
    Assignee: Liu Jia
    Status: Resolved (was: Patch Available)

Committed to TRUNK. Thank you for the patch Liu Jia and Ted for review.
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081742#comment-13081742 ]

Todd Lipcon commented on HBASE-4120:
------------------------------------

I'm still not convinced that this can't be done using the existing QoSFunction support within HBaseRPC. Look at our existing implementation - it already prioritizes both by type of operation and by target table. It needs to be extended to use the table descriptors, and the queues in RpcServer need to be modified a bit more to be a priority queue, but the basics are already there. Subclassing is not the right approach.

isolation and allocation
------------------------
Key: HBASE-4120
URL: https://issues.apache.org/jira/browse/HBASE-4120
Project: HBase
Issue Type: New Feature
Components: master, regionserver
Affects Versions: 0.90.2, 0.90.3, 0.90.4, 0.92.0
Reporter: Liu Jia
Fix For: 0.90.3
Attachments: Design_document_for_HBase_isolation_and_allocation.pdf, Design_document_for_HBase_isolation_and_allocation_Revised.pdf, HBase_isolation_and_allocation_user_guide.pdf, Performance_of_Table_priority.pdf, System Structure.jpg, TablePriority.patch

The HBase isolation and allocation tool is designed to help users manage cluster resources among different applications and tables. When we have a large-scale HBase cluster with many applications running on it, there will be lots of problems. In Taobao there is a cluster used by many departments to test the performance of their HBase-based applications. With one cluster of 12 servers, only one application could run exclusively at a time, and the other applications had to wait until the previous test finished. After we added the allocation management function to the cluster, applications can share the cluster and run concurrently. Also, if a test engineer wants to make sure there is no interference, he/she can move the other tables out of the group. Within a group we use table priority to allocate resources when the system is busy; we can make sure high-priority tables are not affected by lower-priority tables. Different groups can have different region server configurations: groups optimized for reading can have a large block cache, while groups optimized for writing can have a large memstore. Tables and region servers can be moved easily between groups; after changing the configuration, a group can be restarted alone instead of restarting the whole cluster. Git entry: https://github.com/ICT-Ope/HBase_allocation . We hope our work is helpful.
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081758#comment-13081758 ]

stack commented on HBASE-4120:
------------------------------

I'd also encourage study of the existing QoSFunction. There would need to be a good reason for bypassing the existing prioritization chassis.
[jira] [Commented] (HBASE-3807) Fix units in RS UI metrics
[ https://issues.apache.org/jira/browse/HBASE-3807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081763#comment-13081763 ]

stack commented on HBASE-3807:
------------------------------

Your proposal looks excellent Subramanian. I would suggest that only two digits after the decimal point are needed (there is a limitDecimalTo2 in Hadoop's StringUtils class if that is of any help).

Fix units in RS UI metrics
--------------------------
Key: HBASE-3807
URL: https://issues.apache.org/jira/browse/HBASE-3807
Project: HBase
Issue Type: Bug
Reporter: stack
Fix For: 0.94.0

Currently the metrics are a mix of MB and bytes. It's confusing.
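The two-digit rounding stack suggests can be done with plain String.format; the tiny sketch below (not the actual RS UI code) shows a byte count rendered consistently as megabytes with two decimals. The class and method names are illustrative:

```java
import java.util.Locale;

public class MetricUnits {
  // Render a byte count as megabytes with exactly two digits after the
  // decimal point, similar in spirit to Hadoop StringUtils.limitDecimalTo2.
  // Locale.ROOT keeps the decimal separator a '.' regardless of platform.
  static String toMb(long bytes) {
    return String.format(Locale.ROOT, "%.2f MB", bytes / (1024.0 * 1024.0));
  }

  public static void main(String[] args) {
    System.out.println(toMb(3_145_728L));  // 3.00 MB
    System.out.println(toMb(1_572_864L));  // 1.50 MB
  }
}
```

Using one unit and one precision everywhere is what removes the MB-vs-bytes confusion the ticket describes.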
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081778#comment-13081778 ]

Ted Yu commented on HBASE-4120:
-------------------------------

Todd was referring to this queue in HBaseServer:
{code}
protected BlockingQueue<Call> priorityCallQueue;
{code}
See the processData() method:
{code}
Writable param = ReflectionUtils.newInstance(paramClass, conf); // read param
param.readFields(dis);
Call call = new Call(id, param, this, responder);
if (priorityCallQueue != null && getQosLevel(param) > highPriorityLevel) {
  priorityCallQueue.put(call);
{code}
where paramClass is currently ipc.Invocation.
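The routing Ted quotes can be sketched independently of HBase: calls whose QoS level clears a threshold go to a separate queue that dedicated handlers drain. The qosLevel function, the threshold value, and the String stand-in for Call below are all illustrative assumptions, not the real HBaseServer code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of HBaseServer-style call routing: one ordinary FIFO queue plus
// a dedicated queue for high-priority calls (e.g. meta-region operations),
// so user-table load cannot starve catalog access.
public class CallRouting {
  static final int HIGH_PRIORITY_LEVEL = 10;

  static final BlockingQueue<String> callQueue = new LinkedBlockingQueue<>();
  static final BlockingQueue<String> priorityCallQueue = new LinkedBlockingQueue<>();

  // Stand-in for getQosLevel(param): rank meta operations above user ones.
  static int qosLevel(String methodTarget) {
    return methodTarget.startsWith(".META.") ? 100 : 0;
  }

  static void dispatch(String call) throws InterruptedException {
    if (qosLevel(call) > HIGH_PRIORITY_LEVEL) {
      priorityCallQueue.put(call);  // drained by dedicated priority handlers
    } else {
      callQueue.put(call);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    dispatch(".META.:get");
    dispatch("usertable:put");
    System.out.println("priority=" + priorityCallQueue.size()
        + " normal=" + callQueue.size());
  }
}
```

Extending this toward HBASE-4120 would mean deriving the level from per-table configuration rather than hard-coded rules, which is the direction the thread discusses.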
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081781#comment-13081781 ]

Todd Lipcon commented on HBASE-4120:
------------------------------------

Right -- we would probably want to change the queue implementation over to something more advanced like Jia Liu has done, but we already have code there that calls out to a prioritizing function which returns an int. So I don't think we need to subclass HBaseServer to add that.
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081783#comment-13081783 ]

jirapos...@reviews.apache.org commented on HBASE-4176:
------------------------------------------------------

This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1326/#review1351

This patch looks good. I can fix the below on commit. What I want to know is if this patch is predicated on HBASE-1744. Should we get that in first? Thanks Anirudh.

/src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java
https://reviews.apache.org/r/1326/#comment3049
No @author tags in Apache src allowed.

/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java
https://reviews.apache.org/r/1326/#comment3050
Ditto

/src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java
https://reviews.apache.org/r/1326/#comment3051
Ditto

- Michael

On 2011-08-09 06:46:26, Anirudh Todi wrote:
bq. This is an automatically generated e-mail. To reply, visit:
bq. https://reviews.apache.org/r/1326/
bq. (Updated 2011-08-09 06:46:26)
bq.
bq. Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray.
bq.
bq. Summary
bq. -------
bq. https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API
bq.
bq. Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API, making it messy and long. With this patch, I am trying to add support for all the filters in a clean way. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named "Filter Language".
bq.
bq. This patch is trying to extend and further the progress made by the patches in HBASE-1744. There is a document attached to the HBASE-4176 JIRA that describes this patch in further detail.
bq.
bq. This addresses bug HBASE-4176.
bq. https://issues.apache.org/jira/browse/HBASE-4176
bq.
bq. Diffs
bq. -----
bq. /src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java PRE-CREATION
bq. /src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java PRE-CREATION
bq. /src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java 1155098
bq. /src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java 1155098
bq. /src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java PRE-CREATION
bq.
bq. Diff: https://reviews.apache.org/r/1326/diff
bq.
bq. Testing
bq. -------
bq. patch includes one test: TestParseFilter.java
bq.
bq. Thanks,
bq. Anirudh

Exposing HBase Filters to the Thrift API
----------------------------------------
Key: HBASE-4176
URL: https://issues.apache.org/jira/browse/HBASE-4176
Project: HBase
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081784#comment-13081784 ]

Ted Yu commented on HBASE-4120:
-------------------------------

See also QoSFunction.apply() in HRegionServer:
{code}
// scanner methods...
if (methodName.equals("next") || methodName.equals("close")) {
  ...
  if (regionName.isMetaRegion()) {
    // LOG.debug("High priority scanner request: " + scannerId);
    return HIGH_QOS;
  }
{code}
Todd suggested checking against the table descriptor to determine priority for the method.
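Extending the quoted apply() in the direction Todd suggests could look roughly like the sketch below: meta-region scanner calls keep the highest level, and everything else falls back to a per-table priority. The table-priority map stands in for values that would really come from the table descriptor; all names and levels here are hypothetical:

```java
import java.util.Map;

// Sketch of a QoSFunction-style priority: meta-region scanner calls get
// HIGH_QOS, otherwise the level comes from a per-table setting (here a
// plain map standing in for HTableDescriptor-backed configuration).
public class QosSketch {
  static final int HIGH_QOS = 100;
  static final int NORMAL_QOS = 0;

  // Stand-in for per-table priority read from table descriptors.
  static final Map<String, Integer> tablePriority =
      Map.of("important_table", 50, "batch_table", 0);

  static int apply(String methodName, String tableName, boolean metaRegion) {
    if ((methodName.equals("next") || methodName.equals("close")) && metaRegion) {
      return HIGH_QOS;  // keep catalog scans responsive
    }
    return tablePriority.getOrDefault(tableName, NORMAL_QOS);
  }

  public static void main(String[] args) {
    System.out.println(apply("next", ".META.", true));           // 100
    System.out.println(apply("get", "important_table", false));  // 50
  }
}
```

This keeps the prioritization inside the existing chassis (a function returning an int) rather than subclassing the RPC server, which is the point both Todd and stack make above.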
[jira] [Commented] (HBASE-4169) FSUtils LeaseRecovery for non HDFS FileSystems.
[ https://issues.apache.org/jira/browse/HBASE-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081787#comment-13081787 ]

stack commented on HBASE-4169:
------------------------------

I applied 4169-correction.txt. Thanks Lohit. (You might want to make a doc patch to go along with this change, Lohit, to explain what's needed to make HBase run on MapR.)

FSUtils LeaseRecovery for non HDFS FileSystems.
-----------------------------------------------
Key: HBASE-4169
URL: https://issues.apache.org/jira/browse/HBASE-4169
Project: HBase
Issue Type: Bug
Components: util
Affects Versions: 0.90.3, 0.90.4
Reporter: Lohit Vijayarenu
Assignee: Lohit Vijayarenu
Fix For: 0.92.0
Attachments: 4169-correction.txt, 4169-v4.txt, 4169-v5.txt, HBASE-4169.1.patch, HBASE-4169.2.patch, HBASE-4196.3.patch, HBASE-4196.3.v2.patch

FSUtils.recoverFileLease uses HDFS's recoverLease method to get the lease before splitting an hlog file. This might not work for other filesystem implementations.
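One way to avoid hard-wiring HDFS here is to probe the FileSystem for an HDFS-style recoverLease method at runtime and skip recovery when it is absent. The sketch below only illustrates that reflection probe; it is an assumption about the approach, not the code in the attached patch:

```java
import java.lang.reflect.Method;

// Sketch: probe a FileSystem class for an HDFS-style recoverLease method
// via reflection instead of assuming HDFS, so non-HDFS implementations
// (e.g. MapR's) can skip it or supply their own recovery path.
public class LeaseRecoveryCheck {
  static boolean hasRecoverLease(Class<?> fsClass) {
    for (Method m : fsClass.getMethods()) {
      if (m.getName().equals("recoverLease")) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // Plain java.io classes have no such method, illustrating the probe.
    System.out.println(hasRecoverLease(java.io.File.class));  // false
  }
}
```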
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081791#comment-13081791 ]

jirapos...@reviews.apache.org commented on HBASE-4176:
------------------------------------------------------

This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1326/

(Updated 2011-08-09 17:44:58.530486)

Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray.

Changes
-------

Whoops - sorry. When you said add author and date to docs, were you referring to the document explaining this patch? I might have misinterpreted you. This patch doesn't depend on HBASE-1744 in any way. I am looking into how we could support this mini-language from the hbase shell. When you say you should be able to specify a filter using the full package name - does that mean the filter string can look like org.apache.hadoop.hbase.filter.KeyOnlyFilter()?

Summary
-------

https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API

Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API, making it messy and long. With this patch, I am trying to add support for all the filters in a clean way. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named "Filter Language". This patch is trying to extend and further the progress made by the patches in HBASE-1744. There is a document attached to the HBASE-4176 JIRA that describes this patch in further detail.

This addresses bug HBASE-4176.
https://issues.apache.org/jira/browse/HBASE-4176 Diffs (updated) - /src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java PRE-CREATION /src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java PRE-CREATION /src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java 1155098 /src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java 1155098 /src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java PRE-CREATION Diff: 
https://reviews.apache.org/r/1326/diff Testing --- patch includes one test: TestParseFilter.java Thanks, Anirudh Exposing HBase Filters to the Thrift API Key: HBASE-4176 URL: https://issues.apache.org/jira/browse/HBASE-4176 Project: HBase Issue Type: Improvement Components: thrift Reporter: Anirudh Todi Assignee: Anirudh Todi Priority: Minor Attachments: Filter Language.docx, HBASE-4176.patch Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. With this patch, I am trying to add support for all the filters in a clean way. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language This patch is trying to extend and further the progress made by the patches in the HBASE-1744 JIRA
[jira] [Commented] (HBASE-4147) StoreFile query usage report
[ https://issues.apache.org/jira/browse/HBASE-4147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081793#comment-13081793 ]

Nicolas Spiegelberg commented on HBASE-4147:
--------------------------------------------

@Doug: what is your goal for this JIRA? Collecting stats on StoreFile usage is really good from a core developer perspective, but it sounds like you want better DBA tools. For example:
1) Get sampling. You just want a way to log every 1k database commands and have some collector that displays high-level information on get vs put rate, with basic filtering capabilities.
2) Note that we're developing a version of "show processlist" for HBase that might also provide the visibility you want (HBASE-4057).
3) Another option is exporting per-CF metrics in addition to our existing per-server metrics. We have this sorta hacked up for 89 and could give you the diffs if you want to finish it off for 92.

StoreFile query usage report
----------------------------
Key: HBASE-4147
URL: https://issues.apache.org/jira/browse/HBASE-4147
Project: HBase
Issue Type: Improvement
Reporter: Doug Meil
Priority: Minor
Attachments: hbase_4147_storefilereport.pdf

Detailed information on what HBase is doing in terms of reads is hard to come by. What would be useful is a periodic StoreFile query report. Specifically, this could run on a configured interval (e.g., every 30 or 60 seconds) and dump the output to the log files. This would list all StoreFiles accessed during the reporting period (and with the Path we would also know region, CF, and table), the number of times each StoreFile was accessed, the size of the StoreFile, and the total time (ms) spent processing that StoreFile. Even this level of summary would be useful to detect which tables/CFs are being accessed the most, and including the StoreFile would provide insight into relative uncompaction (i.e., lots of StoreFiles). I think the log output, as opposed to a UI, is an important facet of this. I'm assuming that users will slice and dice this data on their own, so I think we should skip any kind of admin view for now (i.e., new JSPs, new APIs to expose this data). Just getting this to the log file would be a big improvement. Will this have a non-zero performance impact? Yes. Hopefully small, but yes it will. However, flying a plane without any instrumentation isn't fun. :-)
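The "log every 1k commands" sampling Nicolas describes keeps the hot path cheap: a single counter increment decides whether a command is recorded. A minimal sketch (names and sample rate are illustrative, not from any HBase patch):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of counter-based request sampling: only 1 in SAMPLE_EVERY
// commands is recorded, so the per-request cost is one atomic increment
// regardless of how much detail the collector stores for sampled ops.
public class SampledLogger {
  static final long SAMPLE_EVERY = 1000;
  static final AtomicLong seen = new AtomicLong();

  // Returns true when this command should be logged.
  static boolean shouldSample() {
    return seen.incrementAndGet() % SAMPLE_EVERY == 0;
  }

  public static void main(String[] args) {
    int logged = 0;
    for (int i = 0; i < 5000; i++) {
      if (shouldSample()) {
        logged++;  // a real collector would record op type, table, latency
      }
    }
    System.out.println("logged " + logged + " of 5000");
  }
}
```

This trades completeness for overhead, which fits the comment's point that DBA-style visibility does not require recording every operation.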
[jira] [Updated] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated HBASE-4181:
--------------------------
    Affects Version/s: (was: 0.90.4)

The cited code doesn't apply to the 0.90 branch.
[jira] [Commented] (HBASE-4168) A client continues to try and connect to a powered down regionserver
[ https://issues.apache.org/jira/browse/HBASE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081796#comment-13081796 ] Ted Yu commented on HBASE-4168: --- This happened in our staging cluster this morning. System event log: {code} Tue Aug 09 2011 14:52:54System Software event: OS Stop sensor, run-time critical stop was asserted 0.10 {code} Master came down after that. Here is snippet of master log: {code} 2011-08-09 15:12:13,147 FATAL org.apache.hadoop.hbase.master.HMaster: verifyAndAssignRoot failed after10 times retries, aborting java.net.NoRouteToHostException: No route to host at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408) at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:328) at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:883) at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:750) at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257) at $Proxy8.getRegionInfo(Unknown Source) at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:426) at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRootRegionLocation(CatalogTracker.java:473) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:91) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRootWithRetries(ServerShutdownHandler.java:110) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:163) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) 2011-08-09 15:12:13,147 INFO org.apache.hadoop.hbase.master.HMaster: Aborting 2011-08-09 15:12:13,147 ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable while processing event M_META_SERVER_SHUTDOWN java.io.IOException: Aborting at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRootWithRetries(ServerShutdownHandler.java:119) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:163) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: java.net.NoRouteToHostException: No route to host at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408) at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:328) at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:883) at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:750) at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257) at $Proxy8.getRegionInfo(Unknown Source) at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:426) at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRootRegionLocation(CatalogTracker.java:473) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:91) at 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRootWithRetries(ServerShutdownHandler.java:110) ... 5 more 2011-08-09 15:12:13,809 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads {code} A client continues to try and connect to a powered down regionserver Key: HBASE-4168 URL: https://issues.apache.org/jira/browse/HBASE-4168 Project: HBase Issue Type: Bug Reporter: Anirudh Todi Assignee: Anirudh Todi Priority: Minor Attachments: HBASE-4168(2).patch, HBASE-4168-revised.patch, HBASE-4168.patch,
[jira] [Updated] (HBASE-4168) A client continues to try and connect to a powered down regionserver
[ https://issues.apache.org/jira/browse/HBASE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-4168: -- Priority: Critical (was: Minor) A client continues to try and connect to a powered down regionserver Key: HBASE-4168 URL: https://issues.apache.org/jira/browse/HBASE-4168 Project: HBase Issue Type: Bug Reporter: Anirudh Todi Assignee: Anirudh Todi Priority: Critical Attachments: HBASE-4168(2).patch, HBASE-4168-revised.patch, HBASE-4168.patch, hbase-hadoop-master-msgstore232.snc4.facebook.com.log Experiment-1 Started a dev cluster - META is on the same regionserver as my key-value. I kill the regionserver process but donot power down the machine. The META is able to migrate to a new regionserver and the regions are also able to reopen elsewhere. The client is able to talk to the META and find the new kv location and get it. Experiment-2 Started a dev cluster - META is on a different regionserver as my key-value. I kill the regionserver process but donot power down the machine. The META remains where it is and the regions are also able to reopen elsewhere. The client is able to talk to the META and find the new kv location and get it. Experiment-3 Started a dev cluster - META is on a different regionserver as my key-value. I power down the machine hosting this regionserver. The META remains where it is and the regions are also able to reopen elsewhere. The client is able to talk to the META and find the new kv location and get it. Experiment-4 (This is the problematic one) Started a dev cluster - META is on the same regionserver as my key-value. I power down the machine hosting this regionserver. 
The META is able to migrate to a new regionserver - however - it takes a really long time (~30 minutes). The regions on that regionserver do NOT reopen (I waited for 1 hour). The client is able to find the new location of the META; however, the META keeps redirecting the client to the powered down regionserver as the location of the key-value it is trying to get. Thus the client's get is unsuccessful. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
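Experiment 4 is slow because each connect attempt to a powered-down host (which silently drops SYNs, unlike a killed process that replies with RST) waits out a full TCP connect timeout, and the client then sleeps an escalating pause between retries. As a rough sketch of how the waits add up — the multiplier table below mirrors the shape of HBase's HConstants.RETRY_BACKOFF, but treat the exact values as an assumption:

```java
import java.util.Arrays;

// Sketch: cumulative client-side sleep across retries, assuming an
// HBase-0.90-style backoff multiplier table (assumed values) and a base
// pause of 1000 ms. Each failed connect to a powered-down host
// additionally burns a full TCP connect timeout before the sleep.
public class RetryBudget {
    // Assumed multiplier table resembling HConstants.RETRY_BACKOFF.
    static final int[] BACKOFF = {1, 1, 1, 2, 2, 4, 4, 8, 16, 32};

    static long totalSleepMs(long basePauseMs, int retries) {
        long total = 0;
        for (int i = 0; i < retries; i++) {
            // Clamp the index so retries beyond the table reuse the last entry.
            int idx = Math.min(i, BACKOFF.length - 1);
            total += basePauseMs * BACKOFF[idx];
        }
        return total;
    }

    public static void main(String[] args) {
        // 1+1+1+2+2+4+4+8+16+32 = 71 units of pause -> 71 s of sleep alone,
        // before counting the per-attempt connect timeouts.
        System.out.println("total sleep ms = " + totalSleepMs(1000, 10));
    }
}
```

With the connect timeout added per attempt, tens of minutes for META reassignment paths that retry at several layers is plausible.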
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081801#comment-13081801 ] jirapos...@reviews.apache.org commented on HBASE-4176: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1326/#review1352 --- bq. Whoops - sorry. When you said add author and date to docs, were you referring to the document explaining this patch? I might have misinterpreted you. I should have been more clear. Yes, I was referring to your word doc. These things often get viewed outside of their surrounding context. Things like data, author, and back pointer to context are generally good to have in there. bq. This patch doesn't depend on HBASE-1744 in any way. Good. bq. I am looking into how we could support this mini-language from the hbase shell. Not important. Just a thought. bq. When you say you should be able to specify a filter using the full package name - does that mean the filter string can look like: org.apache.hadoop.hbase.filter.KeyOnlyFilter()? Yes. Not important. Can be done in a different issue but something to consider. Let me commit this. - Michael On 2011-08-09 17:44:58, Anirudh Todi wrote: bq. bq. --- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/1326/ bq. --- bq. bq. (Updated 2011-08-09 17:44:58) bq. bq. bq. Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray. bq. bq. bq. Summary bq. --- bq. bq. https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API bq. bq. Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. bq. With this patch, I am trying to add support for all the filters in a clean way. bq. The user specifies a filter via a string. The string is parsed on the server to construct the filter. 
More information can be found in the attached document named Filter Language bq. bq. This patch is trying to extend and further the progress made by the patches in HBASE-1744 bq. bq. There is document attached to the HBASE-4176 JIRA that describes this patch in further detail bq. bq. bq. This addresses bug HBASE-4176. bq. https://issues.apache.org/jira/browse/HBASE-4176 bq. bq. bq. Diffs bq. - bq. bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155098 bq. /src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155098 bq. /src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java 1155098 bq. 
/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java 1155098 bq. /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java 1155098 bq./src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java PRE-CREATION bq. bq. Diff: https://reviews.apache.org/r/1326/diff bq. bq. bq. Testing bq. --- bq. bq. patch includes one test:
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081806#comment-13081806 ] jirapos...@reviews.apache.org commented on HBASE-4176: -- bq. On 2011-08-09 18:05:11, Michael Stack wrote: bq.Whoops - sorry. When you said add author and date to docs, were you referring to the document explaining this patch? I might have misinterpreted you. bq. bq. I should have been more clear. Yes, I was referring to your word doc. These things often get viewed outside of their surrounding context. Things like data, author, and back pointer to context are generally good to have in there. bq. bq.This patch doesn't depend on HBASE-1744 in any way. bq. bq. Good. bq. bq. I am looking into how we could support this mini-language from the hbase shell. bq. bq. Not important. Just a thought. bq. bq.When you say you should be able to specify a filter using the full package name - does that mean the filter string can look like: org.apache.hadoop.hbase.filter.KeyOnlyFilter()? bq. bq. Yes. Not important. Can be done in a different issue but something to consider. bq. bq. Let me commit this. Or, hangon... this patch is incomplete? Is that so? I don't see the addition of new scannerOpenWithFilterString function to thrift? Is that to come in a later patch? - Michael --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1326/#review1352 --- On 2011-08-09 17:44:58, Anirudh Todi wrote: bq. bq. --- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/1326/ bq. --- bq. bq. (Updated 2011-08-09 17:44:58) bq. bq. bq. Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray. bq. bq. bq. Summary bq. --- bq. bq. https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API bq. bq. 
Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. bq. With this patch, I am trying to add support for all the filters in a clean way. bq. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language bq. bq. This patch is trying to extend and further the progress made by the patches in HBASE-1744 bq. bq. There is document attached to the HBASE-4176 JIRA that describes this patch in further detail bq. bq. bq. This addresses bug HBASE-4176. bq. https://issues.apache.org/jira/browse/HBASE-4176 bq. bq. bq. Diffs bq. - bq. bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155098 bq. /src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155098 bq. 
/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java 1155098 bq. /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java 1155098 bq. /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java
[jira] [Updated] (HBASE-4177) Handling read failures during recovery - when HMaster calls Namenode recovery, recovery may be a failure leading to read failure while splitting logs
[ https://issues.apache.org/jira/browse/HBASE-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-4177: - Priority: Critical (was: Major) Handling read failures during recovery - when HMaster calls Namenode recovery, recovery may be a failure leading to read failure while splitting logs -- Key: HBASE-4177 URL: https://issues.apache.org/jira/browse/HBASE-4177 Project: HBase Issue Type: Bug Components: master Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Priority: Critical As per the mailing thread with the heading 'Handling read failures during recovery' we found this problem. As part of split Logs the HMaster calls Namenode recovery. The recovery is an asynchronous process. In HDFS === Even though client is getting the updated block info from Namenode on first read failure, client is discarding the new info and using the old info only to retrieve the data from datanode. So, all the read retries are failing. [Method parameter reassignment - Not reflected in caller]. In HBASE === In HMaster code we tend to wait for 1sec. But if the recovery had some failure then split log may not happen and may lead to dataloss. So may be we need to decide upon the actual delay that needs to be introduced once Hmaster calls NN recovery. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
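The problem stated above is that HMaster sleeps a fixed 1 s after kicking off the asynchronous Namenode recovery and then proceeds as if it succeeded. A minimal sketch of the alternative the report hints at — poll a recovery check under a bounded deadline instead of a blind sleep — might look like this; the BooleanSupplier is a hypothetical stand-in for whatever "is the lease recovered?" probe the Namenode exposes, which the report does not name:

```java
import java.util.function.BooleanSupplier;

// Sketch: wait for an asynchronous recovery to complete by polling a
// check with a bounded deadline, instead of a single fixed 1 s sleep.
// The supplier is a hypothetical probe, not a real HDFS/HBase API.
public class RecoveryWait {
    static boolean waitForRecovery(BooleanSupplier isRecovered,
                                   long pollMs, long deadlineMs)
            throws InterruptedException {
        long waited = 0;
        while (!isRecovered.getAsBoolean()) {
            if (waited >= deadlineMs) {
                return false; // give up: caller must not split this log yet
            }
            Thread.sleep(pollMs);
            waited += pollMs;
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a recovery that completes on the third poll.
        int[] calls = {0};
        boolean ok = waitForRecovery(() -> ++calls[0] >= 3, 10, 1000);
        System.out.println("recovered = " + ok + " after " + calls[0] + " checks");
    }
}
```

Returning false (rather than proceeding) is what prevents the split-log read failure and potential data loss described above.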
[jira] [Commented] (HBASE-4120) isolation and allocation
[ https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081813#comment-13081813 ] Todd Lipcon commented on HBASE-4120: Ted asked me to add more detail to my comments here - sorry for being terse above. I was booted in Windows where I don't have access to the source code. In the patch on Review Board right now, PriorityHBaseServer inherits from Server to do the following: - sub out the callQueue object for a different implementation, using reflection no less. That's totally unacceptable style in my opinion -- there has to be a better way to do it. - uses reflection to get access to certain private methods of HBaseRegionServer - again unacceptable - the queue that's interjected above calls getCallPriority on each Call object in order to determine where it belongs in the queue Instead, what I'm suggesting is: - extend the QosFunction that we already have implemented inside HRegionServer.java with the new logic, and move it out to its own class, since it's much more complicated with these additions. (perhaps retain the old simple one as a default implementation, and construct the QosFunction based on a config like hbase.regionserver.rpc.prioritizer or something) - modify the existing HBaseServer implementation so that, instead of just having two queues (high and low priority) it uses a priority queue -- perhaps something like PriorityQueuePrioritizedCall, where PrioritizedCall is a wrapper around the int (returned from the qosFunction) and the original Call object, with compareTo set to compare priorities. - keep existing behavior of having multiple pools of handlers, where some handlers are reserved for high priority calls - this could either be generalized or left as is. I haven't looked much at the specific code, but I think the overall structure needs to be moved around a bit before we can get into the specific code review. 
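Todd's second bullet — a single priority queue of PrioritizedCall, where PrioritizedCall wraps the int returned by the QosFunction together with the original Call and compareTo compares priorities — can be sketched with plain JDK types; the String payload below is just a stand-in for the real Call object:

```java
import java.util.PriorityQueue;

// Sketch of the proposed call queue: wrap each RPC call with the int
// priority returned by the (pluggable) QosFunction and let a
// PriorityQueue order them so higher-priority calls dequeue first.
public class PrioritizedCallQueue {
    static final class PrioritizedCall implements Comparable<PrioritizedCall> {
        final int priority;   // value from the QosFunction
        final String call;    // stand-in for the original Call object

        PrioritizedCall(int priority, String call) {
            this.priority = priority;
            this.call = call;
        }

        @Override
        public int compareTo(PrioritizedCall other) {
            // Reversed: larger priority values come out of the queue first.
            return Integer.compare(other.priority, this.priority);
        }
    }

    public static void main(String[] args) {
        PriorityQueue<PrioritizedCall> queue = new PriorityQueue<>();
        queue.add(new PrioritizedCall(0, "userScan"));
        queue.add(new PrioritizedCall(100, "metaLookup"));
        queue.add(new PrioritizedCall(10, "adminOp"));
        // High-priority calls are handled first regardless of arrival order.
        System.out.println(queue.poll().call); // metaLookup
    }
}
```

Note java.util.PriorityQueue does not preserve FIFO order among equal priorities, so a production version would likely add a sequence number to the comparison to keep same-priority calls fair.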
isolation and allocation Key: HBASE-4120 URL: https://issues.apache.org/jira/browse/HBASE-4120 Project: HBase Issue Type: New Feature Components: master, regionserver Affects Versions: 0.90.2, 0.90.3, 0.90.4, 0.92.0 Reporter: Liu Jia Fix For: 0.90.3 Attachments: Design_document_for_HBase_isolation_and_allocation.pdf, Design_document_for_HBase_isolation_and_allocation_Revised.pdf, HBase_isolation_and_allocation_user_guide.pdf, Performance_of_Table_priority.pdf, System Structure.jpg, TablePriority.patch The HBase isolation and allocation tool is designed to help users manage cluster resource among different application and tables. When we have a large scale of HBase cluster with many applications running on it, there will be lots of problems. In Taobao there is a cluster for many departments to test their applications performance, these applications are based on HBase. With one cluster which has 12 servers, there will be only one application running exclusively on this server, and many other applications must wait until the previous test finished. After we add allocation manage function to the cluster, applications can share the cluster and run concurrently. Also if the Test Engineer wants to make sure there is no interference, he/she can move out other tables from this group. In groups we use table priority to allocate resource, when system is busy; we can make sure high-priority tables are not affected lower-priority tables Different groups can have different region server configurations, some groups optimized for reading can have large block cache size, and others optimized for writing can have large memstore size. Tables and region servers can be moved easily between groups; after changing the configuration, a group can be restarted alone instead of restarting the whole cluster. git entry : https://github.com/ICT-Ope/HBase_allocation . We hope our work is helpful. -- This message is automatically generated by JIRA. 
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081820#comment-13081820 ] jirapos...@reviews.apache.org commented on HBASE-4176: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1326/#review1357 --- I hesitate to bring this up, but: do we really want to include a hand-written lexer/parser here? Why not build on something like javacc or antlr? I fear this will be difficult to maintain or extend as is, and as a user facing API, once we commit it, we're stuck with it. - Todd On 2011-08-09 17:44:58, Anirudh Todi wrote: bq. bq. --- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/1326/ bq. --- bq. bq. (Updated 2011-08-09 17:44:58) bq. bq. bq. Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray. bq. bq. bq. Summary bq. --- bq. bq. https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API bq. bq. Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. bq. With this patch, I am trying to add support for all the filters in a clean way. bq. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language bq. bq. This patch is trying to extend and further the progress made by the patches in HBASE-1744 bq. bq. There is document attached to the HBASE-4176 JIRA that describes this patch in further detail bq. bq. bq. This addresses bug HBASE-4176. bq. https://issues.apache.org/jira/browse/HBASE-4176 bq. bq. bq. Diffs bq. - bq. bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155098 bq. 
/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155098 bq. /src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java 1155098 bq. /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java 1155098 bq. /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java 1155098 bq./src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java 1155098 bq./src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java PRE-CREATION bq. bq. Diff: https://reviews.apache.org/r/1326/diff bq. bq. bq. Testing bq. --- bq. bq. 
patch includes one test: TestParseFilter.java bq. bq. bq. Thanks, bq. bq. Anirudh bq. bq. Exposing HBase Filters to the Thrift API Key: HBASE-4176 URL: https://issues.apache.org/jira/browse/HBASE-4176 Project: HBase Issue Type: Improvement Components: thrift Reporter: Anirudh Todi Assignee: Anirudh Todi Priority: Minor Attachments: Filter Language.docx, HBASE-4176.patch Currently, to use any of the filters, one has to explicitly add a scanner for the
[jira] [Commented] (HBASE-4173) developer.xml - changing the chapter id because it had the same name as the build chapter
[ https://issues.apache.org/jira/browse/HBASE-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081821#comment-13081821 ] stack commented on HBASE-4173: -- +1 post-commit developer.xml - changing the chapter id because it had the same name as the build chapter - Key: HBASE-4173 URL: https://issues.apache.org/jira/browse/HBASE-4173 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Minor Attachments: developer_HBASE_4173.xml.patch I just realized that the developer chapter's xml:id was build - and so was the build chapter's xml:id. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4172) hbase book - laundry list of changes (8-6-2011)
[ https://issues.apache.org/jira/browse/HBASE-4172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081825#comment-13081825 ] stack commented on HBASE-4172: -- +1 post-commit (nice doc Doug) hbase book - laundry list of changes (8-6-2011) --- Key: HBASE-4172 URL: https://issues.apache.org/jira/browse/HBASE-4172 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Minor Attachments: docbkx_HBASE_4172.patch Laundry list of changes: * added to 'no reducer' sub-section in performance chapter that it doesn't apply when doing summarization and when the reducer doing the writing. * new sub-section in config section about manually managed major compactions. we never actually had that anywhere in the book, although everybody does it. * arch/client - new sub-section of htablepool for high-concurrency situations. * sub-section under 'supported datatypes' for counters (although people use this feature a lot, it's not in any of the docs) * fixed typo in arch/client (changed -ROOT to -ROOT-) * added more in developing hbase section (more on code-style, added a codelines sub-section, and a few other nits). -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-4183) FSUtils checkFileSystem() should not close the FileSystem by default
FSUtils checkFileSystem() should not close the FileSystem by default Key: HBASE-4183 URL: https://issues.apache.org/jira/browse/HBASE-4183 Project: HBase Issue Type: Bug Reporter: Pritam Damania The checkFileSystem() function in FSUtils closes down the FileSystem for the HRegionServer by default if the FileSystem is not available. Ideally we should let the HRegionServer threads exit and then shut down the FileSystem. The checkFileSystem() function should not kill the FileSystem by default. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
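The ordering the report asks for — let the HRegionServer threads exit first, then shut the FileSystem down — is the standard close-after-quiesce pattern. A generic JDK sketch, with ExecutorService standing in for the regionserver threads and a Closeable for the FileSystem (neither is the real HBase type):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the shutdown ordering HBASE-4183 asks for: quiesce and join
// the worker threads first, and only then close the shared resource, so
// no in-flight thread observes a closed FileSystem.
public class OrderedShutdown {
    static void shutdown(ExecutorService workers, Closeable sharedFs,
                         long graceSeconds)
            throws IOException, InterruptedException {
        workers.shutdown(); // stop accepting new work
        if (!workers.awaitTermination(graceSeconds, TimeUnit.SECONDS)) {
            workers.shutdownNow(); // interrupt stragglers
            workers.awaitTermination(graceSeconds, TimeUnit.SECONDS);
        }
        sharedFs.close(); // safe: no worker thread can still be using it
    }

    public static void main(String[] args) throws Exception {
        AtomicBoolean closed = new AtomicBoolean(false);
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> { /* pretend regionserver work */ });
        shutdown(pool, () -> closed.set(true), 5);
        System.out.println("fs closed after workers exited: " + closed.get());
    }
}
```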
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081841#comment-13081841 ] jirapos...@reviews.apache.org commented on HBASE-4176: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1326/ --- (Updated 2011-08-09 18:59:28.048034) Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray. Changes --- Oops - forgot to add that (done now) I have used thrift-0.5 to generate Hbase.java - I don't know if this is the version of Thrift I should have used. It builds fine and passes all the tests in TestParseFilter.java in my development environment. @Todd - this was discussed internally. However, it was decided that it would be too heavy-weight for this patch. However, I defer it to your opinion Summary --- https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. With this patch, I am trying to add support for all the filters in a clean way. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language This patch is trying to extend and further the progress made by the patches in HBASE-1744 There is document attached to the HBASE-4176 JIRA that describes this patch in further detail This addresses bug HBASE-4176. 
https://issues.apache.org/jira/browse/HBASE-4176 Diffs (updated) - /src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java PRE-CREATION /src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java PRE-CREATION /src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java 1155450 /src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java 1155450 
/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java 1155450 /src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift 1155450 /src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java PRE-CREATION Diff: https://reviews.apache.org/r/1326/diff Testing --- patch includes one test: TestParseFilter.java Thanks, Anirudh Exposing HBase Filters to the Thrift API Key: HBASE-4176 URL: https://issues.apache.org/jira/browse/HBASE-4176 Project: HBase Issue Type: Improvement Components: thrift Reporter: Anirudh Todi Assignee: Anirudh Todi Priority: Minor Attachments: Filter Language.docx, HBASE-4176.patch Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. With this patch, I am trying to add support for all the filters in a clean way. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in
[jira] [Updated] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anirudh Todi updated HBASE-4176: Attachment: Filter Language(2).docx Updated Filter Language document to include - author, date and a reference to this JIRA Exposing HBase Filters to the Thrift API Key: HBASE-4176 URL: https://issues.apache.org/jira/browse/HBASE-4176 Project: HBase Issue Type: Improvement Components: thrift Reporter: Anirudh Todi Assignee: Anirudh Todi Priority: Minor Attachments: Filter Language(2).docx, Filter Language.docx, HBASE-4176.patch Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. With this patch, I am trying to add support for all the filters in a clean way. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language This patch is trying to extend and further the progress made by the patches in the HBASE-1744 JIRA (https://issues.apache.org/jira/browse/HBASE-1744) -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
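The review thread above debates a hand-written lexer/parser for filter strings. As a toy illustration of what the hand-rolled lexing step involves, here is a regex-driven tokenizer for a string shaped like `KeyOnlyFilter() AND PageFilter(2)` — the token classes and filter names are illustrative only, not the grammar defined in the attached Filter Language document:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy lexer for a filter string like "KeyOnlyFilter() AND PageFilter(2)".
// Illustrative only: the real grammar lives in the "Filter Language"
// document attached to HBASE-4176, and ParseFilter is the real parser.
public class FilterLexer {
    // Identifiers, integers, single-quoted strings, parens, and commas.
    static final Pattern TOKEN = Pattern.compile(
        "\\s*([A-Za-z_][A-Za-z0-9_]*|\\d+|'[^']*'|[(),])");

    static List<String> tokenize(String filterString) {
        filterString = filterString.trim();
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(filterString);
        int pos = 0;
        while (m.find() && m.start() == pos) {
            tokens.add(m.group(1)); // group 1 excludes leading whitespace
            pos = m.end();
        }
        if (pos != filterString.length()) {
            throw new IllegalArgumentException(
                "unexpected character at offset " + pos);
        }
        return tokens;
    }

    public static void main(String[] args) {
        // → [KeyOnlyFilter, (, ), AND, PageFilter, (, 2, )]
        System.out.println(tokenize("KeyOnlyFilter() AND PageFilter(2)"));
    }
}
```

Even this toy shows Todd's maintenance concern: every new token class or operator means editing the regex and downstream parser by hand, whereas a javacc/antlr grammar centralizes those changes.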
[jira] [Commented] (HBASE-4181) HConnectionManager can't find cached HRegionInterface makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081844#comment-13081844 ] Hudson commented on HBASE-4181: --- Integrated in HBase-TRUNK #2100 (See [https://builds.apache.org/job/HBase-TRUNK/2100/]) HBASE-4181 HConnectionManager can't find cached HRegionInterface and makes clients work very slow stack : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java * /hbase/trunk/CHANGES.txt HConnectionManager can't find cached HRegionInterface makes client very slow Key: HBASE-4181 URL: https://issues.apache.org/jira/browse/HBASE-4181 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.92.0 Reporter: Liu Jia Assignee: Liu Jia Priority: Critical Labels: HConnectionManager Fix For: 0.92.0 Attachments: HBASE-4181-trunk-v2.patch, HBASE-4181-trunk-v3.patch, HBASE-4181.patch, HConnectionManager.patch HRegionInterface getHRegionConnection(final String hostname, final int port, final InetSocketAddress isa, final boolean master) throws IOException / String rsName = isa != null ? isa.toString() : Addressing .createHostAndPortStr(hostname, port); here, if isa is null, Addressing creates an address string like node41:60010; the code should use isa != null ? isa.toString() : new InetSocketAddress(hostname, port).toString() instead of Addressing.createHostAndPortStr(hostname, port); server = this.servers.get(rsName); if (server == null) { // create a unique lock for this RS (if necessary) this.connectionLock.putIfAbsent(rsName, rsName); // get the RS lock synchronized (this.connectionLock.get(rsName)) { // do one more lookup in case we were stalled above server = this.servers.get(rsName); if (server == null) { try { if (clusterId.hasId()) { conf.set(HConstants.CLUSTER_ID, clusterId.getId()); } // Only create isa when we need to. InetSocketAddress address = isa != null ? isa : new InetSocketAddress(hostname, port); // definitely a cache miss. 
establish an RPC for this RS server = (HRegionInterface) HBaseRPC.waitForProxy( serverInterfaceClass, HRegionInterface.VERSION, address, this.conf, this.maxRPCAttempts, this.rpcTimeout, this.rpcTimeout); this.servers.put(address.toString(), server); but here address.toString() produces an address like node41/10.61.21.171:60010, so this method can never find the cached entry, and client requests become very slow because the block is synchronized. } catch (RemoteException e) { LOG.warn("RemoteException connecting to RS", e); // Throw what the RemoteException was carrying. throw RemoteExceptionHandler.decodeRemoteException(e); } } } /// -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
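The bug described above boils down to a key mismatch: the lookup key is built with Addressing.createHostAndPortStr(hostname, port), giving "host:port", while the store key is InetSocketAddress.toString(), which includes the resolved IP ("host/ip:port"), so the cache never hits. A minimal self-contained sketch of the mismatch, using plain Java rather than the HBase classes (the class and method names here are illustrative):

```java
import java.net.InetSocketAddress;
import java.util.concurrent.ConcurrentHashMap;

public class CacheKeyMismatch {
  static final ConcurrentHashMap<String, Object> servers = new ConcurrentHashMap<>();

  // Lookup key as built by Addressing.createHostAndPortStr: "host:port"
  static String lookupKey(String host, int port) {
    return host + ":" + port;
  }

  // Store key as the buggy code built it: InetSocketAddress.toString(),
  // which is "host/ip:port" for a resolved address -- never equal to "host:port".
  static String storeKey(String host, int port) {
    return new InetSocketAddress(host, port).toString();
  }

  public static void main(String[] args) {
    String host = "localhost";
    int port = 60010;
    servers.put(storeKey(host, port), new Object());
    // Every lookup misses, so every call re-creates the proxy under the RS lock.
    System.out.println("lookup hit? " + servers.containsKey(lookupKey(host, port)));
  }
}
```

The fix committed in HBASE-4181 makes the store and lookup sides agree on one key format, restoring cache hits.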
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081853#comment-13081853 ] jirapos...@reviews.apache.org commented on HBASE-4176: -- bq. On 2011-08-09 18:28:06, Todd Lipcon wrote: bq. I hesitate to bring this up, but: do we really want to include a hand-written lexer/parser here? Why not build on something like javacc or antlr? I fear this will be difficult to maintain or extend as is, and as a user facing API, once we commit it, we're stuck with it. This is a valid point. I'm inclined toward committing what we have here. Minor bug fixes should be easy enough since there are tests that can be extended easy enough to cover bugs found. If maintenance going forward is an issue, could redo w/ parser generator in separate issue. - Michael --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1326/#review1357 --- On 2011-08-09 18:59:28, Anirudh Todi wrote: bq. bq. --- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/1326/ bq. --- bq. bq. (Updated 2011-08-09 18:59:28) bq. bq. bq. Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray. bq. bq. bq. Summary bq. --- bq. bq. https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API bq. bq. Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. bq. With this patch, I am trying to add support for all the filters in a clean way. bq. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language bq. bq. This patch is trying to extend and further the progress made by the patches in HBASE-1744 bq. bq. There is document attached to the HBASE-4176 JIRA that describes this patch in further detail bq. bq. bq. 
This addresses bug HBASE-4176. bq. https://issues.apache.org/jira/browse/HBASE-4176 bq. bq. bq. Diffs bq. - bq. bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155450 bq. /src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155450 bq. /src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java 1155450 bq. /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java 1155450 bq. 
/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java 1155450 bq./src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift 1155450 bq./src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java PRE-CREATION bq. bq. Diff: https://reviews.apache.org/r/1326/diff bq. bq. bq. Testing bq. --- bq. bq. patch includes one test:
[jira] [Commented] (HBASE-4015) Refactor the TimeoutMonitor to make it less racy
[ https://issues.apache.org/jira/browse/HBASE-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081859#comment-13081859 ] stack commented on HBASE-4015: -- I like this diagram of yours Ram. Do we need a new RS_ALLOCATE state? Could we just have OFFLINE plus your suggestion of adding the RS name, so it's OFFLINE+RS_TO_OPEN_REGION_ON? What happens if we assign the region back to RS1 (it can happen)? Refactor the TimeoutMonitor to make it less racy Key: HBASE-4015 URL: https://issues.apache.org/jira/browse/HBASE-4015 Project: HBase Issue Type: Sub-task Affects Versions: 0.90.3 Reporter: Jean-Daniel Cryans Assignee: ramkrishna.s.vasudevan Priority: Blocker Fix For: 0.92.0 Attachments: Timeoutmonitor with state diagrams.pdf The current implementation of the TimeoutMonitor acts like a race condition generator, mostly making things worse rather than better. It does its own thing for a while without caring what's happening in the rest of the master. The first thing that needs to happen is that the regions should not be processed in one big batch, because that can sometimes take minutes to process (meanwhile a region that timed out opening might have opened; it will then be reassigned by the TimeoutMonitor, generating the never-ending PENDING_OPEN situation). Those operations should also be done more atomically, although I'm not sure how to do that in a scalable way in this case. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4170) createTable java doc needs to be improved
[ https://issues.apache.org/jira/browse/HBASE-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081860#comment-13081860 ] stack commented on HBASE-4170: -- Would you mind making a patch Mubarak? Thank you. createTable java doc needs to be improved - Key: HBASE-4170 URL: https://issues.apache.org/jira/browse/HBASE-4170 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.90.1 Environment: HBase-0.90.1 Reporter: Mubarak Seyed Fix For: 0.90.1 HBaseAdmin.createTable() java doc says public void createTable(HTableDescriptor desc, byte[][] splitKeys) throws IOException Creates a new table with an initial set of empty regions defined by the specified split keys. The total number of regions created will be the number of split keys plus one (the first region has a null start key and the last region has a null end key). Synchronous operation. If we specify null values for the first region start key and the last region end key, we get a NullPointerException because Arrays.sort compares each element. The documentation should not talk about null values; instead it should explain that splitKeys[][] has length n-1, where n is the number of regions. splitKeys[][] would look like splitKeys[0] = key value 1 .. splitKeys[n-2] = key value n-1 -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
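The point of the documentation fix above: for n regions you pass n-1 split keys, none of them null; the first region implicitly starts at the empty key and the last implicitly ends at it. A small sketch of the resulting region boundaries (plain Java, illustrative only; "" stands in for HBase's empty start/end key):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitKeyRegions {
  // Given n-1 split keys, returns the n [start, end) boundary pairs.
  // "" models HBase's empty key: open-ended at both extremes of the table.
  static List<String[]> regionBoundaries(String[] splitKeys) {
    List<String[]> regions = new ArrayList<>();
    String start = "";
    for (String split : splitKeys) {
      regions.add(new String[] { start, split });
      start = split;
    }
    regions.add(new String[] { start, "" });  // last region: open-ended
    return regions;
  }

  public static void main(String[] args) {
    // 3 split keys => 4 regions
    System.out.println(regionBoundaries(new String[] { "b", "m", "t" }).size());
  }
}
```

This is why passing explicit nulls fails: the boundaries at the extremes are implied, and Arrays.sort over the user-supplied keys cannot handle null elements.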
[jira] [Commented] (HBASE-4161) Incorrect use of listStatus() in HBase region initialization.
[ https://issues.apache.org/jira/browse/HBASE-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081863#comment-13081863 ] stack commented on HBASE-4161: -- Is this patch for hbase TRUNK? We seem to have a check for dir existence already in TRUNK. Incorrect use of listStatus() in HBase region initialization. - Key: HBASE-4161 URL: https://issues.apache.org/jira/browse/HBASE-4161 Project: HBase Issue Type: Bug Components: regionserver Reporter: Pritam Damania Attachments: 0001-Fix-FileNotFoundException-in-HLog.java.patch While opening a region, the HBase regionserver tries to list all the children in a recovered.edits directory. This directory may not exist, and depending on the version of HDFS, listStatus() might return null or throw an exception. If it throws an exception, the entire process of opening the region is aborted, just because the recovered.edits directory is not present. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
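The underlying pattern, a directory that may legitimately not exist, with list behavior differing (null vs exception) across versions, generalizes beyond HDFS. Below is a sketch of the defensive handling using java.nio rather than the Hadoop FileSystem API, so that it is self-contained; the real fix in the attached patch touches HLog.java and the HDFS listStatus() call instead.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class SafeListing {
  // Treat a missing directory as "no recovered edits" instead of aborting
  // the whole region-open.
  static List<Path> listOrEmpty(Path dir) {
    List<Path> result = new ArrayList<>();
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
      for (Path p : stream) {
        result.add(p);
      }
    } catch (NoSuchFileException missing) {
      // Directory absent: fine, return the empty list.
    } catch (IOException e) {
      throw new UncheckedIOException(e);  // real I/O errors still surface
    }
    return result;
  }

  public static void main(String[] args) {
    System.out.println(listOrEmpty(Paths.get("no-such-recovered.edits")).size());
  }
}
```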
[jira] [Updated] (HBASE-4114) Metrics for HFile HDFS block locality
[ https://issues.apache.org/jira/browse/HBASE-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HBASE-4114: --- Attachment: HBASE-4114-trunk.patch Thanks, Stack. Here is the fix for everything except the case of two regionservers running on the same host. HostAndWeight is used to capture the general HDFS block distribution; these are datanode hosts, so hbase isn't involved. At runtime the RS will query HostAndWeight with its own host name. If there are two RS instances on the same host, each will have its own HFile instances' HostAndWeight and aggregate them independently. Metrics for HFile HDFS block locality - Key: HBASE-4114 URL: https://issues.apache.org/jira/browse/HBASE-4114 Project: HBase Issue Type: Improvement Components: metrics, regionserver Reporter: Ming Ma Assignee: Ming Ma Attachments: HBASE-4114-trunk.patch, HBASE-4114-trunk.patch, HBASE-4114-trunk.patch, HBASE-4114-trunk.patch, HBASE-4114-trunk.patch, HBASE-4114-trunk.patch Normally, when we put hbase and HDFS in the same cluster (e.g., the region server runs on the datanode), we have reasonably good data locality, as explained by Lars. Also, work has been done by Jonathan to address the startup situation. There are scenarios where regions can be on a different machine from the machines that hold the underlying HFile blocks, at least for some period of time. This will impact the performance of whole table scan operations and map reduce jobs during that time.
1. After the load balancer moves a region and before compaction (which generates HFiles on the new region server) of that region, HDFS blocks can be remote.
2. When a new machine is added or removed, HBase's region assignment policy is different from HDFS's block reassignment policy.
3. Even if there is not much hbase activity, HDFS can rebalance HFile blocks as other non-hbase applications push data to HDFS.
Lots has been or will be done in the load balancer, as summarized by Ted. 
I am curious whether the HFile HDFS block locality should be used as another factor here. I have done some experiments on how HDFS block locality can impact map reduce latency. First we need to define a metric to measure HFile data locality. Metric definition: for a given table, region server, or region, we can define the following; the higher the value, the more local the HFiles are from the region server's point of view.
HFile locality index = ( Total number of HDFS blocks that can be retrieved locally by the region server ) / ( Total number of HDFS blocks for all HFiles )
Test results: this shows how HFile locality can impact latency. It is based on a table with 1M rows, 36KB per row; regions are distributed in balance. The map job is RowCounter.
HFile Locality Index    Map job latency (sec)
28%                     157
36%                     150
47%                     142
61%                     133
73%                     122
89%                     103
99%                     95
So the first suggestion is to expose the HFile locality index as a new region server metric. It would be ideal if we could somehow measure the HFile locality index at a per map job level. Regarding if/when we should include it as another factor for the load balancer, that will be a different work item. It is unclear how the load balancer can take various factors into account to come up with the best load balancing strategy. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
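The metric itself is simple division over block counts; a sketch computing it from hypothetical per-region-server counts (the class and method names are illustrative, not from the patch):

```java
public class LocalityIndex {
  // HFile locality index = locally readable HDFS blocks / total HDFS blocks,
  // as defined in the issue description. Returns a value in [0, 1].
  static double localityIndex(long localBlocks, long totalBlocks) {
    if (totalBlocks == 0) {
      return 0.0;  // no HFiles yet; avoid dividing by zero
    }
    return (double) localBlocks / totalBlocks;
  }

  public static void main(String[] args) {
    // cf. the 89% row in the table above
    System.out.println(localityIndex(89, 100));
  }
}
```

In the actual patch the numerator and denominator come from aggregating HostAndWeight entries for the region server's own host name across its HFiles.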
[jira] [Commented] (HBASE-2077) NullPointerException with an open scanner that expired causing an immediate region server shutdown
[ https://issues.apache.org/jira/browse/HBASE-2077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081881#comment-13081881 ] Todd Lipcon commented on HBASE-2077: This is long since committed, but just a request: In the future could we open separate JIRAs rather than doing a part 2 when the commits are more than a day apart? It's very difficult to figure out what went on in the history of this JIRA, since it was committed for 0.20 in Dec '09, briefly amended in Feb '10, amendation partially reverted the next day, and then another change in Jun '11 for 0.90.4 to solve an entirely different bug than the description indicates. This makes it very difficult to support past branches or maintain distributions, since it appears this was fixed long ago but in fact 0.90.3 lacks a major part of the JIRA. NullPointerException with an open scanner that expired causing an immediate region server shutdown -- Key: HBASE-2077 URL: https://issues.apache.org/jira/browse/HBASE-2077 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.20.2, 0.20.3 Environment: Hadoop 0.20.0, Mac OS X, Java 6 Reporter: Sam Pullara Assignee: Sam Pullara Priority: Critical Fix For: 0.90.4 Attachments: 2077-suggestion.txt, 2077-v4.txt, HBASE-2077-3.patch, HBASE-2077-redux.patch, [Bug_HBASE-2077]_Fixes_a_very_rare_race_condition_between_lease_expiration_and_renewal.patch Original Estimate: 1h Remaining Estimate: 1h 2009-12-29 18:05:55,432 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -4250070597157694417 lease expired 2009-12-29 18:05:55,443 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: java.lang.NullPointerException at org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1310) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:136) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:127) at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:117) at java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641) at java.util.PriorityQueue.siftDown(PriorityQueue.java:612) at java.util.PriorityQueue.poll(PriorityQueue.java:523) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:113) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.nextInternal(HRegion.java:1776) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1719) at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1944) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:648) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915) 2009-12-29 18:05:55,446 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 55260, call next(-4250070597157694417, 1) from 192.168.1.90:54011: error: java.io.IOException: java.lang.NullPointerException java.io.IOException: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:869) at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:859) at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1965) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:648) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915) Caused by: java.lang.NullPointerException at 
org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1310) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:136) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:127) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:117) at java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641) at
[jira] [Updated] (HBASE-4168) A client continues to try and connect to a powered down regionserver
[ https://issues.apache.org/jira/browse/HBASE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anirudh Todi updated HBASE-4168: Attachment: HBASE-4168(5).patch Updated - HBASE-4168(5).patch A client continues to try and connect to a powered down regionserver Key: HBASE-4168 URL: https://issues.apache.org/jira/browse/HBASE-4168 Project: HBase Issue Type: Bug Reporter: Anirudh Todi Assignee: Anirudh Todi Priority: Critical Attachments: HBASE-4168(2).patch, HBASE-4168(3).patch, HBASE-4168(4).patch, HBASE-4168(5).patch, HBASE-4168-revised.patch, HBASE-4168.patch, hbase-hadoop-master-msgstore232.snc4.facebook.com.log Experiment-1 Started a dev cluster - META is on the same regionserver as my key-value. I kill the regionserver process but do not power down the machine. The META is able to migrate to a new regionserver and the regions are also able to reopen elsewhere. The client is able to talk to the META and find the new kv location and get it. Experiment-2 Started a dev cluster - META is on a different regionserver than my key-value. I kill the regionserver process but do not power down the machine. The META remains where it is and the regions are also able to reopen elsewhere. The client is able to talk to the META and find the new kv location and get it. Experiment-3 Started a dev cluster - META is on a different regionserver than my key-value. I power down the machine hosting this regionserver. The META remains where it is and the regions are also able to reopen elsewhere. The client is able to talk to the META and find the new kv location and get it. Experiment-4 (This is the problematic one) Started a dev cluster - META is on the same regionserver as my key-value. I power down the machine hosting this regionserver. 
The META is able to migrate to a new regionserver - however - it takes a really long time (~30 minutes). The regions on that regionserver do NOT reopen (I waited for 1 hour). The client is able to find the new location of the META; however, the META keeps redirecting the client to the powered down regionserver as the location of the key-value it is trying to get. Thus the client's get is unsuccessful. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-2077) NullPointerException with an open scanner that expired causing an immediate region server shutdown
[ https://issues.apache.org/jira/browse/HBASE-2077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081898#comment-13081898 ] stack commented on HBASE-2077: -- Sorry Todd. Will be better going forward. NullPointerException with an open scanner that expired causing an immediate region server shutdown -- Key: HBASE-2077 URL: https://issues.apache.org/jira/browse/HBASE-2077 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.20.2, 0.20.3 Environment: Hadoop 0.20.0, Mac OS X, Java 6 Reporter: Sam Pullara Assignee: Sam Pullara Priority: Critical Fix For: 0.90.4 Attachments: 2077-suggestion.txt, 2077-v4.txt, HBASE-2077-3.patch, HBASE-2077-redux.patch, [Bug_HBASE-2077]_Fixes_a_very_rare_race_condition_between_lease_expiration_and_renewal.patch Original Estimate: 1h Remaining Estimate: 1h 2009-12-29 18:05:55,432 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -4250070597157694417 lease expired 2009-12-29 18:05:55,443 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: java.lang.NullPointerException at org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1310) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:136) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:127) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:117) at java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641) at java.util.PriorityQueue.siftDown(PriorityQueue.java:612) at java.util.PriorityQueue.poll(PriorityQueue.java:523) at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:113) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.nextInternal(HRegion.java:1776) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1719) at 
org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1944) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:648) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915) 2009-12-29 18:05:55,446 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 55260, call next(-4250070597157694417, 1) from 192.168.1.90:54011: error: java.io.IOException: java.lang.NullPointerException java.io.IOException: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:869) at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:859) at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1965) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:648) at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1310) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:136) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:127) at org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:117) at java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641) at java.util.PriorityQueue.siftDown(PriorityQueue.java:612) at java.util.PriorityQueue.poll(PriorityQueue.java:523) at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:113) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.nextInternal(HRegion.java:1776) at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1719) at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1944) ... 5 more 2009-12-29 18:05:55,447 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call next(-4250070597157694417, 1) from
[jira] [Commented] (HBASE-4027) Enable direct byte buffers LruBlockCache
[ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081902#comment-13081902 ] jirapos...@reviews.apache.org commented on HBASE-4027: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1214/ --- (Updated 2011-08-09 20:39:55.719673) Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, Jonathan Gray, and Li Pi. Changes --- Fixed compilation error. Corrected package location of cachetestutils. Summary --- Review request - I apparently can't edit tlipcon's earlier posting of my diff, so creating a new one. This addresses bug HBase-4027. https://issues.apache.org/jira/browse/HBase-4027 Diffs (updated) - CHANGES.txt e9c0478 conf/hbase-env.sh 2d55d27 src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java aa09b7d src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java 2d4002c src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java 097dc50 src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java 88aa652 src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java 886c31d src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabItemEvictionWatcher.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java 86652c0 src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java 94c8bb4 src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java PRE-CREATION src/main/java/org/apache/hadoop/hbase/util/FSMapRUtils.java e70b0d4 
src/test/java/org/apache/hadoop/hbase/io/hfile/HFileBlockCacheTestUtils.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/io/hfile/SingleSizeCacheTestUtils.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSingleSizeCache.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSlab.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSlabCache.java PRE-CREATION src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java 4387170 Diff: https://reviews.apache.org/r/1214/diff Testing --- Ran benchmarks against it in HBase standalone mode. Wrote test cases for all classes; multithreaded test cases exist for the cache. Thanks, Li Enable direct byte buffers LruBlockCache Key: HBASE-4027 URL: https://issues.apache.org/jira/browse/HBASE-4027 Project: HBase Issue Type: Improvement Reporter: Jason Rutherglen Assignee: Li Pi Priority: Minor Attachments: 4027-v5.diff, 4027v7.diff, HBase-4027 (1).pdf, HBase-4027.pdf, HBase4027v8.diff, HBase4027v9.diff, hbase-4027-v10.5.diff, hbase-4027-v10.diff, hbase-4027v6.diff, slabcachepatch.diff, slabcachepatchv2.diff, slabcachepatchv3.1.diff, slabcachepatchv3.2.diff, slabcachepatchv3.diff, slabcachepatchv4.5.diff, slabcachepatchv4.diff Java offers the creation of direct byte buffers which are allocated outside of the heap. They need to be manually free'd, which can be accomplished using an undocumented {{clean}} method. The feature will be optional. After implementing, we can benchmark for differences in speed and garbage collection observances. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
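The off-heap allocation this issue builds on is a one-liner; only freeing such a buffer early requires the JDK-internal cleaner the description alludes to. A minimal sketch of the allocation side:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
  public static void main(String[] args) {
    // Allocated outside the Java heap; its contents are not subject to
    // normal GC copying, which is the appeal for a large block cache.
    ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024);
    ByteBuffer onHeap = ByteBuffer.allocate(64 * 1024);
    System.out.println(direct.isDirect());   // true
    System.out.println(onHeap.isDirect());   // false
    System.out.println(direct.capacity());   // 65536
  }
}
```

The slab cache in the patch carves fixed-size slices out of a few big buffers like this one, sidestepping per-block allocation cost and GC pressure.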
[jira] [Updated] (HBASE-4027) Enable direct byte buffers LruBlockCache
[ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Li Pi updated HBASE-4027: - Attachment: hbase-4027v10.6.diff Moved HFileBlockCacheTestUtils/SingleSizeCacheTestUtils to the tests folder, so it builds under mvn. Enable direct byte buffers LruBlockCache Key: HBASE-4027 URL: https://issues.apache.org/jira/browse/HBASE-4027 Project: HBase Issue Type: Improvement Reporter: Jason Rutherglen Assignee: Li Pi Priority: Minor Attachments: 4027-v5.diff, 4027v7.diff, HBase-4027 (1).pdf, HBase-4027.pdf, HBase4027v8.diff, HBase4027v9.diff, hbase-4027-v10.5.diff, hbase-4027-v10.diff, hbase-4027v10.6.diff, hbase-4027v6.diff, slabcachepatch.diff, slabcachepatchv2.diff, slabcachepatchv3.1.diff, slabcachepatchv3.2.diff, slabcachepatchv3.diff, slabcachepatchv4.5.diff, slabcachepatchv4.diff Java offers the creation of direct byte buffers which are allocated outside of the heap. They need to be manually free'd, which can be accomplished using an undocumented {{clean}} method. The feature will be optional. After implementing, we can benchmark for differences in speed and garbage collection observances. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4178) Use of Random.nextLong() in HRegionServer.addScanner(...)
[ https://issues.apache.org/jira/browse/HBASE-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081908#comment-13081908 ] Lars Hofhansl commented on HBASE-4178: -- So in summary an AtomicLong that resets (naturally) when the region server restarts should work, but it is not clear that this is actually a worthwhile problem to fix. I am happy to do the trivial AtomicLong fix, or to just close the issue... Leaning towards the latter. Use of Random.nextLong() in HRegionServer.addScanner(...) - Key: HBASE-4178 URL: https://issues.apache.org/jira/browse/HBASE-4178 Project: HBase Issue Type: Bug Affects Versions: 0.90.3 Reporter: Lars Hofhansl Priority: Minor ScannerIds are currently assigned by getting a random long. While it would be a rare occurrence for two scanners to receive the same id on the same region server, the results would seem to be... bad. A client scanner would get results from a different server scanner, and maybe only from some of the region servers. A safer approach would be using an AtomicLong. We do not have to worry about running out of numbers: if we got 100,000 scanners per second it'd take 2.9m years to reach 2^63. Then again the same reasoning would imply that such collisions would happen too rarely to be of concern (assuming a good random number generator). So maybe this is a non-issue. AtomicLong would also imply a minor performance hit on multi core machines, as it would force a memory barrier. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
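The trivial fix under discussion, an AtomicLong that naturally resets on region server restart, looks like the sketch below; the class and method names are illustrative, and the real field would live in HRegionServer.

```java
import java.util.concurrent.atomic.AtomicLong;

public class ScannerIds {
  // Monotonically increasing ids: no collision until 2^63 increments,
  // unlike Random.nextLong(), which can (very rarely) repeat a value.
  private static final AtomicLong nextScannerId = new AtomicLong(0);

  static long addScanner() {
    return nextScannerId.incrementAndGet();
  }

  public static void main(String[] args) {
    long a = addScanner();
    long b = addScanner();
    System.out.println(a + " " + b);
  }
}
```

The memory-barrier cost mentioned in the description is the trade-off: incrementAndGet is an atomic read-modify-write, slightly more expensive under contention than an uncontended Random call, but it makes duplicate ids structurally impossible within one server lifetime.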
[jira] [Commented] (HBASE-4178) Use of Random.nextLong() in HRegionServer.addScanner(...)
[ https://issues.apache.org/jira/browse/HBASE-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081935#comment-13081935 ] stack commented on HBASE-4178: -- We have enough open ones already.
[jira] [Commented] (HBASE-4178) Use of Random.nextLong() in HRegionServer.addScanner(...)
[ https://issues.apache.org/jira/browse/HBASE-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081934#comment-13081934 ] stack commented on HBASE-4178: -- Close is fine Lars.
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081955#comment-13081955 ] jirapos...@reviews.apache.org commented on HBASE-4176: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1326/#review1365 --- /src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java https://reviews.apache.org/r/1326/#comment3084 Preconditions instead? /src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java https://reviews.apache.org/r/1326/#comment3090 Preconditions? /src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java https://reviews.apache.org/r/1326/#comment3101 Indentation? Not your change, but should be fixed anyways. - Li On 2011-08-09 18:59:28, Anirudh Todi wrote: bq. bq. --- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/1326/ bq. --- bq. bq. (Updated 2011-08-09 18:59:28) bq. bq. bq. Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray. bq. bq. bq. Summary bq. --- bq. bq. https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API bq. bq. Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API making it messy and long. bq. With this patch, I am trying to add support for all the filters in a clean way. bq. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language bq. bq. This patch is trying to extend and further the progress made by the patches in HBASE-1744 bq. bq. There is document attached to the HBASE-4176 JIRA that describes this patch in further detail bq. bq. bq. This addresses bug HBASE-4176. bq. https://issues.apache.org/jira/browse/HBASE-4176 bq. bq. bq. Diffs bq. - bq. 
bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155450 bq. /src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155450 bq. /src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java PRE-CREATION bq./src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java 1155450 bq. /src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java 1155450 bq. 
/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java 1155450 bq./src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java 1155450 bq./src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift 1155450 bq./src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java PRE-CREATION bq. bq. Diff: https://reviews.apache.org/r/1326/diff bq. bq. bq. Testing bq. --- bq. bq. patch includes one test: TestParseFilter.java bq. bq. bq. Thanks, bq. bq. Anirudh bq. bq. Exposing HBase Filters to the Thrift API
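For context on the feature under review: the Thrift client sends the filter as a plain string such as PrefixFilter ('row'), and the server parses it into a Filter object. The toy parser below only illustrates the name/argument split for a single term; the class and method names are invented for this sketch, and the real ParseFilter grammar from the patch additionally handles operators, nesting, and escaping.

```java
/**
 * Minimal sketch of parsing one filter-language term of the form
 * "Name ('arg')" into its filter name and argument.
 */
public class FilterStringSketch {
  /** Returns {filterName, argument} for a single "Name ('arg')" term. */
  public static String[] parseSimpleTerm(String term) {
    int open = term.indexOf('(');
    int close = term.lastIndexOf(')');
    if (open < 0 || close < open) {
      throw new IllegalArgumentException("malformed filter term: " + term);
    }
    String name = term.substring(0, open).trim();
    String arg = term.substring(open + 1, close).trim();
    // Strip the surrounding single quotes used by the filter language.
    if (arg.length() >= 2 && arg.charAt(0) == '\'' && arg.endsWith("'")) {
      arg = arg.substring(1, arg.length() - 1);
    }
    return new String[] { name, arg };
  }
}
```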
[jira] [Commented] (HBASE-4027) Enable direct byte buffers LruBlockCache
[ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13081962#comment-13081962 ] jirapos...@reviews.apache.org commented on HBASE-4027: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1214/#review1363 --- CHANGES.txt https://reviews.apache.org/r/1214/#comment3063 your diff appears to revert this change. perhaps you need to rebase on trunk before you take diff against it. src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java https://reviews.apache.org/r/1214/#comment3064 style: /** * ... */ public class CacheStates { (comment formatting and space before '{') src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java https://reviews.apache.org/r/1214/#comment3068 hrm, is this constructor ever meant to be used? If the off-heap cache isn't configured, then it should just instantiate LruBlockCache directly, no? src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java https://reviews.apache.org/r/1214/#comment3069 does it ever make sense to have offHeapSize onHeapSize? Perhaps we should have a Preconditions check here? src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java https://reviews.apache.org/r/1214/#comment3065 hyphenate 'on-heap' and 'off-heap' for clarity src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java https://reviews.apache.org/r/1214/#comment3066 missing space - bytes ... src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java https://reviews.apache.org/r/1214/#comment3067 same as above src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java https://reviews.apache.org/r/1214/#comment3070 we should add in the heap size used by the accounting and hashmaps in the off-heap cache as well. 
src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java https://reviews.apache.org/r/1214/#comment3071 vertically collapse this - one line per param src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java https://reviews.apache.org/r/1214/#comment3072 when you check up front here, you end up doing two lookups in backingmap. Since this is just a safety check, you could instead check the return value of put() below. Something like: ByteBuffer storedBlock = ...allloc ... fill it in... ByteBuffer alreadyCached = backingMap.put(blockName, storedBlock); if (alreadyCached != null) { // we didn't insert the new one, so free it and throw an exception backingStore.free(storedBlock); throw new RuntimeException(already cached x); } make sense? src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java https://reviews.apache.org/r/1214/#comment3073 I think there's a bug here if you have multiple users hammering the same contentBlock -- two people can get to rewind() at the same time. You probably need synchronized(contentBlock) around these two lines. See if you can add a unit test which puts just one block in the cache and starts several threads which hammer it - I bet you eventually one of the blocks comes back returned as all 0x src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java https://reviews.apache.org/r/1214/#comment3074 this.size() is in units of bytes, not blocks src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java https://reviews.apache.org/r/1214/#comment3075 maybe rename to getOccupiedSize? 
src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java https://reviews.apache.org/r/1214/#comment3076 wrong Log class - should use org.apache.commons.logging.Log src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java https://reviews.apache.org/r/1214/#comment3077 remove extra whitespace src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java https://reviews.apache.org/r/1214/#comment3078 hm, we have 4 different terms for these: buffers, items, chunks, and blocks. Can we have a terminology that's used consistently throughout? src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java https://reviews.apache.org/r/1214/#comment3079 LOG.warn("Shutdown failed!", e); is probably what you want. Also improve the text of this error message -- e.g. "Unable to deallocate direct memory during shutdown." src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java https://reviews.apache.org/r/1214/#comment3080 getBlock*s*Remaining, right? src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java https://reviews.apache.org/r/1214/#comment3081 incomplete comment here
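One of the review comments above suggests replacing the up-front lookup plus put with a single atomic map operation. A self-contained sketch of that pattern (invented names; putIfAbsent is used here as a slightly safer variant of the put()-return-value check the reviewer describes, since the duplicate buffer never enters the map):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/**
 * Sketch of the atomic insert-or-detect-duplicate pattern: one map call
 * instead of containsKey() followed by put(), so two threads caching the
 * same block cannot both think they inserted first.
 */
public class CacheInsertSketch {
  private final ConcurrentMap<String, ByteBuffer> backingMap =
      new ConcurrentHashMap<String, ByteBuffer>();

  public void cacheBlock(String blockName, ByteBuffer storedBlock) {
    ByteBuffer alreadyCached = backingMap.putIfAbsent(blockName, storedBlock);
    if (alreadyCached != null) {
      // The new buffer never made it into the map, so it would be safe to
      // return it to the slab allocator here before reporting the bug.
      throw new RuntimeException("already cached: " + blockName);
    }
  }
}
```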
[jira] [Updated] (HBASE-4155) the problem in hbase thrift client when scan/get rows by timestamp
[ https://issues.apache.org/jira/browse/HBASE-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-4155: - Status: Patch Available (was: Open) the problem in hbase thrift client when scan/get rows by timestamp -- Key: HBASE-4155 URL: https://issues.apache.org/jira/browse/HBASE-4155 Project: HBase Issue Type: Bug Components: thrift Affects Versions: 0.90.0 Reporter: zezhou Attachments: 4155.txt, patch.txt, patch.txt.svn Original Estimate: 1m Remaining Estimate: 1m I want to scan rows by a specified timestamp. I use the following hbase shell command: scan 'testcrawl', {TIMESTAMP => 1312268202071} ROW COLUMN+CELL put1.com column=crawl:data, timestamp=1312268202071, value=htmlput1/html put1.com column=crawl:type, timestamp=1312268202071, value=html put1.com column=links:outlinks, timestamp=1312268202071, value=www.163.com;www.sina.com As expected, I get the rows whose timestamp is 1312268202071. But when I use the thrift client to do the same thing, the returned data consists of rows whose timestamps are before the specified timestamp, not the same as the hbase shell. The following are the timestamps of the returned data: 131217917 1312268202059 Looking at the source in hbase/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java, it uses the following code to set the time parameter: scan.setTimeRange(Long.MIN_VALUE, timestamp); This causes the thrift client to return rows from before the specified timestamp, not the rows at the specified timestamp. But the hbase client and avro client use the following code to set the time parameter: scan.setTimeStamp(timestamp); which returns the rows at the specified timestamp. Is this a feature or a bug in the thrift client? If it is a feature, which method in the thrift client can get the rows at a specified timestamp? -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
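The semantic difference the reporter hit can be modeled directly: an HBase time range is half-open, [minStamp, maxStamp), and Scan.setTimeStamp(ts) is equivalent to setTimeRange(ts, ts + 1). The sketch below (invented class name) shows why setTimeRange(Long.MIN_VALUE, timestamp) matches only cells strictly before the requested timestamp:

```java
/**
 * Pure-Java model of HBase's TimeRange matching, illustrating why the
 * Thrift server's setTimeRange(Long.MIN_VALUE, ts) excludes ts itself.
 */
public class TimeRangeSketch {
  /** Half-open interval [min, max), as in HBase's TimeRange. */
  public static boolean inRange(long min, long max, long ts) {
    return ts >= min && ts < max;
  }

  /** What the Thrift server currently does. */
  public static boolean thriftServerMatches(long requested, long cellTs) {
    return inRange(Long.MIN_VALUE, requested, cellTs);
  }

  /** What the shell's TIMESTAMP option (setTimeStamp) does. */
  public static boolean shellMatches(long requested, long cellTs) {
    return inRange(requested, requested + 1, cellTs);
  }
}
```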
[jira] [Resolved] (HBASE-4156) ZKConfig defaults clientPort improperly
[ https://issues.apache.org/jira/browse/HBASE-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-4156. -- Resolution: Fixed Fix Version/s: 0.92.0 Hadoop Flags: [Reviewed] Committed to TRUNK. Took me a while to understand but makes sense. Thanks for the patch Michajlo ZKConfig defaults clientPort improperly --- Key: HBASE-4156 URL: https://issues.apache.org/jira/browse/HBASE-4156 Project: HBase Issue Type: Bug Components: zookeeper Reporter: Michajlo Matijkiw Priority: Trivial Fix For: 0.92.0 Attachments: clientPort-default-fix.patch ZKConfig#makeZKProps() should use clientPort as the client port key in its output when defaulting instead of hbase.zookeeper.property.clientPort. This method strips the hbase.zookeeper.property. prefix from all of the properties it returns, so the client port key should not have it. The result is that the default is not properly picked up. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
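The defaulting bug can be pictured with a simplified model of what makeZKProps() does. This is not the actual ZKConfig code; it only demonstrates why the default must live under the stripped key:

```java
import java.util.Properties;

/**
 * ZKConfig#makeZKProps() strips the "hbase.zookeeper.property." prefix
 * from every key it emits, so a default must be stored under the
 * stripped key ("clientPort"), not the full HBase key.
 */
public class ZkPropsSketch {
  static final String PREFIX = "hbase.zookeeper.property.";

  public static Properties makeZkProps(Properties hbaseConf, String defaultClientPort) {
    Properties zkProps = new Properties();
    for (String key : hbaseConf.stringPropertyNames()) {
      if (key.startsWith(PREFIX)) {
        zkProps.setProperty(key.substring(PREFIX.length()), hbaseConf.getProperty(key));
      }
    }
    // Correct: default under the stripped key. The bug was defaulting under
    // PREFIX + "clientPort", a key nothing downstream ever reads.
    if (zkProps.getProperty("clientPort") == null) {
      zkProps.setProperty("clientPort", defaultClientPort);
    }
    return zkProps;
  }
}
```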
[jira] [Updated] (HBASE-4178) Use of Random.nextLong() in HRegionServer.addScanner(...)
[ https://issues.apache.org/jira/browse/HBASE-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-4178: - Assignee: Lars Hofhansl
[jira] [Resolved] (HBASE-4178) Use of Random.nextLong() in HRegionServer.addScanner(...)
[ https://issues.apache.org/jira/browse/HBASE-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl resolved HBASE-4178. -- Resolution: Won't Fix
[jira] [Commented] (HBASE-3287) Add option to cache blocks on hfile write and evict blocks on hfile close
[ https://issues.apache.org/jira/browse/HBASE-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13082080#comment-13082080 ] Yi Liang commented on HBASE-3287: - @stack You mean there's something wrong with this patch? So what should we do if we want to enable this option? Use the code from trunk? IMO a correct patch against 0.90 will be very useful until 0.92.0 releases as this feature is so critical for online service. Add option to cache blocks on hfile write and evict blocks on hfile close - Key: HBASE-3287 URL: https://issues.apache.org/jira/browse/HBASE-3287 Project: HBase Issue Type: New Feature Components: io, regionserver Affects Versions: 0.90.0 Reporter: Jonathan Gray Assignee: Jonathan Gray Fix For: 0.92.0 Attachments: HBASE-3287-FINAL-trunk.patch This issue is about adding configuration options to add/remove from the block cache when creating/closing files. For use cases with lots of flushing and compacting, this might be desirable to prevent cache misses and maximize the effective utilization of total block cache capacity. The first option, {{hbase.rs.cacheblocksonwrite}}, will make it so we pre-cache blocks as we are writing out new files. The second option, {{hbase.rs.evictblocksonclose}}, will make it so we evict blocks when files are closed. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4168) A client continues to try and connect to a powered down regionserver
[ https://issues.apache.org/jira/browse/HBASE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13082081#comment-13082081 ] Ted Yu commented on HBASE-4168: --- +1 on patch version 6. Indentation for the following shouldn't be changed: {code} + cause.getMessage().contains("Connection reset")) { {code} A client continues to try and connect to a powered down regionserver Key: HBASE-4168 URL: https://issues.apache.org/jira/browse/HBASE-4168 Project: HBase Issue Type: Bug Reporter: Anirudh Todi Assignee: Anirudh Todi Priority: Critical Attachments: HBASE-4168(2).patch, HBASE-4168(3).patch, HBASE-4168(4).patch, HBASE-4168(5).patch, HBASE-4168(6).patch, HBASE-4168-revised.patch, HBASE-4168.patch, hbase-hadoop-master-msgstore232.snc4.facebook.com.log Experiment-1 Started a dev cluster - META is on the same regionserver as my key-value. I kill the regionserver process but do not power down the machine. The META is able to migrate to a new regionserver and the regions are also able to reopen elsewhere. The client is able to talk to the META and find the new kv location and get it. Experiment-2 Started a dev cluster - META is on a different regionserver than my key-value. I kill the regionserver process but do not power down the machine. The META remains where it is and the regions are also able to reopen elsewhere. The client is able to talk to the META and find the new kv location and get it. Experiment-3 Started a dev cluster - META is on a different regionserver than my key-value. I power down the machine hosting this regionserver. The META remains where it is and the regions are also able to reopen elsewhere. The client is able to talk to the META and find the new kv location and get it. Experiment-4 (This is the problematic one) Started a dev cluster - META is on the same regionserver as my key-value. I power down the machine hosting this regionserver.
The META is able to migrate to a new regionserver - however - it takes a really long time (~30 minutes). The regions on that regionserver DO NOT reopen (I waited for 1 hour). The client is able to find the new location of the META; however, the META keeps redirecting the client to the powered-down regionserver as the location of the key-value it is trying to get. Thus the client's get is unsuccessful. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4027) Enable direct byte buffers LruBlockCache
[ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13082087#comment-13082087 ] jirapos...@reviews.apache.org commented on HBASE-4027: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1214/#review1371 --- src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java https://reviews.apache.org/r/1214/#comment3155 I think this was intended for use by hbase-env.sh - Ted On 2011-08-09 20:39:55, Li Pi wrote: bq. bq. --- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/1214/ bq. --- bq. bq. (Updated 2011-08-09 20:39:55) bq. bq. bq. Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, Jonathan Gray, and Li Pi. bq. bq. bq. Summary bq. --- bq. bq. Review request - I apparently can't edit tlipcon's earlier posting of my diff, so creating a new one. bq. bq. bq. This addresses bug HBase-4027. bq. https://issues.apache.org/jira/browse/HBase-4027 bq. bq. bq. Diffs bq. - bq. bq.CHANGES.txt e9c0478 bq.conf/hbase-env.sh 2d55d27 bq.src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java aa09b7d bq.src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java 2d4002c bq.src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java 097dc50 bq.src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java 88aa652 bq.src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java 886c31d bq.src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java PRE-CREATION bq. 
src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabItemEvictionWatcher.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java 86652c0 bq.src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java 94c8bb4 bq.src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/util/FSMapRUtils.java e70b0d4 bq. src/test/java/org/apache/hadoop/hbase/io/hfile/HFileBlockCacheTestUtils.java PRE-CREATION bq. src/test/java/org/apache/hadoop/hbase/io/hfile/SingleSizeCacheTestUtils.java PRE-CREATION bq. src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSingleSizeCache.java PRE-CREATION bq.src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSlab.java PRE-CREATION bq.src/test/java/org/apache/hadoop/hbase/io/hfile/slab/TestSlabCache.java PRE-CREATION bq.src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java 4387170 bq. bq. Diff: https://reviews.apache.org/r/1214/diff bq. bq. bq. Testing bq. --- bq. bq. Ran benchmarks against it in HBase standalone mode. Wrote test cases for all classes, multithreaded test cases exist for the cache. bq. bq. bq. Thanks, bq. bq. Li bq. bq. Enable direct byte buffers LruBlockCache
[jira] [Updated] (HBASE-4168) A client continues to try and connect to a powered down regionserver
[ https://issues.apache.org/jira/browse/HBASE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anirudh Todi updated HBASE-4168: Attachment: HBASE-4168(7).patch Updated. However, Ted - it seems that that line's original indentation is off.
[jira] [Created] (HBASE-4184) CatalogJanitor doesn't work properly when fs.default.name isn't set in config file.
CatalogJanitor doesn't work properly when fs.default.name isn't set in config file. - Key: HBASE-4184 URL: https://issues.apache.org/jira/browse/HBASE-4184 Project: HBase Issue Type: Bug Components: master Reporter: Ming Ma Assignee: Ming Ma In our system, hbase.rootdir is set to an hdfs path; hbase can figure out the FileSystem, set fs.default.name accordingly on the Configuration object, and pass it around, including to the RS. That is handled in HMaster.java and MasterFileSystem.java. CatalogJanitor uses the deprecated HRegionInfo.getTableDesc. That method creates a default configuration and gets a FileSystem from it, which will be RawLocalFileSystem. It throws the following exception. java.lang.IllegalArgumentException: Wrong FS: hdfs://sea-esxi-0:54310/tmp/hbase/ testtb/.tableinfo, expected: file:/// at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:454) at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:67) at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:307) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1085) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1110) at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:494) at org.apache.hadoop.hbase.util.FSUtils.getTableInfoModtime(FSUtils.java:833) at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:127) at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:99) at org.apache.hadoop.hbase.HRegionInfo.getTableDesc(HRegionInfo.java:560) at org.apache.hadoop.hbase.master.CatalogJanitor$1.compare(CatalogJanitor.java:118) at org.apache.hadoop.hbase.master.CatalogJanitor$1.compare(CatalogJanitor.java:110) at java.util.TreeMap.put(TreeMap.java:530) at org.apache.hadoop.hbase.master.CatalogJanitor$2.visit(CatalogJanitor.java:138) -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-4184) CatalogJanitor doesn't work properly when fs.default.name isn't set in config file.
[ https://issues.apache.org/jira/browse/HBASE-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HBASE-4184: --- Attachment: HBASE-4184-trunk.patch Change CatalogJanitor.java to use HRegionInfo.getTableName() instead. Should we actually remove HRegionInfo.getTableDesc()? I know it isn't backward compatible. But the function doesn't seem to provide much value.
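The shape of the fix: the stack trace shows getTableDesc() being called from inside a comparator, which re-reads .tableinfo through a freshly built (and, without fs.default.name, wrong) FileSystem on every comparison. Comparing by table name needs no filesystem at all. A rough sketch with a stand-in type (not the actual CatalogJanitor code):

```java
import java.util.Comparator;

/**
 * Order catalog rows by metadata the region info already holds in
 * memory, instead of loading table descriptors from the filesystem.
 * RegionInfoStub stands in for HRegionInfo.
 */
public class CatalogCompareSketch {
  static class RegionInfoStub {
    final String tableName;
    final String startKey;
    RegionInfoStub(String tableName, String startKey) {
      this.tableName = tableName;
      this.startKey = startKey;
    }
  }

  // No FileSystem access: the table name is in-memory metadata.
  static final Comparator<RegionInfoStub> BY_TABLE_THEN_KEY =
      new Comparator<RegionInfoStub>() {
        public int compare(RegionInfoStub a, RegionInfoStub b) {
          int c = a.tableName.compareTo(b.tableName);
          return c != 0 ? c : a.startKey.compareTo(b.startKey);
        }
      };
}
```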
[jira] [Commented] (HBASE-3331) Kill -STOP of RS hosting META does not recover
[ https://issues.apache.org/jira/browse/HBASE-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13082090#comment-13082090 ] Ming Ma commented on HBASE-3331: I tested it a couple of times on trunk. Couldn't repro it. The system recovers within a couple of minutes. Kill -STOP of RS hosting META does not recover -- Key: HBASE-3331 URL: https://issues.apache.org/jira/browse/HBASE-3331 Project: HBase Issue Type: Bug Affects Versions: 0.90.0 Reporter: Todd Lipcon Priority: Critical Fix For: 0.92.0 Attachments: timeouts.log.txt If you find the server hosting META and kill -STOP its region server, it will eventually lose its ZK session and the master will split its logs and try to reassign. However, at some point along here it tries to access the old META and gets SocketTimeoutExceptions, which cause it to keep retrying forever. Once I kill -9ed the stopped server, things came back to life. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-4181) HConnectionManager can't find cached HRegionInterface which makes client very slow
[ https://issues.apache.org/jira/browse/HBASE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-4181: -- Summary: HConnectionManager can't find cached HRegionInterface which makes client very slow (was: HConnectionManager can't find cached HRegionInterface makes client very slow) HConnectionManager can't find cached HRegionInterface which makes client very slow -- Key: HBASE-4181 URL: https://issues.apache.org/jira/browse/HBASE-4181 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.92.0 Reporter: Liu Jia Assignee: Liu Jia Priority: Critical Labels: HConnectionManager Fix For: 0.92.0 Attachments: HBASE-4181-trunk-v2.patch, HBASE-4181-trunk-v3.patch, HBASE-4181.patch, HConnectionManager.patch

In HRegionInterface getHRegionConnection(final String hostname, final int port, final InetSocketAddress isa, final boolean master) throws IOException:

  String rsName = isa != null ? isa.toString() : Addressing.createHostAndPortStr(hostname, port);

Here, if isa is null, Addressing creates an address string like node41:60010. We should use

  isa != null ? isa.toString() : new InetSocketAddress(hostname, port).toString()

instead of Addressing.createHostAndPortStr(hostname, port).

  server = this.servers.get(rsName);
  if (server == null) {
    // create a unique lock for this RS (if necessary)
    this.connectionLock.putIfAbsent(rsName, rsName);
    // get the RS lock
    synchronized (this.connectionLock.get(rsName)) {
      // do one more lookup in case we were stalled above
      server = this.servers.get(rsName);
      if (server == null) {
        try {
          if (clusterId.hasId()) {
            conf.set(HConstants.CLUSTER_ID, clusterId.getId());
          }
          // Only create isa when we need to.
          InetSocketAddress address = isa != null ? isa : new InetSocketAddress(hostname, port);
          // definitely a cache miss.
          // establish an RPC for this RS
          server = (HRegionInterface) HBaseRPC.waitForProxy(
              serverInterfaceClass, HRegionInterface.VERSION,
              address, this.conf, this.maxRPCAttempts,
              this.rpcTimeout, this.rpcTimeout);
          this.servers.put(address.toString(), server);

But here address.toString() produces an address like node41/10.61.21.171:60010, so this method can never find the cached connection, which makes client requests very slow because the block is synchronized.

        } catch (RemoteException e) {
          LOG.warn("RemoteException connecting to RS", e);
          // Throw what the RemoteException was carrying.
          throw RemoteExceptionHandler.decodeRemoteException(e);
        }
      }
    }
  }

-- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
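The HBASE-4181 report boils down to computing the cache key two different ways: the lookup used a plain host:port string while the insert used InetSocketAddress.toString(), which embeds the resolved IP. A minimal sketch of the fix (class and method names are hypothetical, not HBase's actual code): derive one canonical key and use it for both the get and the put.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the connection cache described above.
class ConnectionCache {
    private final ConcurrentHashMap<String, Object> servers = new ConcurrentHashMap<>();

    // Canonical "host:port" key, used for BOTH the get and the put.
    static String key(String hostname, int port) {
        return hostname + ":" + port;
    }

    Object getConnection(String hostname, int port) {
        String rsName = key(hostname, port);
        Object server = servers.get(rsName);
        if (server == null) {
            // Stand-in for the expensive HBaseRPC.waitForProxy(...) call.
            server = new Object();
            // The reported bug: the put keyed on address.toString(), which
            // includes the resolved IP ("node41/10.61.21.171:60010") and so
            // never matched the "node41:60010" lookup key computed above.
            servers.put(rsName, server); // fixed: same key as the lookup
        }
        return server;
    }
}
```

With a single key function, the second request for the same region server is a cache hit, so callers no longer serialize behind the synchronized proxy-creation path on every call.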
[jira] [Commented] (HBASE-4184) CatalogJanitor doesn't work properly when fs.default.name isn't set in config file.
[ https://issues.apache.org/jira/browse/HBASE-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13082092#comment-13082092 ] Ted Yu commented on HBASE-4184: --- +1 on patch. Andrew argued that HRegionInfo.getTableDesc() should be kept for one major release before being removed. CatalogJanitor doesn't work properly when fs.default.name isn't set in config file. - Key: HBASE-4184 URL: https://issues.apache.org/jira/browse/HBASE-4184 Project: HBase Issue Type: Bug Components: master Reporter: Ming Ma Assignee: Ming Ma Attachments: HBASE-4184-trunk.patch In our system, hbase.rootdir is set to an hdfs path, and HBase can figure out the FileSystem, set fs.default.name accordingly on the Configuration object, and pass it around, including to the RS. That is handled in HMaster.java and MasterFileSystem.java. CatalogJanitor uses the deprecated HRegionInfo.getTableDesc. The method creates a default configuration and gets the FileSystem from there. That will be RawLocalFileSystem. It throws the following exception.
java.lang.IllegalArgumentException: Wrong FS: hdfs://sea-esxi-0:54310/tmp/hbase/testtb/.tableinfo, expected: file:///
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:454)
    at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:67)
    at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:307)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1085)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1110)
    at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:494)
    at org.apache.hadoop.hbase.util.FSUtils.getTableInfoModtime(FSUtils.java:833)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:127)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:99)
    at org.apache.hadoop.hbase.HRegionInfo.getTableDesc(HRegionInfo.java:560)
    at org.apache.hadoop.hbase.master.CatalogJanitor$1.compare(CatalogJanitor.java:118)
    at org.apache.hadoop.hbase.master.CatalogJanitor$1.compare(CatalogJanitor.java:110)
    at java.util.TreeMap.put(TreeMap.java:530)
    at org.apache.hadoop.hbase.master.CatalogJanitor$2.visit(CatalogJanitor.java:138)
-- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4156) ZKConfig defaults clientPort improperly
[ https://issues.apache.org/jira/browse/HBASE-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13082098#comment-13082098 ] Hudson commented on HBASE-4156: --- Integrated in HBase-TRUNK #2103 (See [https://builds.apache.org/job/HBase-TRUNK/2103/]) HBASE-4156 ZKConfig defaults clientPort improperly HBASE-4156 ZKConfig defaults clientPort improperly; mistakenly overcommitted -- reverting HBASE-4156 ZKConfig defaults clientPort improperly stack : Files : * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java stack : Files : * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java * /hbase/trunk/src/docbkx/developer.xml * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/Filter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java * 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java * /hbase/trunk/src/docbkx/book.xml stack : Files : * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java * /hbase/trunk/src/docbkx/developer.xml * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/Filter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java * 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java *
[jira] [Resolved] (HBASE-3331) Kill -STOP of RS hosting META does not recover
[ https://issues.apache.org/jira/browse/HBASE-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon resolved HBASE-3331. Resolution: Cannot Reproduce Thanks for trying to reproduce, Ming. I'll resolve it as Cannot Reproduce. Kill -STOP of RS hosting META does not recover -- Key: HBASE-3331 URL: https://issues.apache.org/jira/browse/HBASE-3331 Project: HBase Issue Type: Bug Affects Versions: 0.90.0 Reporter: Todd Lipcon Priority: Critical Fix For: 0.92.0 Attachments: timeouts.log.txt If you find the server hosting META and kill -STOP its region server, it will eventually lose its ZK session and the master will split its logs and try to reassign. However, at some point along here it tries to access the old META, and gets SocketTimeoutExceptions, which cause it to keep retrying forever. Once I kill -9ed the stopped server, things came back to life. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4176) Exposing HBase Filters to the Thrift API
[ https://issues.apache.org/jira/browse/HBASE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13082113#comment-13082113 ] jirapos...@reviews.apache.org commented on HBASE-4176: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1326/ --- (Updated 2011-08-10 03:01:21.623318) Review request for hbase, Todd Lipcon, Ted Yu, Michael Stack, and Jonathan Gray.

Changes --- This update does two important things:

1. It enables the use of this Filter Language from the shell while keeping the existing functionality. Below is some example usage (I can add more documentation to the document I had put up):

hbase(main):003:0> scan 't1', {FILTER => KeyOnlyFilter()}
ROW       COLUMN+CELL
 realtime  column=conf:number, timestamp=1311055899185, value=
 realtime  column=conf:sameplesize, timestamp=1310962821624, value=
 realtime  column=conf:sameplesize, timestamp=1310962794886, value=
 realtime  column=conf:blacklist, timestamp=1310667339990, value=
 realtime  column=conf:daily_thresholds, timestamp=1310667461494, value=

hbase(main):002:0> scan 't1', {FILTER => org.apache.hadoop.hbase.filter.KeyOnlyFilter.new()}
ROW       COLUMN+CELL
 realtime  column=conf:number, timestamp=1311055899185, value=
 realtime  column=conf:sameplesize, timestamp=1310962821624, value=
 realtime  column=conf:sameplesize, timestamp=1310962794886, value=
 realtime  column=conf:blacklist, timestamp=1310667339990, value=
 realtime  column=conf:daily_thresholds, timestamp=1310667461494, value=

hbase(main):005:0> scan 't1', {FILTER => (FirstKeyOnlyFilter() AND ValueFilter(=, 'binary:ghi')) OR TimestampsFilter(1311109736514)}
ROW       COLUMN+CELL
 realtime  column=conf:sameplesize, timestamp=1310970483721, value=nonononono
 realtime  column=conf:sameplesize, timestamp=1310962794886, value=nonononono
 realtime  column=conf:blacklist, timestamp=1310667339990, value=hello
 row1      column=conf:number, timestamp=1311109736514, value=short_row

2.
It reverts the changes to CompareFilter's LESS, LESS_OR_EQUAL etc. Upon testing the actual results I got back (using the Filter Language from the shell) instead of just looking at whether I was parsing and constructing the correct Filter objects, it seems this change was incorrect b/c of the way the code is structured. For example: suppose I am comparing a qualifier 'abc' using a QualifierFilter with a compareOp of EQUAL and a comparator for 'abc'. QualifierFilter's filterKeyValue calls doCompare. The compareResult computed in doCompare evaluates to 0 b/c the two qualifiers are the same. It now enters the EQUAL case since the compareOp was EQUAL. Here it checks that the compareResult is NOT_EQUAL to 0 (instead of checking if it is EQUAL to 0). Hence doCompare returns false (since they are equal), and as a result in filterKeyValue we DON'T skip that key-value and instead INCLUDE it.

@Li - could you be more specific with what you mean when you say - preconditions instead?

Summary --- https://issues.apache.org/jira/browse/HBASE-4176: Exposing HBase Filters to the Thrift API. Currently, to use any of the filters, one has to explicitly add a scanner for the filter in the Thrift API, making it messy and long. With this patch, I am trying to add support for all the filters in a clean way. The user specifies a filter via a string. The string is parsed on the server to construct the filter. More information can be found in the attached document named Filter Language. This patch is trying to extend and further the progress made by the patches in HBASE-1744. There is a document attached to the HBASE-4176 JIRA that describes this patch in further detail. This addresses bug HBASE-4176.
https://issues.apache.org/jira/browse/HBASE-4176 Diffs (updated) - /src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/Filter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/FilterList.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java 1155563 /src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 1155563
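The seemingly inverted check in the CompareFilter discussion above makes sense once doCompare is read as answering "should this cell be filtered out?" rather than "do the values match?". A simplified stand-in for that convention (the names mirror CompareFilter, but this is a sketch, not HBase's actual implementation, and covers only the two ops the review comment discusses):

```java
// Simplified stand-in for CompareFilter's doCompare convention: the
// return value means "exclude this key-value", so for EQUAL we return
// true exactly when the compared values are NOT equal.
class CompareSketch {
    enum CompareOp { EQUAL, NOT_EQUAL }

    static boolean doCompare(CompareOp op, int compareResult) {
        switch (op) {
            case EQUAL:
                return compareResult != 0; // equal -> false -> cell is kept
            case NOT_EQUAL:
                return compareResult == 0; // equal -> true -> cell is skipped
            default:
                throw new IllegalArgumentException("unhandled op: " + op);
        }
    }
}
```

Reading the return value as "skip it" explains why the EQUAL branch tests compareResult != 0: for two identical qualifiers it returns false, so filterKeyValue includes the cell, matching the behavior observed from the shell.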