[jira] [Updated] (HBASE-8768) Improve bulk load performance by moving key value construction from map phase to reduce phase.
[ https://issues.apache.org/jira/browse/HBASE-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

rajeshbabu updated HBASE-8768:
------------------------------
    Status: Patch Available  (was: Open)

> Improve bulk load performance by moving key value construction from map phase
> to reduce phase.
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-8768
>                 URL: https://issues.apache.org/jira/browse/HBASE-8768
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce, Performance
>            Reporter: rajeshbabu
>            Assignee: rajeshbabu
>         Attachments: HBASE-8768_v2.patch, HBASE-8768_v3.patch, HBase_Bulkload_Performance_Improvement.pdf
>
> The ImportTSV bulk-loading approach uses the MapReduce framework. The existing mapper and reducer classes used by ImportTSV are TsvImporterMapper.java and PutSortReducer.java. The ImportTSV tool parses the tab-separated (by default) values from the input files, and the mapper class generates a Put object for each row using the key value pairs created from the parsed text. PutSortReducer then partitions the Put objects based on the regions and sorts them for each region.
> Overheads in the above approach:
> ================================
> 1) KeyValue construction for each parsed value in a line adds extra data such as the row key, column family, and qualifier, which increases the data to be shuffled to the reduce phase by around 5x.
> The data size to be shuffled can be calculated as:
> {code}
> Data to be shuffled = nl*nt*(rl+cfl+cql+vall+tsl+30)
> {code}
> If we move KeyValue construction to the reduce phase, the data size to be shuffled becomes the following, which is far smaller than the above:
> {code}
> Data to be shuffled = nl*nt*vall
> {code}
> nl   - number of lines in the raw file
> nt   - number of tabs, i.e. columns, including the row key
> rl   - row key length (differs per line)
> cfl  - column family length (differs per family)
> cql  - qualifier length
> tsl  - timestamp length
> vall - each parsed value's length
> 30   - bytes for the kv size, number of families, etc.
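The two formulas above can be tried with made-up sizes to see the scale of the saving. Everything in this sketch is illustrative (the class name and sample lengths are not from the issue; only the two formulas and the 30-byte overhead constant are):

```java
public class ShuffleEstimate {

    // Size shuffled when the mapper emits full KeyValues (current approach):
    // nl * nt * (rl + cfl + cql + vall + tsl + 30)
    static long withKeyValues(long nl, long nt, long rl, long cfl,
                              long cql, long vall, long tsl) {
        return nl * nt * (rl + cfl + cql + vall + tsl + 30);
    }

    // Size shuffled when the mapper only forwards the raw values
    // (proposed approach): nl * nt * vall
    static long rawTextOnly(long nl, long nt, long vall) {
        return nl * nt * vall;
    }

    public static void main(String[] args) {
        long nl = 1_000_000, nt = 10;                        // 1M lines, 10 columns
        long rl = 16, cfl = 2, cql = 8, vall = 20, tsl = 8;  // hypothetical lengths
        long before = withKeyValues(nl, nt, rl, cfl, cql, vall, tsl);
        long after  = rawTextOnly(nl, nt, vall);
        System.out.printf("before=%d after=%d ratio=%.1f%n",
                          before, after, (double) before / after);
    }
}
```

With these sample lengths the shuffle shrinks from 840 MB to 200 MB of payload, a ~4.2x reduction, in line with the "around 5x" estimate in the description.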
> 2) On the mapper side we create Put objects by adding all the KeyValues constructed for each line, and in the reducer we again collect the KeyValues from each Put and sort them. Instead, we can directly create and sort the KeyValues in the reducer.
> Solution:
> =========
> We can improve bulk load performance by moving the key value construction from the mapper to the reducer, so that the mapper just sends the raw text for each row to the reducer. The reducer then parses the records, and creates and sorts the key value pairs before writing them to HFiles.
> Conclusion:
> ===========
> The above suggestions improve map-phase performance by avoiding KeyValue construction, and reduce-phase performance by avoiding excess data being shuffled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8768) Improve bulk load performance by moving key value construction from map phase to reduce phase.
[ https://issues.apache.org/jira/browse/HBASE-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

rajeshbabu updated HBASE-8768:
------------------------------
    Attachment: HBASE-8768_v3.patch

Patch addressing Anoop's comments.
[jira] [Updated] (HBASE-8768) Improve bulk load performance by moving key value construction from map phase to reduce phase.
[ https://issues.apache.org/jira/browse/HBASE-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

rajeshbabu updated HBASE-8768:
------------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HBASE-8874) PutCombiner is skipping KeyValues while combining puts of same row during bulkload
[ https://issues.apache.org/jira/browse/HBASE-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

rajeshbabu updated HBASE-8874:
------------------------------
    Attachment: HBASE-8874_trunk_3.patch

Patch addressing chunhui's comments. Thanks Ted and chunhui for the review.

> PutCombiner is skipping KeyValues while combining puts of same row during
> bulkload
> --------------------------------------------------------------------------
>
>                 Key: HBASE-8874
>                 URL: https://issues.apache.org/jira/browse/HBASE-8874
>             Project: HBase
>          Issue Type: Bug
>          Components: mapreduce
>    Affects Versions: 0.95.0, 0.95.1
>            Reporter: rajeshbabu
>            Assignee: rajeshbabu
>            Priority: Critical
>             Fix For: 0.98.0, 0.95.2
>
>         Attachments: HBASE-8874_trunk_2.patch, HBASE-8874_trunk_3.patch, HBASE-8874_trunk.patch
>
> While combining puts of the same row in the map phase, we use the logic below in PutCombiner#reduce. The first time through the loop we add one Put object to the puts map. From then on we just override the key values of a family with the key values of the same family from the other put. So we mostly write a single Put object to the map output and the rest are skipped (data loss).
> {code}
> Map<byte[], Put> puts = new TreeMap<byte[], Put>(Bytes.BYTES_COMPARATOR);
> for (Put p : vals) {
>   cnt++;
>   if (!puts.containsKey(p.getRow())) {
>     puts.put(p.getRow(), p);
>   } else {
>     puts.get(p.getRow()).getFamilyMap().putAll(p.getFamilyMap());
>   }
> }
> {code}
> We need to change the logic to something like the following, because we are sure the row key of all the puts will be the same.
> {code}
> Put finalPut = null;
> Map<byte[], List<KeyValue>> familyMap = null;
> for (Put p : vals) {
>   cnt++;
>   if (finalPut == null) {
>     finalPut = p;
>     familyMap = finalPut.getFamilyMap();
>   } else {
>     for (Entry<byte[], List<KeyValue>> entry : p.getFamilyMap().entrySet()) {
>       List<KeyValue> list = familyMap.get(entry.getKey());
>       if (list == null) {
>         familyMap.put(entry.getKey(), entry.getValue());
>       } else {
>         list.addAll(entry.getValue());
>       }
>     }
>   }
> }
> context.write(row, finalPut);
> {code}
> We also need to implement the TODOs mentioned by Nick:
> {code}
> // TODO: would be better if we knew K row and Put rowkey were
> // identical. Then this whole Put buffering business goes away.
> // TODO: Could use HeapSize to create an upper bound on the memory size of
> // the puts map and flush some portion of the content while looping. This
> // flush could result in multiple Puts for a single rowkey. That is
> // acceptable because Combiner is run as an optimization and it's not
> // critical that all Puts are grouped perfectly.
> {code}
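The root cause is ordinary java.util.Map.putAll semantics: for a family key present in both puts, putAll replaces the whole KeyValue list instead of merging. A minimal standalone sketch with plain java.util types (strings stand in for KeyValues; no HBase classes are involved):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PutAllOverwrite {

    // Buggy combine: Map.putAll replaces the per-family list wholesale,
    // mirroring puts.get(row).getFamilyMap().putAll(p.getFamilyMap()).
    static Map<String, List<String>> buggyCombine(Map<String, List<String>> a,
                                                  Map<String, List<String>> b) {
        Map<String, List<String>> out = new TreeMap<>(a);
        out.putAll(b);   // family "cf1" now maps to b's list; a's entries are dropped
        return out;
    }

    // Fixed combine: append b's list when the family already exists,
    // mirroring the proposed patch above.
    static Map<String, List<String>> mergeCombine(Map<String, List<String>> a,
                                                  Map<String, List<String>> b) {
        Map<String, List<String>> out = new TreeMap<>();
        for (Map.Entry<String, List<String>> e : a.entrySet()) {
            out.put(e.getKey(), new ArrayList<>(e.getValue()));
        }
        for (Map.Entry<String, List<String>> e : b.entrySet()) {
            List<String> list = out.get(e.getKey());
            if (list == null) {
                out.put(e.getKey(), new ArrayList<>(e.getValue()));
            } else {
                list.addAll(e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> p1 = new TreeMap<>();
        p1.put("cf1", new ArrayList<>(Arrays.asList("kv1")));
        Map<String, List<String>> p2 = new TreeMap<>();
        p2.put("cf1", new ArrayList<>(Arrays.asList("kv2")));

        System.out.println(buggyCombine(p1, p2).get("cf1")); // [kv2]      -> kv1 lost
        System.out.println(mergeCombine(p1, p2).get("cf1")); // [kv1, kv2] -> merged
    }
}
```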
[jira] [Commented] (HBASE-7634) Replication handling of changes to peer clusters is inefficient
[ https://issues.apache.org/jira/browse/HBASE-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722190#comment-13722190 ]

Gabriel Reid commented on HBASE-7634:
-------------------------------------

Yes, I will rebase this in the course of the next couple of days -- sorry for the delay on this.

> Replication handling of changes to peer clusters is inefficient
> ---------------------------------------------------------------
>
>                 Key: HBASE-7634
>                 URL: https://issues.apache.org/jira/browse/HBASE-7634
>             Project: HBase
>          Issue Type: Bug
>          Components: Replication
>    Affects Versions: 0.95.2
>            Reporter: Gabriel Reid
>         Attachments: HBASE-7634.patch, HBASE-7634.v2.patch, HBASE-7634.v3.patch
>
> The current handling of changes to the region servers in a replication peer cluster is quite inefficient. The list of region servers being replicated to is only updated if a large number of issues are encountered while replicating.
> This can cause it to take quite a while to recognize that a number of the regionservers in a peer cluster are no longer available. A potentially bigger problem is that if a replication peer cluster is started with a small number of regionservers, and more region servers are added after replication has started, the additional region servers will never be used for replication (unless there are failures on the in-use regionservers).
> Part of the issue is that the retry code in ReplicationSource#shipEdits checks a randomly-chosen replication peer regionserver (in ReplicationSource#isSlaveDown) to see if it is up after a replication write has failed on a different randomly-chosen replication peer. If that peer is seen as not down, another randomly-chosen peer is used for writing.
> A second part of the issue is that changes to the list of region servers in a peer cluster are not detected at all, and are only picked up after a certain number of failures have occurred when trying to ship edits.
[jira] [Commented] (HBASE-8846) Revert the package name change for TableExistsException
[ https://issues.apache.org/jira/browse/HBASE-8846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722173#comment-13722173 ]

Hadoop QA commented on HBASE-8846:
----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12594630/8846-2.txt
  against trunk revision .

    {color:green}+1 @author{color}. The patch does not contain any @author tags.
    {color:green}+1 tests included{color}. The patch appears to include 178 new or modified tests.
    {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
    {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
    {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
    {color:red}-1 lineLengths{color}. The patch introduces lines longer than 100.
    {color:green}+1 site{color}. The mvn site goal succeeds with this patch.
    {color:red}-1 core tests{color}. The patch failed these unit tests:
      org.apache.hadoop.hbase.master.TestHMasterRPCException
      org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6506//console

This message is automatically generated.
> Revert the package name change for TableExistsException
> -------------------------------------------------------
>
>                 Key: HBASE-8846
>                 URL: https://issues.apache.org/jira/browse/HBASE-8846
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.95.0
>            Reporter: Devaraj Das
>            Assignee: Devaraj Das
>             Fix For: 0.95.2
>
>         Attachments: 8846-1.txt, 8846-2.txt
>
> I was going through the code changes that were needed to get an application that was running with hbase-0.92 to run with hbase-0.95. TableExistsException's package has changed - hence it needs a code change in the application. Offline discussion with some folks led us to believe that this change can probably be reverted.
[jira] [Commented] (HBASE-8663) a HBase Shell command to list the tables replicated (from or to) current cluster
[ https://issues.apache.org/jira/browse/HBASE-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722159#comment-13722159 ]

Hadoop QA commented on HBASE-8663:
----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12594612/HBASE-8663-trunk-v0.patch
  against trunk revision .

    {color:green}+1 @author{color}. The patch does not contain any @author tags.
    {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
    {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
    {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
    {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 2 warning messages.
    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
    {color:red}-1 lineLengths{color}. The patch introduces lines longer than 100.
    {color:green}+1 site{color}. The mvn site goal succeeds with this patch.
    {color:green}+1 core tests{color}. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6505//console

This message is automatically generated.
> a HBase Shell command to list the tables replicated (from or to) current
> cluster
> -------------------------------------------------------------------------
>
>                 Key: HBASE-8663
>                 URL: https://issues.apache.org/jira/browse/HBASE-8663
>             Project: HBase
>          Issue Type: New Feature
>          Components: Replication, shell
>         Environment: clusters setup as Master and Slave for replication of tables
>            Reporter: Demai Ni
>            Assignee: Demai Ni
>            Priority: Critical
>         Attachments: HBASE-8663.PATCH, HBASE-8663-trunk-v0.patch, HBASE-8663-v2.PATCH
>
> This jira is to provide an hbase shell command which can give users an overview of the tables/columnfamilies currently being replicated. The information will help system administrators with design and planning, and also help application programmers know which tables/columns to watch out for (for example, not modifying a replicated columnfamily on the slave cluster).
> Currently there is no easy way to tell which table(s)/columnfamily(ies) are replicated from or to a particular cluster.
> On the master cluster, an indirect method can be used by combining two steps: 1) $describe 'usertable' and 2) $list_peers to map the REPLICATION_SCOPE to the target (aka slave) cluster.
> On the slave cluster, there is no existing API/method to list all the tables replicated to it.
> Here is an example, and a prototype for the master cluster:
> {code: title=hbase shell command:list_replicated_tables |borderStyle=solid}
> hbase(main):001:0> list_replicated_tables
> TABLE      COLUMNFAMILY   TARGET_CLUSTER
> scores     course         hdtest017.svl.ibm.com:2181:/hbase
> t3_dn      cf1            hdtest017.svl.ibm.com:2181:/hbase
> usertable  family
[jira] [Commented] (HBASE-9032) Result.getBytes() returns null if backed by KeyValue array
[ https://issues.apache.org/jira/browse/HBASE-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722154#comment-13722154 ]

Lars Hofhansl commented on HBASE-9032:
--------------------------------------

Sorry, -0 from me.
# This ties the caller to the internal workings of the Result object.
# It cements this API even further for other users just starting with 0.94. This method should never have been public.
# As you point out, this API is going away in 0.95.
# There might be code out there relying on the current behavior (maybe for detecting whether a Result object was deserialized from RPC).

Now, I am not blocking it. It won't do much harm, I think. But I would prefer if somebody else commits it. [~stack]?

> Result.getBytes() returns null if backed by KeyValue array
> ----------------------------------------------------------
>
>                 Key: HBASE-9032
>                 URL: https://issues.apache.org/jira/browse/HBASE-9032
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>    Affects Versions: 0.94.9
>            Reporter: Aditya Kishore
>            Assignee: Aditya Kishore
>             Fix For: 0.94.11
>
>         Attachments: HBASE-9032.patch, HBASE-9032.patch, HBASE-9032.patch, HBASE-9032.patch
>
> This applies only to the 0.94 (and earlier) branch.
> If the Result object was constructed using either Result(KeyValue[]) or Result(List<KeyValue>), calling Result.getBytes() returns null instead of the serialized ImmutableBytesWritable object.
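The reported behavior and the general shape of the proposed fix can be modelled in plain Java. This is only a sketch; the class, fields, and the stand-in "serialization" below are illustrative and do not reflect HBase's actual 0.94 internals:

```java
import java.util.Arrays;
import java.util.List;

public class ResultModel {
    private byte[] bytes;             // populated only when deserialized from RPC
    private final List<String> kvs;   // stands in for the backing KeyValue[]

    ResultModel(List<String> kvs) { this.kvs = kvs; this.bytes = null; }
    ResultModel(byte[] bytes)     { this.bytes = bytes; this.kvs = null; }

    // Behavior described in the issue: the KeyValue-backed constructors
    // never set the cached bytes, so this returns null.
    byte[] getBytesBuggy() {
        return bytes;
    }

    // Fix in the spirit of the patch: build the serialized form on demand
    // when the object is backed by KeyValues.
    byte[] getBytesFixed() {
        if (bytes == null && kvs != null) {
            bytes = String.join(",", kvs).getBytes();  // stand-in serialization
        }
        return bytes;
    }

    public static void main(String[] args) {
        ResultModel r = new ResultModel(Arrays.asList("kv1", "kv2"));
        System.out.println(r.getBytesBuggy() == null);      // the reported bug
        System.out.println(new String(r.getBytesFixed()));  // lazily serialized
    }
}
```

Lars's objection above is precisely that callers should not depend on this internal caching detail in the first place, which the model makes easy to see: the result of getBytes() differs depending on which constructor built the object.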
[jira] [Updated] (HBASE-8846) Revert the package name change for TableExistsException
[ https://issues.apache.org/jira/browse/HBASE-8846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Devaraj Das updated HBASE-8846:
-------------------------------
    Attachment: 8846-2.txt

Thanks for pointing out the compilation issue, Ted. Attached is the right patch (I had forgotten to 'git add' a directory before generating the earlier patch).
[jira] [Commented] (HBASE-7671) Flushing memstore again after last failure could cause data loss
[ https://issues.apache.org/jira/browse/HBASE-7671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722148#comment-13722148 ]

Liang Xie commented on HBASE-7671:
----------------------------------

After a discussion with my colleague fenghh, we realized that there is still a data-loss issue after HBASE-7671 went in. For example, just after a flush failure happens, an HBaseAdmin.flush triggers internalFlushcache; because "this.rsServices != null && this.rsServices.isAborted()" is not set immediately in the current thread, this flush op can succeed and cause data loss. We fixed it by setting the *closing* flag explicitly before throwing the exception, and added a check of this flag to the "if (!writestate.flushing && writestate.writesEnabled)" statement.

> Flushing memstore again after last failure could cause data loss
> ----------------------------------------------------------------
>
>                 Key: HBASE-7671
>                 URL: https://issues.apache.org/jira/browse/HBASE-7671
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.94.6, 0.95.0
>            Reporter: chunhui shen
>            Assignee: chunhui shen
>             Fix For: 0.94.6, 0.95.0
>
>         Attachments: 7671-94.patch, HBASE-7671.patch, HBASE-7671v2.patch, HBASE-7671v3.patch, HBASE-7671v4.patch, HBASE-7671v5.patch
>
> See the following logs first:
> {code}
> 2013-01-23 18:58:38,801 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=9746535080, memsize=101.8m, into tmp file hdfs://dw77.kgb.sqa.cm4:9900/hbase-test3/writetest1/8dc14e35b4d7c0e481e0bb30849cff7d/.tmp/bebeeecc56364b6c8126cf1dc6782a25
> 2013-01-23 18:58:41,982 WARN org.apache.hadoop.hbase.regionserver.MemStore: Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt?
> 2013-01-23 18:58:43,274 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=9746599334, memsize=101.8m, into tmp file hdfs://dw77.kgb.sqa.cm4:9900/hbase-test3/writetest1/8dc14e35b4d7c0e481e0bb30849cff7d/.tmp/4eede32dc469480bb3d469aaff332313
> {code}
> The first memstore flush failed in commitFile() (the first log line above) and triggered a server abort, but another flush came immediately (possibly caused by a move/split; the third log line above) and succeeded.
> For the same memstore snapshot we get different sequence ids, which causes data loss when replaying log edits.
> See details in the unit test case in the patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
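The fix described in the comment (set a closing flag synchronously before throwing, and check it before allowing another flush) can be modelled with plain Java. The class and method names below are illustrative, not HBase's; `closing` stands in for HRegion's write state:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class FlushGuard {
    private final AtomicBoolean closing = new AtomicBoolean(false);

    // Models the failing flush: raise the closing flag *before* propagating
    // the error, so a racing flush request cannot slip in while the region
    // server abort is still in progress on another thread.
    void flushFails() {
        closing.set(true);
        throw new IllegalStateException("flush failed, aborting region server");
    }

    // Models the guard added to the "writestate.flushing && writesEnabled"
    // check: refuse a new flush once the closing flag is up, since it would
    // otherwise reuse the stale memstore snapshot under a new sequence id.
    boolean tryFlush() {
        return !closing.get();
    }

    public static void main(String[] args) {
        FlushGuard region = new FlushGuard();
        try {
            region.flushFails();
        } catch (IllegalStateException expected) {
            // abort is now pending
        }
        System.out.println("second flush allowed? " + region.tryFlush());
    }
}
```

The key ordering property is that the flag is visible to other threads before the exception escapes, which is what the race in the original code was missing.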
[jira] [Updated] (HBASE-8663) a HBase Shell command to list the tables replicated (from or to) current cluster
[ https://issues.apache.org/jira/browse/HBASE-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anoop Sam John updated HBASE-8663:
----------------------------------
    Status: Patch Available  (was: Open)
> Here is an example, and a prototype for the master cluster:
> {code: title=hbase shell command:list_replicated_tables |borderStyle=solid}
> hbase(main):001:0> list_replicated_tables
> TABLE      COLUMNFAMILY   TARGET_CLUSTER
> scores     course         hdtest017.svl.ibm.com:2181:/hbase
> t3_dn      cf1            hdtest017.svl.ibm.com:2181:/hbase
> usertable  family         hdtest017.svl.ibm.com:2181:/hbase
> 3 row(s) in 0.3380 seconds
> {code}
> {code: title=method to return all columnfamilies replicated from this cluster |borderStyle=solid}
> /**
>  * ReplicationAdmin.listReplicated
>  * @return List of the replicated columnfamilies of this cluster for display.
>  * @throws IOException
>  */
> public List<String[]> listReplicated() throws IOException {
>   List<String[]> replicatedColFams = new ArrayList<String[]>();
>
>   HTableDescriptor[] tables = this.connection.listTables();
>   Map<String, String> peers = listPeers();
>
>   for (HTableDescriptor table : tables) {
>     HColumnDescriptor[] columns = table.getColumnFamilies();
>     String tableName = table.getNameAsString();
>     for (HColumnDescriptor column : columns) {
>       int scope = column.getScope();
>       if (scope != 0) {
>         String[] replicatedEntry = new String[3];
>         replicatedEntry[0] = tableName;
>         replicatedEntry[1] = column.getNameAsString();
>         replicatedEntry[2] = peers.get(Integer.toString(scope));
>         replicatedColFams.add(replicatedEntry);
>       }
>     }
>   }
>
>   return replicatedColFams;
> }
> {code}
[jira] [Created] (HBASE-9070) Properly clean up snapshots in tearDown() method of snapshot related tests
Ted Yu created HBASE-9070:
-----------------------------

             Summary: Properly clean up snapshots in tearDown() method of snapshot related tests
                 Key: HBASE-9070
                 URL: https://issues.apache.org/jira/browse/HBASE-9070
             Project: HBase
          Issue Type: Test
            Reporter: Ted Yu
            Assignee: Ted Yu

During code review of HBASE-9058, it was found that some snapshot related tests remove the snapshot directory in their tearDown() method. This is not the proper way to clean up. We should iterate through the existing snapshots and delete each of them.
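The iterate-and-delete cleanup suggested above can be modelled with plain Java. The real tests would go through the admin snapshot API (list the existing snapshots, then delete each by name); here a mutable list stands in for the cluster's snapshot set, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SnapshotCleanup {

    // Models the suggested tearDown(): delete every existing snapshot
    // individually, rather than removing the snapshot directory wholesale.
    static void deleteAllSnapshots(List<String> snapshots) {
        // Iterate over a copy so removal is safe while looping.
        for (String name : new ArrayList<>(snapshots)) {
            snapshots.remove(name);   // stands in for a per-snapshot delete call
        }
    }

    public static void main(String[] args) {
        List<String> snaps = new ArrayList<>(Arrays.asList("snap1", "snap2"));
        deleteAllSnapshots(snaps);
        System.out.println(snaps);   // []
    }
}
```

Deleting per snapshot keeps the cleanup within the supported API, so internal bookkeeping stays consistent, which removing the directory behind HBase's back does not guarantee.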
[jira] [Updated] (HBASE-9055) HBaseAdmin#isTableEnabled() should return false for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9055: -- Fix Version/s: 0.98.0 Hadoop Flags: Reviewed Integrated to trunk. Thanks for the review, Chunhui. > HBaseAdmin#isTableEnabled() should return false for non-existent table > --- > > Key: HBASE-9055 > URL: https://issues.apache.org/jira/browse/HBASE-9055 > Project: HBase > Issue Type: Bug >Affects Versions: 0.95.1 >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.98.0 > > Attachments: 9055-v1.txt, 9055-v2.txt, 9055-v3.txt > > > Currently HBaseAdmin#isTableEnabled() returns true for a table which doesn't > exist. > We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
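The fix amounts to guarding the enabled check with an existence check, so that a missing table reports false instead of true. Here is a toy model in plain Java (no HBase dependency; the class and map below are invented stand-ins for cluster state, not the actual HBaseAdmin code):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of "enabled implies exists": absent tables report false.
public class TableStateCheck {
    // table name -> enabled flag; a missing key means the table does not exist
    private final Map<String, Boolean> tables = new HashMap<String, Boolean>();

    public void create(String name) {
        tables.put(name, Boolean.TRUE);
    }

    public boolean isTableEnabled(String name) {
        Boolean enabled = tables.get(name);
        return enabled != null && enabled.booleanValue(); // false, not true, for a non-existent table
    }
}
```

The JIRA's intent is the same guard inside HBaseAdmin#isTableEnabled() itself, so every caller gets the corrected behavior.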
[jira] [Updated] (HBASE-8940) TestRegionMergeTransactionOnCluster#testWholesomeMerge may fail due to race in opening region
[ https://issues.apache.org/jira/browse/HBASE-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chunhui shen updated HBASE-8940: Attachment: 8940-addendum.patch Upping the timeouts from 30s to 120s in the addendum patch. I will commit it if there is no objection > TestRegionMergeTransactionOnCluster#testWholesomeMerge may fail due to race > in opening region > - > > Key: HBASE-8940 > URL: https://issues.apache.org/jira/browse/HBASE-8940 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: chunhui shen > Fix For: 0.95.2 > > Attachments: 8940-addendum.patch, 8940-trunk-v2.patch, 8940-v1.txt, > 8940v3.txt > > > From > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/org.apache.hbase$hbase-server/395/testReport/org.apache.hadoop.hbase.regionserver/TestRegionMergeTransactionOnCluster/testWholesomeMerge/ > : > {code} > 013-07-11 09:33:44,154 INFO [AM.ZK.Worker-pool-2-thread-2] > master.RegionStates(309): Offlined 3ffefd878a234031675de6b2c70b2ead from > ip-10-174-118-204.us-west-1.compute.internal,60498,1373535184820 > 2013-07-11 09:33:44,154 INFO [AM.ZK.Worker-pool-2-thread-2] > master.AssignmentManager$4(1223): The master has opened > testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead. 
> that was online on > ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 > 2013-07-11 09:33:44,182 DEBUG [RS_OPEN_REGION-ip-10-174-118-204:59210-1] > zookeeper.ZKAssign(862): regionserver:59210-0x13fcd13a20c0002 Successfully > transitioned node 3ffefd878a234031675de6b2c70b2ead from RS_ZK_REGION_OPENING > to RS_ZK_REGION_OPENED > 2013-07-11 09:33:44,182 INFO > [MASTER_TABLE_OPERATIONS-ip-10-174-118-204:39405-0] > handler.DispatchMergingRegionHandler(154): Failed send MERGE REGIONS RPC to > server ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 for > region > testWholesomeMerge,,1373535210124.efcb10dcfa250e31bfd50dc6c7049f32.,testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead., > focible=false, org.apache.hadoop.hbase.exceptions.RegionOpeningException: > Region is being opened: 3ffefd878a234031675de6b2c70b2ead > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2566) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3862) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.mergeRegions(HRegionServer.java:3649) > at > org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14400) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2124) > at > org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1831) > 2013-07-11 09:33:44,182 DEBUG [RS_OPEN_REGION-ip-10-174-118-204:59210-1] > handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: > {ENCODED => 3ffefd878a234031675de6b2c70b2ead, NAME => > 'testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead.', > STARTKEY => 'testRow0020', ENDKEY => 'testRow0040'}, server: > ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 > 2013-07-11 09:33:44,183 DEBUG [RS_OPEN_REGION-ip-10-174-118-204:59210-1] > handler.OpenRegionHandler(186): Opened > 
testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead. > on server:ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 > {code} > We can see that MASTER_TABLE_OPERATIONS thread couldn't get region > 3ffefd878a234031675de6b2c70b2ead because RS_OPEN_REGION thread finished > region opening 1 millisecond later. > One solution is to retry operation when receiving RegionOpeningException -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9055) HBaseAdmin#isTableEnabled() should return false for non-existent table
[ https://issues.apache.org/jira/browse/HBASE-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9055: -- Summary: HBaseAdmin#isTableEnabled() should return false for non-existent table (was: HBaseAdmin#isTableEnabled() should check table existence) > HBaseAdmin#isTableEnabled() should return false for non-existent table > --- > > Key: HBASE-9055 > URL: https://issues.apache.org/jira/browse/HBASE-9055 > Project: HBase > Issue Type: Bug >Affects Versions: 0.95.1 >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 9055-v1.txt, 9055-v2.txt, 9055-v3.txt > > > Currently HBaseAdmin#isTableEnabled() returns true for a table which doesn't > exist. > We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9055) HBaseAdmin#isTableEnabled() should check table existence
[ https://issues.apache.org/jira/browse/HBASE-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722103#comment-13722103 ] chunhui shen commented on HBASE-9055: - +1 on patch v3 > HBaseAdmin#isTableEnabled() should check table existence > > > Key: HBASE-9055 > URL: https://issues.apache.org/jira/browse/HBASE-9055 > Project: HBase > Issue Type: Bug >Affects Versions: 0.95.1 >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 9055-v1.txt, 9055-v2.txt, 9055-v3.txt > > > Currently HBaseAdmin#isTableEnabled() returns true for a table which doesn't > exist. > We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9032) Result.getBytes() returns null if backed by KeyValue array
[ https://issues.apache.org/jira/browse/HBASE-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722101#comment-13722101 ] Jean-Marc Spaggiari commented on HBASE-9032: Excellent. Thanks [~adityakishore]! I'm +1. You have +1 from Stack, so I guess you just need another +1 from another committer to get it pushed. [~lhofhansl], good for you? > Result.getBytes() returns null if backed by KeyValue array > -- > > Key: HBASE-9032 > URL: https://issues.apache.org/jira/browse/HBASE-9032 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.94.9 >Reporter: Aditya Kishore >Assignee: Aditya Kishore > Fix For: 0.94.11 > > Attachments: HBASE-9032.patch, HBASE-9032.patch, HBASE-9032.patch, > HBASE-9032.patch > > > This applies only to 0.94 (and earlier) branch. > If the Result object was constructed using either of Result(KeyValue[]) or > Result(List), calling Result.getBytes() returns null instead of the > serialized ImmutableBytesWritable object. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9032) Result.getBytes() returns null if backed by KeyValue array
[ https://issues.apache.org/jira/browse/HBASE-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aditya Kishore updated HBASE-9032: -- Attachment: HBASE-9032.patch Sure. Patch modified and attached. > Result.getBytes() returns null if backed by KeyValue array > -- > > Key: HBASE-9032 > URL: https://issues.apache.org/jira/browse/HBASE-9032 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.94.9 >Reporter: Aditya Kishore >Assignee: Aditya Kishore > Fix For: 0.94.11 > > Attachments: HBASE-9032.patch, HBASE-9032.patch, HBASE-9032.patch, > HBASE-9032.patch > > > This applies only to 0.94 (and earlier) branch. > If the Result object was constructed using either of Result(KeyValue[]) or > Result(List), calling Result.getBytes() returns null instead of the > serialized ImmutableBytesWritable object. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8755) A new write thread model for HLog to improve the overall HBase write throughput
[ https://issues.apache.org/jira/browse/HBASE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722098#comment-13722098 ] Jean-Marc Spaggiari commented on HBASE-8755: Hi [~fenghh], I ran in pseudo-distributed mode for all the tests, which means HBase ran against a real HDFS. Tests with 1.1.2 and 1.2.0. The test was a 10 million randomWrite test. Ran 10 of them. I will re-run some tests against a single node (still in the process of installing the OS on the other one) and will put more load against it. Like you said, 200 threads doing 100 000 writes each... More to come... > A new write thread model for HLog to improve the overall HBase write > throughput > --- > > Key: HBASE-8755 > URL: https://issues.apache.org/jira/browse/HBASE-8755 > Project: HBase > Issue Type: Improvement > Components: wal >Reporter: Feng Honghua > Attachments: HBASE-8755-0.94-V0.patch, HBASE-8755-0.94-V1.patch, > HBASE-8755-trunk-V0.patch, HBASE-8755-trunk-V1.patch > > > In the current write model, each write handler thread (executing put()) will > individually go through a full 'append (hlog local buffer) => HLog writer > append (write to hdfs) => HLog writer sync (sync hdfs)' cycle for each write, > which incurs heavy contention on updateLock and flushLock. > The only existing optimization, checking whether the current syncTillHere > txid in > the hope that another thread will write/sync this txid to hdfs so that the > write/sync can be omitted, actually helps much less than expected. > Three of my colleagues (Ye Hangjun / Wu Zesheng / Zhang Peng) at Xiaomi > proposed a new write thread model for writing hdfs sequence files, and the > prototype implementation shows a 4X improvement in throughput (from 17000 to > 7+). 
> I applied this new write thread model in HLog, and the performance test in our > test cluster shows about a 3X throughput improvement (from 12150 to 31520 for 1 > RS, from 22000 to 7 for 5 RS); the 1 RS write throughput (1K row-size) > even beats that of BigTable (the Percolator paper published in 2011 says Bigtable's > write throughput then is 31002). I can provide the detailed performance test > results if anyone is interested. > The changes for the new write thread model are as below: > 1> All put handler threads append the edits to HLog's local pending buffer; > (each notifies the AsyncWriter thread that there are new edits in the local buffer) > 2> All put handler threads wait in the HLog.syncer() function for the underlying > threads to finish the sync that contains their txid; > 3> A single AsyncWriter thread is responsible for retrieving all the buffered > edits in HLog's local pending buffer and writing them to hdfs > (hlog.writer.append); (it notifies the AsyncFlusher thread that there are new > writes to hdfs that need a sync) > 4> A single AsyncFlusher thread is responsible for issuing a sync to hdfs > to persist the writes by the AsyncWriter; (it notifies the AsyncNotifier thread > that the sync watermark has increased) > 5> A single AsyncNotifier thread is responsible for notifying all pending > put handler threads which are waiting in the HLog.syncer() function > 6> No LogSyncer thread any more (since the > AsyncWriter/AsyncFlusher threads always do the same job it did) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
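The six steps above can be sketched as a compact group-commit skeleton. This is a simplified model with invented names (GroupCommit, drainAndSync), not the actual HLog patch: the AsyncWriter/AsyncFlusher/AsyncNotifier roles are collapsed into a single drain step, and the HDFS append/sync is stubbed out.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the proposed write pipeline: handlers append and wait on a
// txid; one background drain plays the writer/flusher/notifier roles.
public class GroupCommit {
    private final List<byte[]> buffer = new ArrayList<byte[]>();
    private long lastAssigned;   // txid handed out by the most recent append
    private long syncedTillHere; // highest txid known to be durable

    // Step 1: a put handler appends its edit to the local pending buffer.
    public synchronized long append(byte[] edit) {
        buffer.add(edit);
        notifyAll();             // wake the writer thread
        return ++lastAssigned;
    }

    // Step 2: a put handler blocks until the sync watermark covers its txid.
    public synchronized void syncer(long txid) throws InterruptedException {
        while (syncedTillHere < txid) {
            wait();
        }
    }

    // Steps 3-5 collapsed for brevity: drain everything buffered, pretend to
    // append+sync it to HDFS, advance the watermark, and wake all waiters.
    public synchronized void drainAndSync() {
        long upTo = lastAssigned;
        buffer.clear();          // stands in for hlog.writer.append + sync
        syncedTillHere = upTo;
        notifyAll();             // the AsyncNotifier role
    }
}
```

The win over the per-handler model is that many txids ride on one append+sync cycle instead of each handler racing through the full cycle under updateLock/flushLock.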
[jira] [Updated] (HBASE-9050) HBaseClient#call could hang
[ https://issues.apache.org/jira/browse/HBASE-9050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-9050: --- Resolution: Fixed Fix Version/s: 0.94.11 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Integrated into 0.94. Thanks. > HBaseClient#call could hang > --- > > Key: HBASE-9050 > URL: https://issues.apache.org/jira/browse/HBASE-9050 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.94.10 >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Fix For: 0.94.11 > > Attachments: 0.94-9050.patch > > > In HBaseClient#call, we have > {code} > connection.sendParam(call); // send the parameter > boolean interrupted = false; > //noinspection SynchronizationOnLocalVariableOrMethodParameter > synchronized (call) { > while (!call.done) { > try { > call.wait(); // wait for the result > {code} > sendParam could do nothing if the connection is closed right after the call > is added into the queue. Since the connection is closed, we won't get any > response, therefore, we won't get any notify call. So we will keep waiting > here for something that won't happen. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
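The hang described here is a lost wakeup: the caller waits on the Call monitor, but a connection closed right after the call is queued never produces a response, so nothing ever notifies it. Below is a minimal sketch of the shape of such a fix, with invented names rather than HBase's actual HBaseClient internals: on close, every pending call is completed with an error, and the wait is bounded so even a missed notify is eventually rechecked.

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of pending-call cleanup (names invented for illustration).
public class CallCleanup {
    static class Call {
        boolean done;
        Throwable error;

        // Completing a call (normally or with an error) must notify waiters.
        synchronized void complete(Throwable t) {
            error = t;
            done = true;
            notifyAll();
        }

        // Bounded wait in a loop: a missed notify is rechecked after 1s.
        synchronized void await() throws InterruptedException {
            while (!done) {
                wait(1000);
            }
        }
    }

    final Map<Integer, Call> pending = new ConcurrentHashMap<Integer, Call>();

    // On connection close, fail every queued call instead of dropping it;
    // otherwise a caller blocked in await() would wait for a response that
    // can never arrive.
    void closeConnection() {
        for (Call c : pending.values()) {
            c.complete(new IOException("connection closed"));
        }
        pending.clear();
    }
}
```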
[jira] [Commented] (HBASE-9031) ImmutableBytesWritable.toString() should downcast the bytes before converting to hex string
[ https://issues.apache.org/jira/browse/HBASE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722081#comment-13722081 ] Hadoop QA commented on HBASE-9031: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12594611/HBASE-9031.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6504//console This message is automatically generated. > ImmutableBytesWritable.toString() should downcast the bytes before converting > to hex string > --- > > Key: HBASE-9031 > URL: https://issues.apache.org/jira/browse/HBASE-9031 > Project: HBase > Issue Type: Bug > Components: io >Affects Versions: 0.95.1, 0.94.9 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.95.2 > > Attachments: HBASE-9031.patch, HBASE-9031.patch, HBASE-9031.patch > > > The attached patch addresses few issues. > # We need only (3*this.length) capacity in ByteBuffer and not > (3*this.bytes.length). 
> # Do not calculate (offset + length) at every iteration. > # No test is required at every iteration to add space (' ') before every byte > other than the first one. Uses {{sb.substring(1)}} instead. > # Finally and most importantly (the original issue of this report), downcast > the promoted int (the parameter to {{Integer.toHexString()}}) to byte range. > Without #4, the byte array \{54,125,64, -1, -45\} is transformed to "36 7d 40 > ffd3" instead of "36 7d 40 ff d3". -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
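Point #4 above is a classic sign-extension pitfall: Integer.toHexString(int) widens a negative byte to 32 bits, so (byte) -45 prints as "ffffffd3" and runs into its neighbor. A self-contained sketch of the listed fixes (the method name and omission of zero-padding are illustrative, not the exact ImmutableBytesWritable code):

```java
public class HexDump {
    // Renders bytes[offset..offset+length) as space-separated hex, mirroring
    // the fixes above: capacity 3*length, (offset + length) computed once, a
    // leading space per byte trimmed via substring(1), and the byte masked
    // with 0xFF before Integer.toHexString() sees it.
    static String toHex(byte[] bytes, int offset, int length) {
        StringBuilder sb = new StringBuilder(3 * length);
        int end = offset + length;           // fix #2: computed once, not per iteration
        for (int i = offset; i < end; i++) {
            sb.append(' ');                  // fix #3: unconditional space, trimmed below
            sb.append(Integer.toHexString(bytes[i] & 0xFF)); // fix #4: mask to 0..255
        }
        return length == 0 ? "" : sb.substring(1);
    }

    public static void main(String[] args) {
        byte[] data = {54, 125, 64, -1, -45};
        // Without the mask, -45 widens to 0xFFFFFFD3 before conversion.
        System.out.println(Integer.toHexString(-45)); // ffffffd3
        System.out.println(toHex(data, 0, 5));        // 36 7d 40 ff d5 -> prints "36 7d 40 ff d3"
    }
}
```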
[jira] [Commented] (HBASE-6143) Make region assignment smarter when regions are re-enabled.
[ https://issues.apache.org/jira/browse/HBASE-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722074#comment-13722074 ] Lars Hofhansl commented on HBASE-6143: -- In 0.94 I still think it would be best to just do the same thing that AssignmentManager.joinCluster() does, namely eventually calling AssignmentManager.assign(Map regions). If there's agreement around that, I'll come up with a patch. > Make region assignment smarter when regions are re-enabled. > --- > > Key: HBASE-6143 > URL: https://issues.apache.org/jira/browse/HBASE-6143 > Project: HBase > Issue Type: Improvement >Reporter: Elliott Clark >Assignee: Ted Yu >Priority: Critical > Fix For: 0.95.2 > > Attachments: 6143-v1.txt, 6143-v2.txt, 6143-v3.txt, HBASE-6143-0.patch > > > Right now a random region server is picked when re-enabling a table. This > could be much smarter. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9032) Result.getBytes() returns null if backed by KeyValue array
[ https://issues.apache.org/jira/browse/HBASE-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722071#comment-13722071 ] Jean-Marc Spaggiari commented on HBASE-9032: Ok, I see. There can be 2 cases here. 1) r1.getBytes() can be null (before your patch). 2) r1.getBytes() can be different than expected. If r1.getBytes() returns null, the Result() constructor will still work but the comparison will fail. If r1.getBytes() works but returns a different result than expected, the comparison will fail too. So you don't really have a differentiation between the 2 cases. Adding Assert.assertNotNull(r1.getBytes()) will allow us to differentiate that. In both cases, the test will fail, but that will help to know what failed in it. So, forget what I said about " instead of Result r2 = new", it should have been " in addition to Result.compareResults(r1, r2);" (just before). Make sense? > Result.getBytes() returns null if backed by KeyValue array > -- > > Key: HBASE-9032 > URL: https://issues.apache.org/jira/browse/HBASE-9032 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.94.9 >Reporter: Aditya Kishore >Assignee: Aditya Kishore > Fix For: 0.94.11 > > Attachments: HBASE-9032.patch, HBASE-9032.patch, HBASE-9032.patch > > > This applies only to 0.94 (and earlier) branch. > If the Result object was constructed using either of Result(KeyValue[]) or > Result(List), calling Result.getBytes() returns null instead of the > serialized ImmutableBytesWritable object. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
[ https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722072#comment-13722072 ] stack commented on HBASE-8960: -- [~jeffreyz] Thanks. > TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes > -- > > Key: HBASE-8960 > URL: https://issues.apache.org/jira/browse/HBASE-8960 > Project: HBase > Issue Type: Task > Components: test >Reporter: Jimmy Xiang >Assignee: Jeffrey Zhong >Priority: Minor > Fix For: 0.95.2 > > Attachments: hbase-8960.patch > > > http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/ > {noformat} > java.lang.AssertionError: expected:<1000> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat} -- This message is 
automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8940) TestRegionMergeTransactionOnCluster#testWholesomeMerge may fail due to race in opening region
[ https://issues.apache.org/jira/browse/HBASE-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722070#comment-13722070 ] stack commented on HBASE-8940: -- [~zjushch] That the build box is loaded should be a given. Would you suggest upping the timeouts on these tests? I can do it no problem. Just say (thanks for digging in). > TestRegionMergeTransactionOnCluster#testWholesomeMerge may fail due to race > in opening region > - > > Key: HBASE-8940 > URL: https://issues.apache.org/jira/browse/HBASE-8940 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: chunhui shen > Fix For: 0.95.2 > > Attachments: 8940-trunk-v2.patch, 8940-v1.txt, 8940v3.txt > > > From > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/org.apache.hbase$hbase-server/395/testReport/org.apache.hadoop.hbase.regionserver/TestRegionMergeTransactionOnCluster/testWholesomeMerge/ > : > {code} > 013-07-11 09:33:44,154 INFO [AM.ZK.Worker-pool-2-thread-2] > master.RegionStates(309): Offlined 3ffefd878a234031675de6b2c70b2ead from > ip-10-174-118-204.us-west-1.compute.internal,60498,1373535184820 > 2013-07-11 09:33:44,154 INFO [AM.ZK.Worker-pool-2-thread-2] > master.AssignmentManager$4(1223): The master has opened > testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead. 
> that was online on > ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 > 2013-07-11 09:33:44,182 DEBUG [RS_OPEN_REGION-ip-10-174-118-204:59210-1] > zookeeper.ZKAssign(862): regionserver:59210-0x13fcd13a20c0002 Successfully > transitioned node 3ffefd878a234031675de6b2c70b2ead from RS_ZK_REGION_OPENING > to RS_ZK_REGION_OPENED > 2013-07-11 09:33:44,182 INFO > [MASTER_TABLE_OPERATIONS-ip-10-174-118-204:39405-0] > handler.DispatchMergingRegionHandler(154): Failed send MERGE REGIONS RPC to > server ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 for > region > testWholesomeMerge,,1373535210124.efcb10dcfa250e31bfd50dc6c7049f32.,testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead., > focible=false, org.apache.hadoop.hbase.exceptions.RegionOpeningException: > Region is being opened: 3ffefd878a234031675de6b2c70b2ead > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2566) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3862) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.mergeRegions(HRegionServer.java:3649) > at > org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14400) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2124) > at > org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1831) > 2013-07-11 09:33:44,182 DEBUG [RS_OPEN_REGION-ip-10-174-118-204:59210-1] > handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: > {ENCODED => 3ffefd878a234031675de6b2c70b2ead, NAME => > 'testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead.', > STARTKEY => 'testRow0020', ENDKEY => 'testRow0040'}, server: > ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 > 2013-07-11 09:33:44,183 DEBUG [RS_OPEN_REGION-ip-10-174-118-204:59210-1] > handler.OpenRegionHandler(186): Opened > 
testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead. > on server:ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 > {code} > We can see that MASTER_TABLE_OPERATIONS thread couldn't get region > 3ffefd878a234031675de6b2c70b2ead because RS_OPEN_REGION thread finished > region opening 1 millisecond later. > One solution is to retry operation when receiving RegionOpeningException -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6143) Make region assignment smarter when regions are re-enabled.
[ https://issues.apache.org/jira/browse/HBASE-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-6143: - Fix Version/s: 0.95.2 > Make region assignment smarter when regions are re-enabled. > --- > > Key: HBASE-6143 > URL: https://issues.apache.org/jira/browse/HBASE-6143 > Project: HBase > Issue Type: Improvement >Reporter: Elliott Clark >Assignee: Ted Yu >Priority: Critical > Fix For: 0.95.2 > > Attachments: 6143-v1.txt, 6143-v2.txt, 6143-v3.txt, HBASE-6143-0.patch > > > Right now a random region server is picked when re-enabling a table. This > could be much smarter. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9031) ImmutableBytesWritable.toString() should downcast the bytes before converting to hex string
[ https://issues.apache.org/jira/browse/HBASE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9031: - Attachment: HBASE-9031.patch Sending it by hadoopqa again > ImmutableBytesWritable.toString() should downcast the bytes before converting > to hex string > --- > > Key: HBASE-9031 > URL: https://issues.apache.org/jira/browse/HBASE-9031 > Project: HBase > Issue Type: Bug > Components: io >Affects Versions: 0.95.1, 0.94.9 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.95.2 > > Attachments: HBASE-9031.patch, HBASE-9031.patch, HBASE-9031.patch > > > The attached patch addresses few issues. > # We need only (3*this.length) capacity in ByteBuffer and not > (3*this.bytes.length). > # Do not calculate (offset + length) at every iteration. > # No test is required at every iteration to add space (' ') before every byte > other than the first one. Uses {{sb.substring(1)}} instead. > # Finally and most importantly (the original issue of this report), downcast > the promoted int (the parameter to {{Integer.toHexString()}}) to byte range. > Without #4, the byte array \{54,125,64, -1, -45\} is transformed to "36 7d 40 > ffd3" instead of "36 7d 40 ff d3". -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8663) a HBase Shell command to list the tables replicated (from or to) current cluster
[ https://issues.apache.org/jira/browse/HBASE-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Demai Ni updated HBASE-8663: Attachment: HBASE-8663-trunk-v0.patch Uploaded a patch for trunk for a hadoopQA run. There is one more improvement to consider: check the value before setReplicationMaster(). The idea is to use the input string, which is supposed to be the ZK quorum to connect to the Master cluster, then list_peers from the Master, and finally check whether the peers contain the slave cluster. Kind of straightforward if done from the client; I haven't figured out how to do it through HColumnDescriptor.java yet. > a HBase Shell command to list the tables replicated (from or to) current > cluster > > > Key: HBASE-8663 > URL: https://issues.apache.org/jira/browse/HBASE-8663 > Project: HBase > Issue Type: New Feature > Components: Replication, shell > Environment: clusters setup as Master and Slave for replication of > tables >Reporter: Demai Ni >Assignee: Demai Ni >Priority: Critical > Attachments: HBASE-8663.PATCH, HBASE-8663-trunk-v0.patch, > HBASE-8663-v2.PATCH > > > This jira is to provide an hbase shell command which can give users an > overview of the tables/columnfamilies currently being replicated. The > information will help system administrators with design and planning, and also > help application programmers know which tables/columns should be > watched out for (for example, not to modify a replicated columnfamily on the slave > cluster) > Currently there is no easy way to tell which table(s)/columnfamily(ies) are > replicated from or to a particular cluster. > > On the Master Cluster, an indirect method can be used by combining two steps: 1) > $describe 'usertable' and 2) $list_peers to map the REPLICATION_SCOPE to the > target (aka slave) cluster > > On the slave cluster, there is no existing API/method to list all the tables > replicated to this cluster. 
> Here is an example, and a prototype for the Master cluster > {code: title=hbase shell command:list_replicated_tables |borderStyle=solid} > hbase(main):001:0> list_replicated_tables > TABLE COLUMNFAMILY TARGET_CLUSTER > scores course hdtest017.svl.ibm.com:2181:/hbase > t3_dn cf1 hdtest017.svl.ibm.com:2181:/hbase > usertable family hdtest017.svl.ibm.com:2181:/hbase > 3 row(s) in 0.3380 seconds > {code} > {code: title=method to return all columnfamilies replicated from this cluster > |borderStyle=solid} > /** > * ReplicationAdmin.listReplicated > * @return List of the replicated columnfamilies of this cluster for > display. > * @throws IOException > */ > public List<String[]> listReplicated() throws IOException { > List<String[]> replicatedColFams = new ArrayList<String[]>(); > > HTableDescriptor[] tables = this.connection.listTables(); > > Map<String, String> peers = listPeers(); > > for (HTableDescriptor table : tables) { > HColumnDescriptor[] columns = table.getColumnFamilies(); > String tableName = table.getNameAsString(); > for (HColumnDescriptor column : columns) { > int scope = column.getScope(); > > if (scope != 0) { > String[] replicatedEntry = new String[3]; > replicatedEntry[0] = tableName; > replicatedEntry[1] = column.getNameAsString(); > replicatedEntry[2] = peers.get(Integer.toString(scope)); > replicatedColFams.add(replicatedEntry); > } > } > } > > return replicatedColFams; > } > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6143) Make region assignment smarter when regions are re-enabled.
[ https://issues.apache.org/jira/browse/HBASE-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-6143: - Priority: Critical (was: Major) > Make region assignment smarter when regions are re-enabled. > --- > > Key: HBASE-6143 > URL: https://issues.apache.org/jira/browse/HBASE-6143 > Project: HBase > Issue Type: Improvement >Reporter: Elliott Clark >Assignee: Ted Yu >Priority: Critical > Attachments: 6143-v1.txt, 6143-v2.txt, 6143-v3.txt, HBASE-6143-0.patch > > > Right now a random region server is picked when re-enabling a table. This > could be much smarter. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6143) Make region assignment smarter when regions are re-enabled.
[ https://issues.apache.org/jira/browse/HBASE-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722066#comment-13722066 ] Lars Hofhansl commented on HBASE-6143: -- We should probably be doing different patches in 0.94 and 0.95+. Generally, losing data locality after a simple disable/enable is absolutely unacceptable! > Make region assignment smarter when regions are re-enabled. > --- > > Key: HBASE-6143 > URL: https://issues.apache.org/jira/browse/HBASE-6143 > Project: HBase > Issue Type: Improvement >Reporter: Elliott Clark >Assignee: Ted Yu >Priority: Critical > Attachments: 6143-v1.txt, 6143-v2.txt, 6143-v3.txt, HBASE-6143-0.patch > > > Right now a random region server is picked when re-enabling a table. This > could be much smarter. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8974) bin/rolling-restart.sh restarts all active RS's with each iteration instead of one at a time
[ https://issues.apache.org/jira/browse/HBASE-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-8974: - Priority: Critical (was: Major) Fix Version/s: 0.95.2 Rolling restart is an essential building block. Issues in it are critical > bin/rolling-restart.sh restarts all active RS's with each iteration instead > of one at a time > > > Key: HBASE-8974 > URL: https://issues.apache.org/jira/browse/HBASE-8974 > Project: HBase > Issue Type: Bug > Components: scripts >Reporter: Nick Dimiduk >Priority: Critical > Fix For: 0.95.2 > > > I'm exercising the patch over on HBASE-8803 and I've noticed something in the > logs: it looks like {{rolling-restart.sh}} is restarting all the region > servers multiple times instead of just the current entry in the loop > iteration. > The logic looks like this: > {noformat} > for each rs in active region server list: > unload $rs // move all regions to other RS's > restart all Region Servers // !?! bug? > reload $rs // pile 'em back on > {noformat} > Shouldn't that step 2 be only {{restart $rs}}? > This is what I see in the logs. My cluster has 9 active RegionServers. Notice > the bit in the middle where all 9 are stopped and started again after > unloading the target RS. > {noformat} > $ time /usr/lib/hbase/bin/rolling-restart.sh --rs-only --graceful > --maxthreads 30 > > Gracefully restarting: hor18n39.gq1.ygridcore.net > Disabling balancer! > ... > Unloading hor18n39.gq1.ygridcore.net region(s) > ... 
> Valid region move targets: > hor18n37.gq1.ygridcore.net,60020,1374094975268 > hor17n37.gq1.ygridcore.net,60020,1374094975264 > hor18n35.gq1.ygridcore.net,60020,1374094975327 > hor17n39.gq1.ygridcore.net,60020,1374094975281 > hor18n36.gq1.ygridcore.net,60020,1374094975254 > hor17n36.gq1.ygridcore.net,60020,1374094975277 > hor17n34.gq1.ygridcore.net,60020,1374094975291 > hor18n38.gq1.ygridcore.net,60020,1374094975259 > 13/07/17 21:44:38 INFO region_mover: Moving 330 region(s) from > hor18n39.gq1.ygridcore.net,60020,1374094975326 during this cycle > 13/07/17 21:44:38 INFO region_mover: Moving region > b59050cf97aabcef838e3c50e93e6d13 (1 of 330) to > server=hor18n37.gq1.ygridcore.net,60020,1374094975268 > ... > 13/07/17 21:54:20 INFO region_mover: Moving region > d00026d7cc396bb3e6ea91106cc6ab55 (329 of 330) to > server=hor18n37.gq1.ygridcore.net,60020,1374094975268 > 13/07/17 21:54:20 INFO region_mover: Moving region > a722179b33e6ece8c9cee3fba3056acd (330 of 330) to > server=hor17n37.gq1.ygridcore.net,60020,1374094975264 > 13/07/17 21:54:21 INFO region_mover: Wrote list of moved regions to > /tmp/hor18n39.gq1.ygridcore.net > Unloaded hor18n39.gq1.ygridcore.net region(s) > hor18n35.gq1.ygridcore.net: stopping regionserver. > hor17n39.gq1.ygridcore.net: stopping regionserver. > hor18n36.gq1.ygridcore.net: stopping regionserver. > hor17n37.gq1.ygridcore.net: stopping regionserver. > hor17n34.gq1.ygridcore.net: stopping regionserver. > hor18n38.gq1.ygridcore.net: stopping regionserver. > hor18n37.gq1.ygridcore.net: stopping regionserver. > hor17n36.gq1.ygridcore.net: stopping regionserver. > hor18n39.gq1.ygridcore.net: stopping regionserver. 
> hor18n36.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n36.gq1.ygridcore.net.out > hor17n36.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor17n36.gq1.ygridcore.net.out > hor17n37.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor17n37.gq1.ygridcore.net.out > hor18n37.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n37.gq1.ygridcore.net.out > hor18n38.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n38.gq1.ygridcore.net.out > hor17n34.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor17n34.gq1.ygridcore.net.out > hor18n35.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n35.gq1.ygridcore.net.out > hor18n39.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n39.gq1.ygridcore.net.out > hor17n39.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor17n39.gq1.ygridcore.net.out > Reloading hor18n39.gq1.ygridcore.net region(s) > ... > 13/07/17 21:54:27 INFO region_mover: Moving 330 regions to > hor18n39.gq1.ygridcore.net,600
[jira] [Updated] (HBASE-9035) Incorrect example for using a scan stopRow in HBase book
[ https://issues.apache.org/jira/browse/HBASE-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9035: - Resolution: Fixed Fix Version/s: 0.98.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Applied to trunk. Will show on site next time I roll the build out. Thanks for the patch [~gabriel.reid] > Incorrect example for using a scan stopRow in HBase book > > > Key: HBASE-9035 > URL: https://issues.apache.org/jira/browse/HBASE-9035 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Gabriel Reid > Fix For: 0.98.0 > > Attachments: HBASE-9035.patch > > > The example of how to use a stop row in a scan in Section 5.7.3 of the > HBase book [1] is incorrect. It demonstrates using a start and stop row to > only retrieve records with a given prefix, creating the stop row by appending > a null byte to the start row. > This creates a scan that does not include any of the target rows, because > the stop row is less than the target rows via lexicographical sorting. > [1] http://hbase.apache.org/book/data_model_operations.html#scan -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
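A common way to build a correct exclusive stop row for a prefix scan is to increment the last incrementable byte of the prefix and truncate, rather than appending a null byte. The helper below is a hypothetical sketch of that technique, not code from the patch:

```java
import java.util.Arrays;

// Hypothetical helper: exclusive stop row for a prefix scan.
// Appending \x00 to the prefix yields a stop row that sorts below every row
// extending the prefix; incrementing the last non-0xFF byte and dropping the
// bytes after it gives the smallest byte string sorting after all of them.
class PrefixStopSketch {
    public static byte[] stopRowForPrefix(byte[] prefix) {
        int i = prefix.length - 1;
        while (i >= 0 && prefix[i] == (byte) 0xff) {
            i--;                              // 0xff cannot be incremented; skip it
        }
        if (i < 0) {
            return new byte[0];               // all 0xff: scan to end of table
        }
        byte[] stop = Arrays.copyOf(prefix, i + 1); // truncate trailing 0xff bytes
        stop[i]++;                            // increment last incrementable byte
        return stop;
    }
}
```

For the prefix "abc" this produces "abd", so a scan over ["abc", "abd") covers exactly the rows starting with "abc".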
[jira] [Commented] (HBASE-7980) TestZKInterProcessReadWriteLock fails occasionally in QA test run
[ https://issues.apache.org/jira/browse/HBASE-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722060#comment-13722060 ] stack commented on HBASE-7980: -- Here too: https://builds.apache.org/job/hbase-0.95/372/testReport/junit/org.apache.hadoop.hbase.zookeeper.lock/TestZKInterProcessReadWriteLock/testReadLockExcludesWriters/ > TestZKInterProcessReadWriteLock fails occasionally in QA test run > - > > Key: HBASE-7980 > URL: https://issues.apache.org/jira/browse/HBASE-7980 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Ted Yu > Fix For: 0.95.2 > > > {code} > testReadLockExcludesWriters(org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock) > Time elapsed: 0.003 sec <<< ERROR! > java.lang.Exception: test timed out after 3000 milliseconds > at sun.misc.Unsafe.park(Native Method) > {code} > You can find the test output here: > https://builds.apache.org/job/PreCommit-HBASE-Build/4634/artifact/trunk/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock-output.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HBASE-7980) TestZKInterProcessReadWriteLock fails occasionally in QA test run
[ https://issues.apache.org/jira/browse/HBASE-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack reopened HBASE-7980: -- Let me reopen this one to keep account of failings. See here: https://builds.apache.org/job/HBase-TRUNK/4306/testReport/junit/org.apache.hadoop.hbase.zookeeper.lock/TestZKInterProcessReadWriteLock/testWriteLockExcludesWriters/ > TestZKInterProcessReadWriteLock fails occasionally in QA test run > - > > Key: HBASE-7980 > URL: https://issues.apache.org/jira/browse/HBASE-7980 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu > > {code} > testReadLockExcludesWriters(org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock) > Time elapsed: 0.003 sec <<< ERROR! > java.lang.Exception: test timed out after 3000 milliseconds > at sun.misc.Unsafe.park(Native Method) > {code} > You can find the test output here: > https://builds.apache.org/job/PreCommit-HBASE-Build/4634/artifact/trunk/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock-output.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7980) TestZKInterProcessReadWriteLock fails occasionally in QA test run
[ https://issues.apache.org/jira/browse/HBASE-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7980: - Component/s: test Fix Version/s: 0.95.2 > TestZKInterProcessReadWriteLock fails occasionally in QA test run > - > > Key: HBASE-7980 > URL: https://issues.apache.org/jira/browse/HBASE-7980 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Ted Yu > Fix For: 0.95.2 > > > {code} > testReadLockExcludesWriters(org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock) > Time elapsed: 0.003 sec <<< ERROR! > java.lang.Exception: test timed out after 3000 milliseconds > at sun.misc.Unsafe.park(Native Method) > {code} > You can find the test output here: > https://builds.apache.org/job/PreCommit-HBASE-Build/4634/artifact/trunk/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock-output.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9069) TestThriftServerCmdLine.testRunThriftServer[18] fails
stack created HBASE-9069: Summary: TestThriftServerCmdLine.testRunThriftServer[18] fails Key: HBASE-9069 URL: https://issues.apache.org/jira/browse/HBASE-9069 Project: HBase Issue Type: Bug Components: test Reporter: stack Fix For: 0.95.2 This is the second time I've seen this fail. Anyone want to take a look? I am filing this as a placeholder to keep account of failures. https://builds.apache.org/job/HBase-TRUNK/4306/testReport/junit/org.apache.hadoop.hbase.thrift/TestThriftServerCmdLine/testRunThriftServer_18_/ -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9041) TestFlushSnapshotFromClient.testConcurrentSnapshottingAttempts fails
[ https://issues.apache.org/jira/browse/HBASE-9041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722057#comment-13722057 ] stack commented on HBASE-9041: -- Is this failure related [~mbertozzi] https://builds.apache.org/job/HBase-TRUNK/4306/testReport/junit/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testConcurrentSnapshottingAttempts/ > TestFlushSnapshotFromClient.testConcurrentSnapshottingAttempts fails > > > Key: HBASE-9041 > URL: https://issues.apache.org/jira/browse/HBASE-9041 > Project: HBase > Issue Type: Bug > Components: snapshots, test >Reporter: stack >Assignee: Matteo Bertozzi >Priority: Critical > Fix For: 0.95.2 > > Attachments: HBASE-9041-v0.patch, lessrows.txt, uppingrows.txt > > > Assigning Matteo to take a look (give back to me if you don't have time boss). > Failed here: > https://builds.apache.org/job/HBase-TRUNK/4293/testReport/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testConcurrentSnapshottingAttempts/ > Yesterday, it failed in a different place and for a different reason: > https://builds.apache.org/view/H-L/view/HBase/job/hbase-0.95/352/testReport/junit/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testFlushTableSnapshot/ > The latter test fail was noted on the tail of HBASE-8984. There I speculate that > it's the 'load' of 400. I don't think the load reporting is correct. Will > dig in on that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8939) Hanging unit tests
[ https://issues.apache.org/jira/browse/HBASE-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722056#comment-13722056 ] stack commented on HBASE-8939: -- https://builds.apache.org/job/HBase-TRUNK/4307/consoleText failed. If I compare it to https://builds.apache.org/job/HBase-TRUNK/4308/consoleText, the difference is TestDistributedLogSplitting. All tests have timeouts on them. This test does not show at the end as a zombie. This test in the bad build has no outputs in the surefire list here: https://builds.apache.org/job/HBase-TRUNK/4307/artifact/trunk/hbase-server/target/surefire-reports/ I'm kinda stumped on how to deal with this other than removing the test. Let me keep an eye on it. If it shows up again like this, will kill it. > Hanging unit tests > -- > > Key: HBASE-8939 > URL: https://issues.apache.org/jira/browse/HBASE-8939 > Project: HBase > Issue Type: Bug > Components: test >Reporter: stack > Fix For: 0.95.2 > > Attachments: 8939.txt > > > We have hanging tests. Here are a few from this morning's review: > {code} > durruti:0.95 stack$ ./dev-support/findHangingTest.sh > https://builds.apache.org/job/hbase-0.95-on-hadoop2/176/consoleText > % Total% Received % Xferd Average Speed TimeTime Time > Current > Dload Upload Total SpentLeft Speed > 100 3300k0 3300k0 0 508k 0 --:--:-- 0:00:06 --:--:-- 621k > Hanging test: Running org.apache.hadoop.hbase.TestIOFencing > Hanging test: Running org.apache.hadoop.hbase.regionserver.wal.TestLogRolling > {code} > And... 
> {code} > durruti:0.95 stack$ ./dev-support/findHangingTest.sh > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/396/consoleText > % Total% Received % Xferd Average Speed TimeTime Time > Current > Dload Upload Total SpentLeft Speed > 100 779k0 779k0 0 538k 0 --:--:-- 0:00:01 --:--:-- 559k > Hanging test: Running org.apache.hadoop.hbase.TestIOFencing > Hanging test: Running > org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort > Hanging test: Running org.apache.hadoop.hbase.client.TestFromClientSide3 > {code} > and > {code} > durruti:0.95 stack$ ./dev-support/findHangingTest.sh > http://54.241.6.143/job/HBase-0.95/607/consoleText > % Total% Received % Xferd Average Speed TimeTime Time > Current > Dload Upload Total SpentLeft Speed > 100 445k0 445k0 0 490k 0 --:--:-- --:--:-- --:--:-- 522k > Hanging test: Running > org.apache.hadoop.hbase.replication.TestReplicationDisableInactivePeer > Hanging test: Running org.apache.hadoop.hbase.master.TestAssignmentManager > Hanging test: Running org.apache.hadoop.hbase.util.TestHBaseFsck > Hanging test: Running > org.apache.hadoop.hbase.regionserver.TestStoreFileBlockCacheSummary > Hanging test: Running > org.apache.hadoop.hbase.IntegrationTestDataIngestSlowDeterministic > {code} > and... 
> {code} > durruti:0.95 stack$ ./dev-support/findHangingTest.sh > http://54.241.6.143/job/HBase-0.95-Hadoop-2/607/consoleText > % Total% Received % Xferd Average Speed TimeTime Time > Current > Dload Upload Total SpentLeft Speed > 100 781k0 781k0 0 240k 0 --:--:-- 0:00:03 --:--:-- 244k > Hanging test: Running > org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint > Hanging test: Running org.apache.hadoop.hbase.client.TestFromClientSide > Hanging test: Running org.apache.hadoop.hbase.TestIOFencing > Hanging test: Running > org.apache.hadoop.hbase.master.TestMasterFailoverBalancerPersistence > Hanging test: Running > org.apache.hadoop.hbase.master.TestDistributedLogSplitting > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9068) Make hadoop 2 the default precommit for trunk
[ https://issues.apache.org/jira/browse/HBASE-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722053#comment-13722053 ] stack commented on HBASE-9068: -- This initiative was someone else's [~ted_yu] You've been asked in the past not to file issues on 'behalf' of others, yet you continue to do so. Please stop. > Make hadoop 2 the default precommit for trunk > - > > Key: HBASE-9068 > URL: https://issues.apache.org/jira/browse/HBASE-9068 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.98.0 > > Attachments: 9068-v1.txt > > > Here is discussion thread: > http://search-hadoop.com/m/ggc1019WdVA/Making+hadoop+2+the+default+precommit&subj=Re+DISCUSS+Making+hadoop+2+the+default+precommit+for+trunk+ones+we+get+green+builds > Jenkins builds have been stable recently: > https://builds.apache.org/job/HBase-TRUNK/ > https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/ > We should run test suite against hadoop 2 in PreCommit build -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9068) Make hadoop 2 the default precommit for trunk
[ https://issues.apache.org/jira/browse/HBASE-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722047#comment-13722047 ] Hadoop QA commented on HBASE-9068: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12594607/9068-v1.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6503//console This message is automatically generated. 
> Make hadoop 2 the default precommit for trunk > - > > Key: HBASE-9068 > URL: https://issues.apache.org/jira/browse/HBASE-9068 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.98.0 > > Attachments: 9068-v1.txt > > > Here is discussion thread: > http://search-hadoop.com/m/ggc1019WdVA/Making+hadoop+2+the+default+precommit&subj=Re+DISCUSS+Making+hadoop+2+the+default+precommit+for+trunk+ones+we+get+green+builds > Jenkins builds have been stable recently: > https://builds.apache.org/job/HBase-TRUNK/ > https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/ > We should run test suite against hadoop 2 in PreCommit build -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9068) Make hadoop 2 the default precommit for trunk
[ https://issues.apache.org/jira/browse/HBASE-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9068: -- Attachment: 9068-v1.txt > Make hadoop 2 the default precommit for trunk > - > > Key: HBASE-9068 > URL: https://issues.apache.org/jira/browse/HBASE-9068 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 9068-v1.txt > > > Here is discussion thread: > http://search-hadoop.com/m/ggc1019WdVA/Making+hadoop+2+the+default+precommit&subj=Re+DISCUSS+Making+hadoop+2+the+default+precommit+for+trunk+ones+we+get+green+builds > Jenkins builds have been stable recently: > https://builds.apache.org/job/HBase-TRUNK/ > https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/ > We should run test suite against hadoop 2 in PreCommit build -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HBASE-9068) Make hadoop 2 the default precommit for trunk
[ https://issues.apache.org/jira/browse/HBASE-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-9068: - Assignee: Ted Yu > Make hadoop 2 the default precommit for trunk > - > > Key: HBASE-9068 > URL: https://issues.apache.org/jira/browse/HBASE-9068 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 9068-v1.txt > > > Here is discussion thread: > http://search-hadoop.com/m/ggc1019WdVA/Making+hadoop+2+the+default+precommit&subj=Re+DISCUSS+Making+hadoop+2+the+default+precommit+for+trunk+ones+we+get+green+builds > Jenkins builds have been stable recently: > https://builds.apache.org/job/HBase-TRUNK/ > https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/ > We should run test suite against hadoop 2 in PreCommit build -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9068) Make hadoop 2 the default precommit for trunk
[ https://issues.apache.org/jira/browse/HBASE-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9068: -- Fix Version/s: 0.98.0 Status: Patch Available (was: Open) > Make hadoop 2 the default precommit for trunk > - > > Key: HBASE-9068 > URL: https://issues.apache.org/jira/browse/HBASE-9068 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 0.98.0 > > Attachments: 9068-v1.txt > > > Here is discussion thread: > http://search-hadoop.com/m/ggc1019WdVA/Making+hadoop+2+the+default+precommit&subj=Re+DISCUSS+Making+hadoop+2+the+default+precommit+for+trunk+ones+we+get+green+builds > Jenkins builds have been stable recently: > https://builds.apache.org/job/HBase-TRUNK/ > https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/ > We should run test suite against hadoop 2 in PreCommit build -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8846) Revert the package name change for TableExistsException
[ https://issues.apache.org/jira/browse/HBASE-8846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722032#comment-13722032 ] Ted Yu commented on HBASE-8846: --- Looking at the tail of https://builds.apache.org/job/PreCommit-HBASE-Build/6502/console , it seems that there might have been a compilation error. Hence there was no post back by Hadoop QA. > Revert the package name change for TableExistsException > --- > > Key: HBASE-8846 > URL: https://issues.apache.org/jira/browse/HBASE-8846 > Project: HBase > Issue Type: Bug >Affects Versions: 0.95.0 >Reporter: Devaraj Das >Assignee: Devaraj Das > Fix For: 0.95.2 > > Attachments: 8846-1.txt > > > I was going through the code changes that were needed for getting an > application that was running with hbase-0.92 to run with hbase-0.95. > TableExistsException's package has changed - hence, it needs a code change in > the application. Offline discussion with some folks led us to believe that > this change can probably be reverted. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9032) Result.getBytes() returns null if backed by KeyValue array
[ https://issues.apache.org/jira/browse/HBASE-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722030#comment-13722030 ] Aditya Kishore commented on HBASE-9032: --- Jean-Marc, With my patch, r1.getBytes() will never be null (unless someone modifies it). What I was going for is verification that deserialization is in sync with serialization, i.e. you can create a valid Result object using the returned ImmutableBytesWritable and that both are equal. compareResults() throws an exception if the two Result objects are not equal. > Result.getBytes() returns null if backed by KeyValue array > -- > > Key: HBASE-9032 > URL: https://issues.apache.org/jira/browse/HBASE-9032 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.94.9 >Reporter: Aditya Kishore >Assignee: Aditya Kishore > Fix For: 0.94.11 > > Attachments: HBASE-9032.patch, HBASE-9032.patch, HBASE-9032.patch > > > This applies only to 0.94 (and earlier) branch. > If the Result object was constructed using either of Result(KeyValue[]) or > Result(List<KeyValue>), calling Result.getBytes() returns null instead of the > serialized ImmutableBytesWritable object. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9064) test-patch.sh would silently fail if compilation against hadoop 1.0 fails
[ https://issues.apache.org/jira/browse/HBASE-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722029#comment-13722029 ] Ted Yu commented on HBASE-9064: --- Integrated to trunk. Thanks for the review, Anoop. > test-patch.sh would silently fail if compilation against hadoop 1.0 fails > - > > Key: HBASE-9064 > URL: https://issues.apache.org/jira/browse/HBASE-9064 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Attachments: 9064-v1.txt > > > {code} > if [[ $? != 0 ]] ; then > JIRA_COMMENT="$JIRA_COMMENT > {color:red}-1 hadoop1.0{color}. The patch failed to compile against the > hadoop 1.0 profile." > cleanupAndExit 1 > fi > {code} > There is currently no post back to JIRA when there is compilation error. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8496) Implement tags and the internals of how a tag should look like
[ https://issues.apache.org/jira/browse/HBASE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-8496: -- Priority: Critical (was: Major) Making this critical because HBASE-6222 is blocked by it. RM: Please feel free to change this back if you feel otherwise. > Implement tags and the internals of how a tag should look like > -- > > Key: HBASE-8496 > URL: https://issues.apache.org/jira/browse/HBASE-8496 > Project: HBase > Issue Type: New Feature >Affects Versions: 0.98.0, 0.95.2 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Critical > Attachments: Comparison.pdf, HBASE-8496_2.patch, HBASE-8496.patch, > Tag design.pdf, Tag_In_KV_Buffer_For_reference.patch > > > The intent of this JIRA comes from HBASE-7897. > This would help us to decide on the structure and format of how the tags > should look like. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9064) test-patch.sh would silently fail if compilation against hadoop 1.0 fails
[ https://issues.apache.org/jira/browse/HBASE-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722020#comment-13722020 ] Anoop Sam John commented on HBASE-9064: --- +1 > test-patch.sh would silently fail if compilation against hadoop 1.0 fails > - > > Key: HBASE-9064 > URL: https://issues.apache.org/jira/browse/HBASE-9064 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Attachments: 9064-v1.txt > > > {code} > if [[ $? != 0 ]] ; then > JIRA_COMMENT="$JIRA_COMMENT > {color:red}-1 hadoop1.0{color}. The patch failed to compile against the > hadoop 1.0 profile." > cleanupAndExit 1 > fi > {code} > There is currently no post back to JIRA when there is compilation error. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9050) HBaseClient#call could hang
[ https://issues.apache.org/jira/browse/HBASE-9050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722017#comment-13722017 ] stack commented on HBASE-9050: -- [~jxiang] Yes > HBaseClient#call could hang > --- > > Key: HBASE-9050 > URL: https://issues.apache.org/jira/browse/HBASE-9050 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.94.10 >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: 0.94-9050.patch > > > In HBaseClient#call, we have > {code} > connection.sendParam(call); // send the parameter > boolean interrupted = false; > //noinspection SynchronizationOnLocalVariableOrMethodParameter > synchronized (call) { > while (!call.done) { > try { > call.wait(); // wait for the result > {code} > sendParam could do nothing if the connection is closed right after the call > is added into the queue. Since the connection is closed, we won't get any > response, therefore, we won't get any notify call. So we will keep waiting > here for something won't happen. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8846) Revert the package name change for TableExistsException
[ https://issues.apache.org/jira/browse/HBASE-8846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devaraj Das updated HBASE-8846: --- Attachment: 8846-1.txt Here you go [~stack]. This is untested but compiles. I'll let hadoopqa look at it now. > Revert the package name change for TableExistsException > --- > > Key: HBASE-8846 > URL: https://issues.apache.org/jira/browse/HBASE-8846 > Project: HBase > Issue Type: Bug >Affects Versions: 0.95.0 >Reporter: Devaraj Das > Fix For: 0.95.2 > > Attachments: 8846-1.txt > > > I was going through the code changes that were needed for getting an > application that was running with hbase-0.92 run with hbase-0.95. > TableExistsException's package has changed - hence, needs a code change in > the application. Offline discussion with some folks led us to believe that > this change can probably be reverted back. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8846) Revert the package name change for TableExistsException
[ https://issues.apache.org/jira/browse/HBASE-8846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devaraj Das updated HBASE-8846: --- Assignee: Devaraj Das Status: Patch Available (was: Open) > Revert the package name change for TableExistsException > --- > > Key: HBASE-8846 > URL: https://issues.apache.org/jira/browse/HBASE-8846 > Project: HBase > Issue Type: Bug >Affects Versions: 0.95.0 >Reporter: Devaraj Das >Assignee: Devaraj Das > Fix For: 0.95.2 > > Attachments: 8846-1.txt > > > I was going through the code changes that were needed for getting an > application that was running with hbase-0.92 run with hbase-0.95. > TableExistsException's package has changed - hence, needs a code change in > the application. Offline discussion with some folks led us to believe that > this change can probably be reverted back. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9068) Make hadoop 2 the default precommit for trunk
Ted Yu created HBASE-9068: - Summary: Make hadoop 2 the default precommit for trunk Key: HBASE-9068 URL: https://issues.apache.org/jira/browse/HBASE-9068 Project: HBase Issue Type: Test Reporter: Ted Yu Here is discussion thread: http://search-hadoop.com/m/ggc1019WdVA/Making+hadoop+2+the+default+precommit&subj=Re+DISCUSS+Making+hadoop+2+the+default+precommit+for+trunk+ones+we+get+green+builds Jenkins builds have been stable recently: https://builds.apache.org/job/HBase-TRUNK/ https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/ We should run test suite against hadoop 2 in PreCommit build -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9055) HBaseAdmin#isTableEnabled() should check table existence
[ https://issues.apache.org/jira/browse/HBASE-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721990#comment-13721990 ] Hadoop QA commented on HBASE-9055: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12594593/9055-v3.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6501//console This message is automatically generated. > HBaseAdmin#isTableEnabled() should check table existence > > > Key: HBASE-9055 > URL: https://issues.apache.org/jira/browse/HBASE-9055 > Project: HBase > Issue Type: Bug >Affects Versions: 0.95.1 >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 9055-v1.txt, 9055-v2.txt, 9055-v3.txt > > > Currently HBaseAdmin#isTableEnabled() returns true for a table which doesn't > exist. > We should check table existence. -- This message is automatically generated by JIRA. 
[jira] [Commented] (HBASE-8755) A new write thread model for HLog to improve the overall HBase write throughput
[ https://issues.apache.org/jira/browse/HBASE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721973#comment-13721973 ] Feng Honghua commented on HBASE-8755: - [~jmspaggi], thanks for your test, some questions about your test: Is it against real HDFS? how many data-nodes and RS? what's the write pressure(client number, write thread number)? what's the total throughput you get? Yes this jira aims for throughput improvement under write intensive load. It should be tested and verified under write intensive load against real cluster / HDFS environment. And as you can see this jira only refactors the write thread model rather than tuning any write sub-phase along the whole write path for any individual write request, no obvious improvement is expected for low/ordinary write pressure. If you have a real cluster environment with 4 data-nodes, it would be better to re-do the test chunhui/I did with the similar test configuration/load which are listed in detail in above comments. 1 client with 200 write threads is OK for pressing a single RS and 4 clients each with 200 write threads for pressing 4 RS. Thanks again. > A new write thread model for HLog to improve the overall HBase write > throughput > --- > > Key: HBASE-8755 > URL: https://issues.apache.org/jira/browse/HBASE-8755 > Project: HBase > Issue Type: Improvement > Components: wal >Reporter: Feng Honghua > Attachments: HBASE-8755-0.94-V0.patch, HBASE-8755-0.94-V1.patch, > HBASE-8755-trunk-V0.patch, HBASE-8755-trunk-V1.patch > > > In current write model, each write handler thread (executing put()) will > individually go through a full 'append (hlog local buffer) => HLog writer > append (write to hdfs) => HLog writer sync (sync hdfs)' cycle for each write, > which incurs heavy race condition on updateLock and flushLock. 
> The only optimization where checking if current syncTillHere > txid in > expectation for other thread help write/sync its own txid to hdfs and > omitting the write/sync actually help much less than expectation. > Three of my colleagues(Ye Hangjun / Wu Zesheng / Zhang Peng) at Xiaomi > proposed a new write thread model for writing hdfs sequence file and the > prototype implementation shows a 4X improvement for throughput (from 17000 to > 7+). > I apply this new write thread model in HLog and the performance test in our > test cluster shows about 3X throughput improvement (from 12150 to 31520 for 1 > RS, from 22000 to 7 for 5 RS), the 1 RS write throughput (1K row-size) > even beats the one of BigTable (Precolator published in 2011 says Bigtable's > write throughput then is 31002). I can provide the detailed performance test > results if anyone is interested. > The change for new write thread model is as below: > 1> All put handler threads append the edits to HLog's local pending buffer; > (it notifies AsyncWriter thread that there is new edits in local buffer) > 2> All put handler threads wait in HLog.syncer() function for underlying > threads to finish the sync that contains its txid; > 3> An single AsyncWriter thread is responsible for retrieve all the buffered > edits in HLog's local pending buffer and write to the hdfs > (hlog.writer.append); (it notifies AsyncFlusher thread that there is new > writes to hdfs that needs a sync) > 4> An single AsyncFlusher thread is responsible for issuing a sync to hdfs > to persist the writes by AsyncWriter; (it notifies the AsyncNotifier thread > that sync watermark increases) > 5> An single AsyncNotifier thread is responsible for notifying all pending > put handler threads which are waiting in the HLog.syncer() function > 6> No LogSyncer thread any more (since there is always > AsyncWriter/AsyncFlusher threads do the same job it does) -- This message is automatically generated by JIRA. 
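The six-step thread model above can be condensed into a runnable sketch. This is a hypothetical simplification under stated assumptions: the real patch uses three separate threads (AsyncWriter, AsyncFlusher, AsyncNotifier); here the write, flush, and notify roles are collapsed into one daemon thread purely to show the buffer handoff and the txid watermark that handler threads wait on. All names are illustrative, not the patch's actual identifiers.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class WalPipelineSketch {
    private final BlockingQueue<Long> pending = new LinkedBlockingQueue<>();
    private final AtomicLong syncedTillHere = new AtomicLong(0); // sync watermark
    private final AtomicLong nextTxid = new AtomicLong(0);
    private final Object syncedLock = new Object();

    // Step 1: handler threads only append to the local buffer; no HDFS work.
    long append(String edit) {
        long txid = nextTxid.incrementAndGet();
        pending.add(txid);
        return txid;
    }

    // Step 2: handler threads wait until the watermark covers their txid.
    void syncer(long txid) throws InterruptedException {
        synchronized (syncedLock) {
            while (syncedTillHere.get() < txid) {
                syncedLock.wait(); // woken when the watermark advances
            }
        }
    }

    // Steps 3-5 collapsed: drain the buffer, "persist", advance watermark,
    // notify all pending handler threads.
    void startWriter() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    long txid = pending.take();
                    // (real patch: hlog.writer.append + sync to HDFS here)
                    syncedTillHere.set(txid);
                    synchronized (syncedLock) { syncedLock.notifyAll(); }
                }
            } catch (InterruptedException ignored) { }
        });
        writer.setDaemon(true);
        writer.start();
    }

    public static void main(String[] args) throws InterruptedException {
        WalPipelineSketch wal = new WalPipelineSketch();
        wal.startWriter();
        long txid = wal.append("row1");
        wal.syncer(txid); // returns once the writer has covered txid
        System.out.println("synced " + txid);
    }
}
```

The throughput win comes from batching: many handler threads block cheaply on the watermark while a single thread amortizes the expensive append/sync cycle, instead of every handler contending on updateLock and flushLock per write.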
[jira] [Updated] (HBASE-9055) HBaseAdmin#isTableEnabled() should check table existence
[ https://issues.apache.org/jira/browse/HBASE-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-9055: -- Attachment: 9055-v3.txt Patch v3 aligns ZKTableReadOnly with logic of 0.94 > HBaseAdmin#isTableEnabled() should check table existence > > > Key: HBASE-9055 > URL: https://issues.apache.org/jira/browse/HBASE-9055 > Project: HBase > Issue Type: Bug >Affects Versions: 0.95.1 >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 9055-v1.txt, 9055-v2.txt, 9055-v3.txt > > > Currently HBaseAdmin#isTableEnabled() returns true for a table which doesn't > exist. > We should check table existence. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
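The behavior change under discussion — isTableEnabled() should not report true for a nonexistent table — can be sketched as follows. This is a hypothetical toy model, not HBaseAdmin code: the in-memory map stands in for the cluster's table state and all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class TableStateSketch {
    enum State { ENABLED, DISABLED }

    private final Map<String, State> tables = new HashMap<>();

    void createTable(String name) {
        tables.put(name, State.ENABLED);
    }

    // Check existence first; a missing table is an error, not "enabled".
    boolean isTableEnabled(String name) {
        State s = tables.get(name);
        if (s == null) {
            throw new IllegalArgumentException("Table " + name + " does not exist");
        }
        return s == State.ENABLED;
    }

    public static void main(String[] args) {
        TableStateSketch admin = new TableStateSketch();
        admin.createTable("t1");
        System.out.println(admin.isTableEnabled("t1")); // existing table: true
        try {
            admin.isTableEnabled("missing");
        } catch (IllegalArgumentException e) {
            System.out.println("existence checked"); // missing table rejected
        }
    }
}
```

Without the existence check, callers probing a typo'd or dropped table name silently get "enabled" and proceed against a table that is not there.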
[jira] [Commented] (HBASE-8755) A new write thread model for HLog to improve the overall HBase write throughput
[ https://issues.apache.org/jira/browse/HBASE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721960#comment-13721960 ] Jean-Marc Spaggiari commented on HBASE-8755: Ok. I did some other tries and here are the results. jmspaggi@hbasetest:~/hbase/hbase-$ cat output-1.1.2.txt 421428.8 jmspaggi@hbasetest:~/hbase/hbase-$ cat output-1.1.2-8755.txt 427172.1 jmspaggi@hbasetest:~/hbase/hbase-$ cat output-1.2.0.txt 419673.3 jmspaggi@hbasetest:~/hbase/hbase-$ cat output-1.2.0-8755.txt 432413.9 This is elapse time. Between each iteration I totally delete (rm -rf) the hadoop directories, stop all the java processes, etc. Test is 10M randomWrite. So unfortunately I have not been able to see any real improvement. For YCSB, any specific load I should run to be able to see something better that without 8755? I guess it's a write intensive load that we want? Also, I have tested this on a pseudo-distributed instance (no more a standalone one), but I can dedicate 4 nodes to a test if required... > A new write thread model for HLog to improve the overall HBase write > throughput > --- > > Key: HBASE-8755 > URL: https://issues.apache.org/jira/browse/HBASE-8755 > Project: HBase > Issue Type: Improvement > Components: wal >Reporter: Feng Honghua > Attachments: HBASE-8755-0.94-V0.patch, HBASE-8755-0.94-V1.patch, > HBASE-8755-trunk-V0.patch, HBASE-8755-trunk-V1.patch > > > In current write model, each write handler thread (executing put()) will > individually go through a full 'append (hlog local buffer) => HLog writer > append (write to hdfs) => HLog writer sync (sync hdfs)' cycle for each write, > which incurs heavy race condition on updateLock and flushLock. > The only optimization where checking if current syncTillHere > txid in > expectation for other thread help write/sync its own txid to hdfs and > omitting the write/sync actually help much less than expectation. 
> Three of my colleagues(Ye Hangjun / Wu Zesheng / Zhang Peng) at Xiaomi > proposed a new write thread model for writing hdfs sequence file and the > prototype implementation shows a 4X improvement for throughput (from 17000 to > 7+). > I apply this new write thread model in HLog and the performance test in our > test cluster shows about 3X throughput improvement (from 12150 to 31520 for 1 > RS, from 22000 to 7 for 5 RS), the 1 RS write throughput (1K row-size) > even beats the one of BigTable (Precolator published in 2011 says Bigtable's > write throughput then is 31002). I can provide the detailed performance test > results if anyone is interested. > The change for new write thread model is as below: > 1> All put handler threads append the edits to HLog's local pending buffer; > (it notifies AsyncWriter thread that there is new edits in local buffer) > 2> All put handler threads wait in HLog.syncer() function for underlying > threads to finish the sync that contains its txid; > 3> An single AsyncWriter thread is responsible for retrieve all the buffered > edits in HLog's local pending buffer and write to the hdfs > (hlog.writer.append); (it notifies AsyncFlusher thread that there is new > writes to hdfs that needs a sync) > 4> An single AsyncFlusher thread is responsible for issuing a sync to hdfs > to persist the writes by AsyncWriter; (it notifies the AsyncNotifier thread > that sync watermark increases) > 5> An single AsyncNotifier thread is responsible for notifying all pending > put handler threads which are waiting in the HLog.syncer() function > 6> No LogSyncer thread any more (since there is always > AsyncWriter/AsyncFlusher threads do the same job it does) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9050) HBaseClient#call could hang
[ https://issues.apache.org/jira/browse/HBASE-9050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721955#comment-13721955 ] Jimmy Xiang commented on HBASE-9050: [~saint@gmail.com], are you ok with 1000ms, since it is rare? > HBaseClient#call could hang > --- > > Key: HBASE-9050 > URL: https://issues.apache.org/jira/browse/HBASE-9050 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.94.10 >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Minor > Attachments: 0.94-9050.patch > > > In HBaseClient#call, we have > {code} > connection.sendParam(call); // send the parameter > boolean interrupted = false; > //noinspection SynchronizationOnLocalVariableOrMethodParameter > synchronized (call) { > while (!call.done) { > try { > call.wait(); // wait for the result > {code} > sendParam could do nothing if the connection is closed right after the call > is added into the queue. Since the connection is closed, we won't get any > response, therefore, we won't get any notify call. So we will keep waiting > here for something won't happen. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
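The fix direction implied by the 1000ms question — replace the unbounded call.wait() with a timed wait so a connection closed mid-flight cannot strand the caller forever — can be sketched like this. This is a hypothetical simplification, not the patch: Call is reduced to two flags and the method names are illustrative.

```java
public class TimedWaitSketch {
    static class Call {
        volatile boolean done;             // set when a response arrives
        volatile boolean connectionClosed; // set when the connection dies
    }

    // Returns true if the call completed, false if we gave up because the
    // connection died and no notify will ever come. The poll interval
    // mirrors the 1000ms value discussed above.
    static boolean awaitResult(Call call, long pollMillis) throws InterruptedException {
        synchronized (call) {
            while (!call.done) {
                if (call.connectionClosed) {
                    return false; // no response is coming; stop waiting
                }
                call.wait(pollMillis); // bounded wait instead of call.wait()
            }
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        Call c = new Call();
        c.connectionClosed = true; // simulate the race: closed before sendParam
        System.out.println(awaitResult(c, 10)); // prints false: caller unblocks
    }
}
```

Because the hang only occurs in a rare race, a coarse 1000ms poll costs almost nothing in the common case while bounding the worst case.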
[jira] [Commented] (HBASE-9035) Incorrect example for using a scan stopRow in HBase book
[ https://issues.apache.org/jira/browse/HBASE-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721954#comment-13721954 ] Jean-Marc Spaggiari commented on HBASE-9035: Hi [~saint@gmail.com], I'm fine with this. There is other ways to achieve the same thing, but at least that gives people some ideas on how to do it. > Incorrect example for using a scan stopRow in HBase book > > > Key: HBASE-9035 > URL: https://issues.apache.org/jira/browse/HBASE-9035 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Gabriel Reid > Attachments: HBASE-9035.patch > > > The example of how to use a stop row in a scan in the Section 5.7.3 of the > HBase book [1] is incorrect. It demonstrates using a start and stop row to > only retrieve records with a given prefix, creating the stop row by appending > a null byte to the start row. > This creates a scan that does not include any of the target rows, because the > the stop row is less than the target rows via lexicographical sorting. > [1] http://hbase.apache.org/book/data_model_operations.html#scan -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
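A correct exclusive stopRow for a prefix scan is obtained by incrementing the prefix's last non-0xFF byte, not by appending a null byte (which sorts before every row that carries the prefix plus additional bytes). The sketch below is a standalone illustration of that construction; the method name is illustrative, not an HBase API.

```java
import java.util.Arrays;

public class PrefixScanSketch {
    // Returns the smallest byte array that sorts after every row key
    // starting with 'prefix', suitable as an exclusive scan stopRow.
    static byte[] calculateStopRow(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;                         // bump last non-0xFF byte
                return Arrays.copyOf(stop, i + 1); // drop trailing 0xFF bytes
            }
            // a 0xFF byte rolls over: carry into the previous byte
        }
        return new byte[0]; // prefix was all 0xFF: scan to end of table
    }

    public static void main(String[] args) {
        byte[] stop = calculateStopRow("row".getBytes());
        System.out.println(new String(stop)); // prints rox
    }
}
```

With start row "row" and stop row "rox", every key of the form "row..." falls inside [start, stop); the book's "row\x00" stop row instead excludes them all, as the report explains.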
[jira] [Commented] (HBASE-8974) bin/rolling-restart.sh restarts all active RS's with each iteration instead of one at a time
[ https://issues.apache.org/jira/browse/HBASE-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721953#comment-13721953 ] Jean-Marc Spaggiari commented on HBASE-8974: I tailed the RS logs over a restart and there is only one restart displayed: {code} dimanche 28 juillet 2013, 09:17:02 (UTC-0400) Terminating regionserver 2013-07-28 09:17:02,208 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020 2013-07-28 09:17:02,208 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60020 2013-07-28 09:17:02,208 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60020: exiting 2013-07-28 09:17:02,208 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020: exiting 2013-07-28 09:17:02,208 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 
60020: exiting 2013-07-28 09:17:02,208 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60030 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020: exiting 2013-07-28 09:17:02,209 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020: exiting 2013-07-28 09:17:02,312 INFO org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x3400251e47305dc dimanche 28 juillet 2013, 09:17:03 (UTC-0400) Starting regionserver on node3 core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 93921 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 32768 pipe size(512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 93921 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited 2013-07-28 
09:17:03,676 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.10 2013-07-28 09:17:03,676 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/tags/0.94.10RC0 -r 1504995 2013-07-28 09:17:03,676 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Fri Jul 19 20:24:16 UTC 2013 2013-07-28 09:17:03,778 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=23.1-b03 2013-07-28 09:17:03,778 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmInputArguments=[-XX:OnOutOfMemoryError=kill -9 %p, -Xmx6196m, -XX:+UseConcMarkSweepGC, -XX:+UseConcMarkSweepGC, -Dhbase.log
[jira] [Comment Edited] (HBASE-8940) TestRegionMergeTransactionOnCluster#testWholesomeMerge may fail due to race in opening region
[ https://issues.apache.org/jira/browse/HBASE-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721935#comment-13721935 ] Jimmy Xiang edited comment on HBASE-8940 at 7/28/13 2:09 PM: - The above failed test is caused by {noformat} 2013-07-27 20:08:48,345 INFO [MASTER_TABLE_OPERATIONS-quirinus:59057-0] handler.DispatchMergingRegionHandler(170): Cancel merging regions testWholesomeMerge,,1374955684854.8b452e80a9e15d54bd265c344f4ad953., testWholesomeMerge,testRow002 0,1374955684854.4a31455f2b0256853c41c52ba65bdc10., because can't move them together after 30003ms {noformat} Timeout of moving regions together on same RS is caused by closing region "4a31455f2b0256853c41c52ba65bdc10" take more time than 30s, {noformat} 2013-07-27 20:08:18,753 DEBUG [RS_CLOSE_REGION-quirinus:57626-1] handler.CloseRegionHandler(125): Processing close of testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10. 2013-07-27 20:08:18,755 DEBUG [RS_CLOSE_REGION-quirinus:57626-1] regionserver.HRegion(1493): Started memstore flush for testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10., current region memstore size 3.4k 2013-07-27 20:09:03,914 INFO [RS_CLOSE_REGION-quirinus:57626-1] regionserver.HRegion(1637): Finished memstore flush of ~3.4k/3520, currentsize=0.0/0 for region testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc1 0. in 45159ms, sequenceid=186, compaction requested=false 2013-07-27 20:09:03,956 DEBUG [RS_CLOSE_REGION-quirinus:57626-1] handler.CloseRegionHandler(177): Closed region testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10. 
{noformat} >From the above logs, closing region took 45s, it seems flushing is too slow, >but can't get the reason from the current logs, maybe GC or high IO load at >that time was (Author: zjushch): The above failed test is caused by {format} 2013-07-27 20:08:48,345 INFO [MASTER_TABLE_OPERATIONS-quirinus:59057-0] handler.DispatchMergingRegionHandler(170): Cancel merging regions testWholesomeMerge,,1374955684854.8b452e80a9e15d54bd265c344f4ad953., testWholesomeMerge,testRow002 0,1374955684854.4a31455f2b0256853c41c52ba65bdc10., because can't move them together after 30003ms {format} Timeout of moving regions together on same RS is caused by closing region "4a31455f2b0256853c41c52ba65bdc10" take more time than 30s, {format} 2013-07-27 20:08:18,753 DEBUG [RS_CLOSE_REGION-quirinus:57626-1] handler.CloseRegionHandler(125): Processing close of testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10. 2013-07-27 20:08:18,755 DEBUG [RS_CLOSE_REGION-quirinus:57626-1] regionserver.HRegion(1493): Started memstore flush for testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10., current region memstore size 3.4k 2013-07-27 20:09:03,914 INFO [RS_CLOSE_REGION-quirinus:57626-1] regionserver.HRegion(1637): Finished memstore flush of ~3.4k/3520, currentsize=0.0/0 for region testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc1 0. in 45159ms, sequenceid=186, compaction requested=false 2013-07-27 20:09:03,956 DEBUG [RS_CLOSE_REGION-quirinus:57626-1] handler.CloseRegionHandler(177): Closed region testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10. 
{format} >From the above logs, closing region took 45s, it seems flushing is too slow, >but can't get the reason from the current logs, maybe GC or high IO load at >that time > TestRegionMergeTransactionOnCluster#testWholesomeMerge may fail due to race > in opening region > - > > Key: HBASE-8940 > URL: https://issues.apache.org/jira/browse/HBASE-8940 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: chunhui shen > Fix For: 0.95.2 > > Attachments: 8940-trunk-v2.patch, 8940-v1.txt, 8940v3.txt > > > From > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/org.apache.hbase$hbase-server/395/testReport/org.apache.hadoop.hbase.regionserver/TestRegionMergeTransactionOnCluster/testWholesomeMerge/ > : > {code} > 013-07-11 09:33:44,154 INFO [AM.ZK.Worker-pool-2-thread-2] > master.RegionStates(309): Offlined 3ffefd878a234031675de6b2c70b2ead from > ip-10-174-118-204.us-west-1.compute.internal,60498,1373535184820 > 2013-07-11 09:33:44,154 INFO [AM.ZK.Worker-pool-2-thread-2] > master.AssignmentManager$4(1223): The master has opened > testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead. > that was online on > ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 > 2013-07-11 09:33:44,182 DEBUG [RS_OPEN_REGION-ip-10-174-118-204:59210-1] > zookeeper.ZKA
[jira] [Commented] (HBASE-8974) bin/rolling-restart.sh restarts all active RS's with each iteration instead of one at a time
[ https://issues.apache.org/jira/browse/HBASE-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721947#comment-13721947 ] Jean-Marc Spaggiari commented on HBASE-8974: Trying right now. > bin/rolling-restart.sh restarts all active RS's with each iteration instead > of one at a time > > > Key: HBASE-8974 > URL: https://issues.apache.org/jira/browse/HBASE-8974 > Project: HBase > Issue Type: Bug > Components: scripts >Reporter: Nick Dimiduk > > I'm exercising the patch over on HBASE-8803 and I've noticed something in the > logs: it looks like {{rolling-restart.sh}} is restarting all the region > servers multiple times instead of just the current entry in the loop > iteration. > The logic looks like this: > {noformat} > for each rs in active region server list: > unload $rs // move all regions to other RS's > restart all Region Servers // !?! bug? > reload $rs // pile 'em back on > {noformat} > Shouldn't that step 2 be only {{restart $rs}}? > This is what I see in the logs. My cluster has 9 active RegionServers. Notice > the bit in the middle where all 9 are stopped and started again after > unloading the target RS. > {noformat} > $ time /usr/lib/hbase/bin/rolling-restart.sh --rs-only --graceful > --maxthreads 30 > > Gracefully restarting: hor18n39.gq1.ygridcore.net > Disabling balancer! > ... > Unloading hor18n39.gq1.ygridcore.net region(s) > ... 
> Valid region move targets: > hor18n37.gq1.ygridcore.net,60020,1374094975268 > hor17n37.gq1.ygridcore.net,60020,1374094975264 > hor18n35.gq1.ygridcore.net,60020,1374094975327 > hor17n39.gq1.ygridcore.net,60020,1374094975281 > hor18n36.gq1.ygridcore.net,60020,1374094975254 > hor17n36.gq1.ygridcore.net,60020,1374094975277 > hor17n34.gq1.ygridcore.net,60020,1374094975291 > hor18n38.gq1.ygridcore.net,60020,1374094975259 > 13/07/17 21:44:38 INFO region_mover: Moving 330 region(s) from > hor18n39.gq1.ygridcore.net,60020,1374094975326 during this cycle > 13/07/17 21:44:38 INFO region_mover: Moving region > b59050cf97aabcef838e3c50e93e6d13 (1 of 330) to > server=hor18n37.gq1.ygridcore.net,60020,1374094975268 > ... > 13/07/17 21:54:20 INFO region_mover: Moving region > d00026d7cc396bb3e6ea91106cc6ab55 (329 of 330) to > server=hor18n37.gq1.ygridcore.net,60020,1374094975268 > 13/07/17 21:54:20 INFO region_mover: Moving region > a722179b33e6ece8c9cee3fba3056acd (330 of 330) to > server=hor17n37.gq1.ygridcore.net,60020,1374094975264 > 13/07/17 21:54:21 INFO region_mover: Wrote list of moved regions to > /tmp/hor18n39.gq1.ygridcore.net > Unloaded hor18n39.gq1.ygridcore.net region(s) > hor18n35.gq1.ygridcore.net: stopping regionserver. > hor17n39.gq1.ygridcore.net: stopping regionserver. > hor18n36.gq1.ygridcore.net: stopping regionserver. > hor17n37.gq1.ygridcore.net: stopping regionserver. > hor17n34.gq1.ygridcore.net: stopping regionserver. > hor18n38.gq1.ygridcore.net: stopping regionserver. > hor18n37.gq1.ygridcore.net: stopping regionserver. > hor17n36.gq1.ygridcore.net: stopping regionserver. > hor18n39.gq1.ygridcore.net: stopping regionserver. 
> hor18n36.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n36.gq1.ygridcore.net.out > hor17n36.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor17n36.gq1.ygridcore.net.out > hor17n37.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor17n37.gq1.ygridcore.net.out > hor18n37.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n37.gq1.ygridcore.net.out > hor18n38.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n38.gq1.ygridcore.net.out > hor17n34.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor17n34.gq1.ygridcore.net.out > hor18n35.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n35.gq1.ygridcore.net.out > hor18n39.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor18n39.gq1.ygridcore.net.out > hor17n39.gq1.ygridcore.net: starting regionserver, logging to > /grid/0/var/log/hbase/hbase-hbase-regionserver-hor17n39.gq1.ygridcore.net.out > Reloading hor18n39.gq1.ygridcore.net region(s) > ... > 13/07/17 21:54:27 INFO region_mover: Moving 330 regions to > hor18n39.gq1.ygridcore.net,60020,1374098064602 > 13/07/17 21:56:47 INFO region_mover: Moving region > 7d0a02f452c334a12026b45346a87d36
[jira] [Commented] (HBASE-9032) Result.getBytes() returns null if backed by KeyValue array
[ https://issues.apache.org/jira/browse/HBASE-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721942#comment-13721942 ] Jean-Marc Spaggiari commented on HBASE-9032: Nice to have a test! Thanks [~adityakishore]. However, I would have preferred to use Assert.assertNotNull(r1.getBytes()); instead of "Result r2 = new Result(r1.getBytes());", because the only exception you are looking for is the NPE; if any other exception occurs you will still mark this test as failed. > Result.getBytes() returns null if backed by KeyValue array > -- > > Key: HBASE-9032 > URL: https://issues.apache.org/jira/browse/HBASE-9032 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.94.9 >Reporter: Aditya Kishore >Assignee: Aditya Kishore > Fix For: 0.94.11 > > Attachments: HBASE-9032.patch, HBASE-9032.patch, HBASE-9032.patch > > > This applies only to the 0.94 (and earlier) branch. > If the Result object was constructed using either Result(KeyValue[]) or > Result(List<KeyValue>), calling Result.getBytes() returns null instead of the > serialized ImmutableBytesWritable object. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
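The test style suggested above can be sketched in isolation. FakeResult below is a hypothetical stand-in for HBase's Result (it is not the actual class), used only to contrast asserting directly on the suspect value versus provoking an NPE indirectly:

```java
// Hypothetical stand-in for org.apache.hadoop.hbase.client.Result, used only
// to illustrate the two test shapes discussed in the comment above.
class FakeResult {
    private final byte[] serialized;
    FakeResult(byte[] serialized) { this.serialized = serialized; }
    byte[] getBytes() { return serialized; }
}

public class AssertStyle {
    // Direct style: the check fails if and only if getBytes() is null,
    // mirroring Assert.assertNotNull(r1.getBytes()).
    static boolean getBytesIsNonNull(FakeResult r) {
        return r.getBytes() != null;
    }

    public static void main(String[] args) {
        FakeResult r1 = new FakeResult(new byte[] {1, 2, 3});
        if (!getBytesIsNonNull(r1)) {
            throw new AssertionError("getBytes() returned null");
        }
        // The indirect style, new Result(r1.getBytes()), only fails via an NPE
        // inside the constructor, so any unrelated constructor exception would
        // be reported as the same test failure.
        System.out.println("direct assertion passed");
    }
}
```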
[jira] [Updated] (HBASE-9060) ExportSnapshot job fails if target path contains percentage character
[ https://issues.apache.org/jira/browse/HBASE-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-9060: --- Resolution: Fixed Fix Version/s: 0.94.11 0.98.0 Status: Resolved (was: Patch Available) committed to 0.94, 0.95, trunk, thanks for the patch > ExportSnapshot job fails if target path contains percentage character > - > > Key: HBASE-9060 > URL: https://issues.apache.org/jira/browse/HBASE-9060 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 0.95.1, 0.94.10 >Reporter: Jerry He >Assignee: Jerry He >Priority: Minor > Fix For: 0.98.0, 0.95.2, 0.94.11 > > Attachments: HBase-9060.patch > > > Here is the stack trace: > hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot > table1_snapshot -copy-to hdfs:///myhbase%2Cbackup/table1_snapshot > > {code} > 13/07/26 18:09:50 INFO mapred.JobClient: map 0% reduce 0% > 13/07/26 18:09:58 INFO mapred.JobClient: Task Id : > attempt_201307261804_0002_m_01_0, Status : FAILED > java.util.MissingFormatArgumentException: Format specifier ') from > family1/table1=3567d8ac6cfee83dfe81c346f139fb9c-c5bc120475a54d188f30d4b621d505b1 > to hdfs:/myhbase%2C' > at java.util.Formatter.getArgument(Formatter.java:592) > at java.util.Formatter.format(Formatter.java:561) > at java.util.Formatter.format(Formatter.java:510) > at java.lang.String.format(String.java:1977) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.copyData(ExportSnapshot.java:274) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.copyFile(ExportSnapshot.java:204) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:149) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:98) > {code} > The problem is this code in copyData(): > {code} > final String statusMessage = "copied %s/" + > StringUtils.humanReadableInt(inputFileSize) + >" (%.3f%%) from " + inputPath + " to " + > 
outputPath; > {code} > Since we don't know what characters the path may contain that could confuse the formatter, > we need to pull that part out of the format string. > Also the percentage-completion math seems to be wrong in the same code.
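The failure mode is easy to reproduce with plain String.format, independent of HBase: any '%' sequence in the interpolated path is parsed as a format specifier. Below is a minimal sketch of the fix the description suggests (the method names are illustrative, not the actual ExportSnapshot code):

```java
import java.util.Locale;
import java.util.MissingFormatArgumentException;

public class StatusFormat {
    // Broken shape: the path is spliced into the format string itself, so
    // "%2C" in the path is parsed as a format specifier needing an argument.
    static String unsafe(String path, long copied, double pct) {
        return String.format(Locale.ROOT,
                "copied %d (%.3f%%) from " + path, copied, pct);
    }

    // Fixed shape: the path is passed as an ordinary %s argument, so its
    // contents never reach the format parser.
    static String safe(String path, long copied, double pct) {
        return String.format(Locale.ROOT,
                "copied %d (%.3f%%) from %s", copied, pct, path);
    }

    public static void main(String[] args) {
        String path = "hdfs:/myhbase%2Cbackup/table1_snapshot";
        try {
            unsafe(path, 100, 1.5);
        } catch (MissingFormatArgumentException e) {
            // Same exception class as in the stack trace above.
            System.out.println("unsafe form threw: " + e);
        }
        System.out.println(safe(path, 100, 1.5));
    }
}
```

"%2C" happens to parse as a valid specifier (width 2, conversion 'C') with no matching argument, which is why the job dies with MissingFormatArgumentException rather than a parse error.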
[jira] [Updated] (HBASE-9060) ExportSnapshot job fails if target path contains percentage character
[ https://issues.apache.org/jira/browse/HBASE-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-9060: --- Summary: ExportSnapshot job fails if target path contains percentage character (was: EXportSnapshot job fails if target path contains percentage character) > ExportSnapshot job fails if target path contains percentage character > - > > Key: HBASE-9060 > URL: https://issues.apache.org/jira/browse/HBASE-9060 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 0.95.1, 0.94.10 >Reporter: Jerry He >Assignee: Jerry He >Priority: Minor > Fix For: 0.95.2 > > Attachments: HBase-9060.patch > > > Here is the stack trace: > hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot > table1_snapshot -copy-to hdfs:///myhbase%2Cbackup/table1_snapshot > > {code} > 13/07/26 18:09:50 INFO mapred.JobClient: map 0% reduce 0% > 13/07/26 18:09:58 INFO mapred.JobClient: Task Id : > attempt_201307261804_0002_m_01_0, Status : FAILED > java.util.MissingFormatArgumentException: Format specifier ') from > family1/table1=3567d8ac6cfee83dfe81c346f139fb9c-c5bc120475a54d188f30d4b621d505b1 > to hdfs:/myhbase%2C' > at java.util.Formatter.getArgument(Formatter.java:592) > at java.util.Formatter.format(Formatter.java:561) > at java.util.Formatter.format(Formatter.java:510) > at java.lang.String.format(String.java:1977) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.copyData(ExportSnapshot.java:274) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.copyFile(ExportSnapshot.java:204) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:149) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:98) > {code} > The problem is this code in copyData(): > {code} > final String statusMessage = "copied %s/" + > StringUtils.humanReadableInt(inputFileSize) + >" (%.3f%%) from " + inputPath + " to " + > 
outputPath; > {code} > Since we don't know what the path may contain that may confuse the formatter, > we need to pull that part out of the format string. > Also the percentage completion math seems to be wrong in the same code.
[jira] [Commented] (HBASE-9055) HBaseAdmin#isTableEnabled() should check table existence
[ https://issues.apache.org/jira/browse/HBASE-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721938#comment-13721938 ] chunhui shen commented on HBASE-9055: - In the 0.94 version, isTableEnabled() returns false for a table which doesn't exist, but 0.95 returns true; the related code is the following: 0.95 {code} ZKTableReadOnly static ZooKeeperProtos.Table.State getTableState(final ZooKeeperWatcher zkw, final String child) throws KeeperException { String znode = ZKUtil.joinZNode(zkw.tableZNode, child); byte [] data = ZKUtil.getData(zkw, znode); if (data == null || data.length <= 0) return ZooKeeperProtos.Table.State.ENABLED; {code} 0.94 {code} static TableState getTableState(final ZooKeeperWatcher zkw, final String child) throws KeeperException { String znode = ZKUtil.joinZNode(zkw.tableZNode, child); byte [] data = ZKUtil.getData(zkw, znode); if (data == null || data.length <= 0) { // Null if table is enabled. return null; } {code} I don't know why the return value for the null case was changed in the above code. If the change is OK, +1 on the patch. > HBaseAdmin#isTableEnabled() should check table existence > > > Key: HBASE-9055 > URL: https://issues.apache.org/jira/browse/HBASE-9055 > Project: HBase > Issue Type: Bug >Affects Versions: 0.95.1 >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 9055-v1.txt, 9055-v2.txt > > > Currently HBaseAdmin#isTableEnabled() returns true for a table which doesn't > exist. > We should check table existence.
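The behavior difference being reported can be modeled with a simplified sketch (this is illustrative only, not the actual HBaseAdmin or ZKTableReadOnly code, and the exception type in the checked variant is an assumption): when "no state data" collapses to ENABLED, a table that doesn't exist at all reads as enabled.

```java
import java.util.HashSet;
import java.util.Set;

public class TableEnabledModel {
    // Illustrative state: which tables exist, and which of them are disabled.
    static final Set<String> existing = new HashSet<>();
    static final Set<String> disabled = new HashSet<>();

    // 0.95-style answer as described in the comment: absence of state is
    // treated as ENABLED, so a nonexistent table reports as enabled.
    static boolean isTableEnabled095(String table) {
        return !disabled.contains(table);
    }

    // The shape the issue asks for: verify existence before answering.
    static boolean isTableEnabledChecked(String table) {
        if (!existing.contains(table)) {
            throw new IllegalArgumentException("Table does not exist: " + table);
        }
        return !disabled.contains(table);
    }

    public static void main(String[] args) {
        existing.add("t1");
        System.out.println(isTableEnabled095("no_such_table"));  // the reported bug: true
        System.out.println(isTableEnabledChecked("t1"));         // true, and t1 exists
    }
}
```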
[jira] [Commented] (HBASE-8940) TestRegionMergeTransactionOnCluster#testWholesomeMerge may fail due to race in opening region
[ https://issues.apache.org/jira/browse/HBASE-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721935#comment-13721935 ] chunhui shen commented on HBASE-8940: - The above test failure is caused by {format} 2013-07-27 20:08:48,345 INFO [MASTER_TABLE_OPERATIONS-quirinus:59057-0] handler.DispatchMergingRegionHandler(170): Cancel merging regions testWholesomeMerge,,1374955684854.8b452e80a9e15d54bd265c344f4ad953., testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10., because can't move them together after 30003ms {format} The timeout of moving the regions together onto the same RS is caused by the close of region "4a31455f2b0256853c41c52ba65bdc10" taking more than 30s: {format} 2013-07-27 20:08:18,753 DEBUG [RS_CLOSE_REGION-quirinus:57626-1] handler.CloseRegionHandler(125): Processing close of testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10. 2013-07-27 20:08:18,755 DEBUG [RS_CLOSE_REGION-quirinus:57626-1] regionserver.HRegion(1493): Started memstore flush for testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10., current region memstore size 3.4k 2013-07-27 20:09:03,914 INFO [RS_CLOSE_REGION-quirinus:57626-1] regionserver.HRegion(1637): Finished memstore flush of ~3.4k/3520, currentsize=0.0/0 for region testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10. in 45159ms, sequenceid=186, compaction requested=false 2013-07-27 20:09:03,956 DEBUG [RS_CLOSE_REGION-quirinus:57626-1] handler.CloseRegionHandler(177): Closed region testWholesomeMerge,testRow0020,1374955684854.4a31455f2b0256853c41c52ba65bdc10. 
{format} From the above logs, closing the region took 45s; it seems flushing was too slow, but we can't get the reason from the current logs, maybe GC or high IO load at that time. > TestRegionMergeTransactionOnCluster#testWholesomeMerge may fail due to race > in opening region > - > > Key: HBASE-8940 > URL: https://issues.apache.org/jira/browse/HBASE-8940 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: chunhui shen > Fix For: 0.95.2 > > Attachments: 8940-trunk-v2.patch, 8940-v1.txt, 8940v3.txt > > > From > http://54.241.6.143/job/HBase-TRUNK-Hadoop-2/org.apache.hbase$hbase-server/395/testReport/org.apache.hadoop.hbase.regionserver/TestRegionMergeTransactionOnCluster/testWholesomeMerge/ > : > {code} > 2013-07-11 09:33:44,154 INFO [AM.ZK.Worker-pool-2-thread-2] > master.RegionStates(309): Offlined 3ffefd878a234031675de6b2c70b2ead from > ip-10-174-118-204.us-west-1.compute.internal,60498,1373535184820 > 2013-07-11 09:33:44,154 INFO [AM.ZK.Worker-pool-2-thread-2] > master.AssignmentManager$4(1223): The master has opened > testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead. 
> that was online on > ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 > 2013-07-11 09:33:44,182 DEBUG [RS_OPEN_REGION-ip-10-174-118-204:59210-1] > zookeeper.ZKAssign(862): regionserver:59210-0x13fcd13a20c0002 Successfully > transitioned node 3ffefd878a234031675de6b2c70b2ead from RS_ZK_REGION_OPENING > to RS_ZK_REGION_OPENED > 2013-07-11 09:33:44,182 INFO > [MASTER_TABLE_OPERATIONS-ip-10-174-118-204:39405-0] > handler.DispatchMergingRegionHandler(154): Failed send MERGE REGIONS RPC to > server ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 for > region > testWholesomeMerge,,1373535210124.efcb10dcfa250e31bfd50dc6c7049f32.,testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead., > focible=false, org.apache.hadoop.hbase.exceptions.RegionOpeningException: > Region is being opened: 3ffefd878a234031675de6b2c70b2ead > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2566) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3862) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.mergeRegions(HRegionServer.java:3649) > at > org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:14400) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2124) > at > org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1831) > 2013-07-11 09:33:44,182 DEBUG [RS_OPEN_REGION-ip-10-174-118-204:59210-1] > handler.OpenRegionHandler(373): region transitioned to opened in zookeeper: > {ENCODED => 3ffefd878a234031675de6b2c70b2ead, NAME => > 'testWholesomeMerge,testRow0020,1373535210125.3ffefd878a234031675de6b2c70b2ead.', > STARTKEY => 'testRow0020', ENDKEY => 'testRow0040'}, server: > ip-10-174-118-204.us-west-1.compute.internal,59210,1373535184884 > 2013-07-11
[jira] [Updated] (HBASE-9032) Result.getBytes() returns null if backed by KeyValue array
[ https://issues.apache.org/jira/browse/HBASE-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aditya Kishore updated HBASE-9032: -- Attachment: HBASE-9032.patch Updating the patch with a test case. > Result.getBytes() returns null if backed by KeyValue array > -- > > Key: HBASE-9032 > URL: https://issues.apache.org/jira/browse/HBASE-9032 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 0.94.9 >Reporter: Aditya Kishore >Assignee: Aditya Kishore > Fix For: 0.94.11 > > Attachments: HBASE-9032.patch, HBASE-9032.patch, HBASE-9032.patch > > > This applies only to the 0.94 (and earlier) branch. > If the Result object was constructed using either Result(KeyValue[]) or > Result(List<KeyValue>), calling Result.getBytes() returns null instead of the > serialized ImmutableBytesWritable object.
[jira] [Commented] (HBASE-9031) ImmutableBytesWritable.toString() should downcast the bytes before converting to hex string
[ https://issues.apache.org/jira/browse/HBASE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721926#comment-13721926 ] Aditya Kishore commented on HBASE-9031: --- [~stack] I could not find a test or code which invokes this function except [here|http://svn.apache.org/viewvc/hbase/tags/0.95.1/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SimpleTotalOrderPartitioner.java?view=markup#l119]. But this is mighty useful while debugging issues like HBASE-9032. BTW, the test results are unrelated. The test "org.apache.hadoop.hbase.TestDrainingServer" passes locally with the patch applied over https://svn.apache.org/repos/asf/hbase/trunk @revision 1507766, while "org.apache.hadoop.hdfs.TestModTime.testModTimePersistsAfterRestart" is a little perplexing, as this is not even an HBase unit test. > ImmutableBytesWritable.toString() should downcast the bytes before converting > to hex string > --- > > Key: HBASE-9031 > URL: https://issues.apache.org/jira/browse/HBASE-9031 > Project: HBase > Issue Type: Bug > Components: io >Affects Versions: 0.95.1, 0.94.9 >Reporter: Aditya Kishore >Assignee: Aditya Kishore >Priority: Minor > Fix For: 0.95.2 > > Attachments: HBASE-9031.patch, HBASE-9031.patch > > > The attached patch addresses a few issues. > # We need only (3*this.length) capacity in ByteBuffer and not > (3*this.bytes.length). > # Do not calculate (offset + length) at every iteration. > # No test is required at every iteration to add space (' ') before every byte > other than the first one. Uses {{sb.substring(1)}} instead. > # Finally and most importantly (the original issue of this report), downcast > the promoted int (the parameter to {{Integer.toHexString()}}) to byte range. > Without #4, the byte array \{54,125,64, -1, -45\} is transformed to "36 7d 40 > ffd3" instead of "36 7d 40 ff d3".
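The four points above can be shown with a self-contained sketch of the corrected conversion (this mirrors the logic being discussed, not the exact ImmutableBytesWritable code):

```java
public class HexToString {
    // Sketch of the corrected conversion: mask each byte back to 0..255 before
    // Integer.toHexString, so int promotion can't sign-extend negative bytes.
    static String toHexString(byte[] bytes, int offset, int length) {
        StringBuilder sb = new StringBuilder(3 * length); // point #1: 3 chars per byte
        int end = offset + length;                        // point #2: computed once
        for (int i = offset; i < end; i++) {
            sb.append(' ');                               // point #3: leading space, trimmed below
            sb.append(Integer.toHexString(0xFF & bytes[i])); // point #4: downcast to byte range
        }
        return sb.length() == 0 ? "" : sb.substring(1);
    }

    public static void main(String[] args) {
        byte[] data = {54, 125, 64, -1, -45};
        // Without the 0xFF mask, -1 promotes to int -1 and prints as "ffffffff".
        System.out.println(toHexString(data, 0, data.length)); // 36 7d 40 ff d3
    }
}
```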