[jira] [Updated] (HBASE-8026) HBase Shell docs for scan command don't reference VERSIONS
[ https://issues.apache.org/jira/browse/HBASE-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amit Kabra updated HBASE-8026:
------------------------------
    Fix Version/s: 0.98.8
           Status: Patch Available  (was: In Progress)

HBase Shell docs for scan command don't reference VERSIONS
                Key: HBASE-8026
                URL: https://issues.apache.org/jira/browse/HBASE-8026
            Project: HBase
         Issue Type: Bug
           Reporter: Jonathan Natkins
           Assignee: Amit Kabra
             Labels: beginner
            Fix For: 0.98.8
        Attachments: HBASE-8026.patch

hbase(main):046:0> help 'scan'
Scan a table; pass table name and optionally a dictionary of scanner specifications. Scanner specifications may include one or more of: TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH, COLUMNS, or CACHE.

VERSIONS should be mentioned somewhere here.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HBASE-8026) HBase Shell docs for scan command don't reference VERSIONS
[ https://issues.apache.org/jira/browse/HBASE-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HBASE-8026 started by Amit Kabra.

HBase Shell docs for scan command don't reference VERSIONS
                Key: HBASE-8026
                URL: https://issues.apache.org/jira/browse/HBASE-8026
[jira] [Updated] (HBASE-8026) HBase Shell docs for scan command don't reference VERSIONS
[ https://issues.apache.org/jira/browse/HBASE-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amit Kabra updated HBASE-8026:
------------------------------
    Attachment: HBASE-8026.patch

HBase Shell docs for scan command don't reference VERSIONS
                Key: HBASE-8026
                URL: https://issues.apache.org/jira/browse/HBASE-8026
[jira] [Updated] (HBASE-7013) Avoid reusing an input stream that stumbles on rpc-timeout in HBaseClient
[ https://issues.apache.org/jira/browse/HBASE-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-7013:
-------------------------
    Priority: Critical  (was: Major)

Avoid reusing an input stream that stumbles on rpc-timeout in HBaseClient
                Key: HBASE-7013
                URL: https://issues.apache.org/jira/browse/HBASE-7013
            Project: HBase
         Issue Type: Bug
         Components: Client
           Reporter: Hiroshi Ikeda
           Priority: Critical
             Labels: delete
            Fix For: 2.0.0

HBASE-2937 introduced rpc-timeout and sets the SO_TIMEOUT parameter of the socket so that SocketTimeoutException is thrown. That means the exception can be thrown from any code that reads data directly or indirectly from the socket. If the exception is thrown in the middle of reading a set of data, the client would have to drain and discard the remainder of that data from the socket before it could read the next message and reuse the socket. That seems difficult, and I can't find such recovery code in HBaseClient. I think that when the IO streams wrapping the socket throw an exception, the enclosing connection instance should be discarded, and rpc-timeout should be handled separately from the SO_TIMEOUT parameter.
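The reporter's point is easiest to see with a toy length-prefixed protocol: once a read times out mid-frame, the stream position is undefined and the connection cannot safely be reused. A minimal Python sketch of that failure mode (this is invented illustration, not HBase's actual RPC framing or client code):

```python
import io
import struct

def frame(payload):
    """Length-prefixed frame: 4-byte big-endian length, then the body."""
    return struct.pack(">I", len(payload)) + payload

def read_frame(stream):
    """Read one frame from a blocking stream."""
    header = stream.read(4)
    if len(header) < 4:
        raise EOFError("truncated header")
    (length,) = struct.unpack(">I", header)
    body = stream.read(length)
    if len(body) < length:
        # Stand-in for SocketTimeoutException: part of the frame was
        # consumed; the rest is still in flight on the socket.
        raise TimeoutError("mid-frame timeout: %d/%d bytes" % (len(body), length))
    return body

wire = frame(b"ping") + frame(b"pong")

# Only 14 of 16 bytes have arrived: frame 1 is whole, frame 2 cut mid-body.
stream = io.BytesIO(wire[:14])
assert read_frame(stream) == b"ping"
timed_out = False
try:
    read_frame(stream)
except TimeoutError:
    timed_out = True          # 2 of 4 body bytes were already consumed
assert timed_out

# If the client keeps the stream, the late bytes (b"ng") desync framing:
# they get parsed as the start of the next frame's length header.
late = io.BytesIO(wire[14:] + frame(b"next"))
(bogus_len,) = struct.unpack(">I", late.read(4))
assert bogus_len != 4        # garbage length; the connection must be discarded
```

This is why the suggestion in the issue is to discard the whole connection on a stream exception rather than attempt to resynchronize.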
[jira] [Commented] (HBASE-8026) HBase Shell docs for scan command don't reference VERSIONS
[ https://issues.apache.org/jira/browse/HBASE-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272432#comment-14272432 ]

Hadoop QA commented on HBASE-8026:
----------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12691486/HBASE-8026.patch
against the master branch at commit 988cba762a6d99b4b512ff97a607891f4b82f7dc.

ATTACHMENT ID: 12691486

    {color:green}+1 @author{color}. The patch does not contain any @author tags.
    {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
    {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
    {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors.
    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.
    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
    {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
    {color:green}+1 site{color}. The mvn site goal succeeds with this patch.
    {color:green}+1 core tests{color}. The patch passed unit tests.
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12399//testReport/
Findbugs warnings:
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12399//artifact/patchprocess/checkstyle-aggregate.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/12399//console

This message is automatically generated.
HBase Shell docs for scan command don't reference VERSIONS
                Key: HBASE-8026
                URL: https://issues.apache.org/jira/browse/HBASE-8026
[jira] [Commented] (HBASE-12798) Map Reduce jobs should not create Tables in setConf()
[ https://issues.apache.org/jira/browse/HBASE-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272565#comment-14272565 ]

Hadoop QA commented on HBASE-12798:
-----------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12691479/12798-3.patch
against the master branch at commit 988cba762a6d99b4b512ff97a607891f4b82f7dc.

ATTACHMENT ID: 12691479

    {color:green}+1 @author{color}. The patch does not contain any @author tags.
    {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
    {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
    {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors.
    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.
    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
    {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
    {color:green}+1 site{color}. The mvn site goal succeeds with this patch.
    {color:green}+1 core tests{color}. The patch passed unit tests.
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12400//testReport/
Findbugs warnings:
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
  https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12400//artifact/patchprocess/checkstyle-aggregate.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/12400//console
Map Reduce jobs should not create Tables in setConf()
                Key: HBASE-12798
                URL: https://issues.apache.org/jira/browse/HBASE-12798
            Project: HBase
         Issue Type: Bug
           Reporter: Solomon Duskis
           Assignee: Solomon Duskis
            Fix For: 1.0.0, 2.0.0
        Attachments: 12798-3.patch, HBASE-12798-2.patch, HBASE-12798.patch

setConf() gets called in many places along the Map/Reduce chain, and HBase creates Tables and other resources in setConf() in a few places. There should be a better place to create the Table, often while creating a TableRecordReader. This issue will tackle the following classes:

- TableOutputFormatBase
- TableInputFormatBase
- TableInputFormat

Other classes that could use a look over:

- HRegionPartitioner
- ReplicationLogCleaner
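The direction the issue describes, deferring Table construction from setConf() to the point of first real use such as record-reader creation, can be sketched abstractly. A hypothetical Python model of the pattern (invented names; not the actual Java patch):

```python
class EagerInputFormat:
    """Anti-pattern: build the expensive Table in setConf(), which the
    Map/Reduce machinery may call several times along the chain."""
    created = 0

    def setConf(self, conf):
        self.conf = conf
        EagerInputFormat.created += 1   # a new Table (connection) each call
        self.table = object()

class LazyInputFormat:
    """Sketch of the fix: defer construction to first real use, e.g. when
    the record reader is created."""
    created = 0

    def setConf(self, conf):
        self.conf = conf
        self._table = None

    def get_table(self):
        if self._table is None:
            LazyInputFormat.created += 1
            self._table = object()
        return self._table

eager, lazy = EagerInputFormat(), LazyInputFormat()
for _ in range(3):              # "setConf() gets called in many places"
    eager.setConf({})
    lazy.setConf({})
lazy.get_table()                # the one real use
assert EagerInputFormat.created == 3
assert LazyInputFormat.created == 1
```

The eager variant pays for (and potentially leaks) a resource per setConf() call; the lazy variant pays exactly once, when the job actually needs the table.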
[jira] [Commented] (HBASE-8329) Limit compaction speed
[ https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272514#comment-14272514 ]

Ted Yu commented on HBASE-8329:
-------------------------------
{code}
+@InterfaceAudience.Private
+public interface ThroughputController {
{code}
The LimitedPrivate annotation should be used here, and likewise for the controllers which implement the interface. Please add javadoc for each method.
{code}
+  void cancelupThroughputTuner();
{code}
Remove 'up' in the name of the above method.

Limit compaction speed
                Key: HBASE-8329
                URL: https://issues.apache.org/jira/browse/HBASE-8329
            Project: HBase
         Issue Type: Improvement
         Components: Compaction
           Reporter: binlijin
           Assignee: zhangduo
            Fix For: 2.0.0, 1.1.0
        Attachments: HBASE-8329-10.patch, HBASE-8329-11.patch, HBASE-8329-12.patch, HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, HBASE-8329-trunk.patch, HBASE-8329_13.patch

There is no speed or resource limit for compaction. I think we should add this feature, especially when requests burst.
[jira] [Comment Edited] (HBASE-8329) Limit compaction speed
[ https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272514#comment-14272514 ]

Ted Yu edited comment on HBASE-8329 at 1/10/15 2:21 PM:
--------------------------------------------------------
{code}
+@InterfaceAudience.Private
+public interface ThroughputController {
{code}
The LimitedPrivate annotation should be used here, and likewise for the controllers which implement the interface. Please add javadoc for each method.
{code}
+  void cancelupThroughputTuner();
{code}
Remove 'up' in the name of the above method.
{code}
+  public void cancelupThroughputTuner() {
+    stop("Cancel Throughput Tuner");
{code}
Since the cancel method calls stop(), is cancelThroughputTuner() needed?

was (Author: yuzhih...@gmail.com): the same comment, without the closing question about cancelThroughputTuner().

Limit compaction speed
                Key: HBASE-8329
                URL: https://issues.apache.org/jira/browse/HBASE-8329
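For readers following the review, a throughput controller of the kind the ThroughputController interface implies is commonly a token bucket: a compaction writer asks permission for N bytes and is told how long to wait so the byte rate stays under a budget. A rough, hypothetical Python sketch (class and method names invented; HBase's actual implementation differs):

```python
import time

class TokenBucketThroughputController:
    """Token-bucket limiter: control(nbytes) returns how long the caller
    should sleep before writing nbytes, given a bytes-per-second budget."""
    def __init__(self, bytes_per_sec, clock=time.monotonic):
        self.rate = float(bytes_per_sec)
        self.clock = clock
        self.available = self.rate      # start with one second's budget
        self.last = clock()

    def control(self, nbytes):
        now = self.clock()
        # accrue budget for elapsed time, capped at one second's worth
        self.available = min(self.rate,
                             self.available + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.available:
            self.available -= nbytes
            return 0.0
        wait = (nbytes - self.available) / self.rate
        self.available = 0.0
        self.last = now + wait          # caller is expected to sleep `wait`
        return wait

# Deterministic demo with a fake clock instead of real time:
t = [0.0]
ctl = TokenBucketThroughputController(bytes_per_sec=1000, clock=lambda: t[0])
assert ctl.control(500) == 0.0      # within budget: no wait
wait = ctl.control(800)             # 300 bytes over budget: wait 0.3s
assert abs(wait - 0.3) < 1e-9
t[0] = 2.0                          # later: budget has refilled
assert ctl.control(1000) == 0.0
```

Injecting the clock keeps the limiter testable; real code would call time.sleep() with the returned wait before writing.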
[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask
[ https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-12782:
--------------------------
    Attachment: 12782.unit.test.writing.txt

Focusing on the write side first. Debugging, the emission at the end of the verify step is of no use. I find that I have to go into the reduce logging to find these log lines from ITBLL:

  LOG.error("Linked List error: Key = " + keyString + " References = " + refsSb.toString());

I then take the 'References' record and do a get on it. It is the 'meta:previous' that is 'missing'. This missing record will have been 'written' as part of the previous 1M writes, at 'count' - 1M. The time on this record will be a timestamp that is '1M' ahead of when the 'missing' record would have been written (usually about 15 seconds per 1M, but if a server is down, writing the 1M can take minutes).

The ITBLL rows have too many unprintable characters -- quotes, single ticks, left braces, etc. -- to make for easy scripting. Tried, but it's kinda tough bridging 'text' output -- escaped bytes -- jruby and java. Spent some time trying to write rows with printable records, but that seems to make for more failures; need to spend time on this... as is, it's hard to script ITBLL failures so as to get a 'bigger picture' of the failure profile.

Another issue. I've disabled killing the master and splits to make things easier for myself. We still fail reliably. I can triangulate a little looking at a few failed records and have identified suspicious-looking write periods as asyncprocess tries to cross over a failed regionserver. The attached test reproduces the same logging sequence in a unit test (was trying to narrow the moving parts around a failure) that I see up in the cluster, but it looks like asyncprocess is not the issue; its accounting doesn't seem to be hiccuping. Let me redo this test as an integration test to run against the cluster to be sure -- perhaps it's a timing thing that's hard to repro in the one JVM -- but it doesn't look like the write side is the issue.
Dang.

ITBLL fails for me if generator does anything but 5M per maptask
                Key: HBASE-12782
                URL: https://issues.apache.org/jira/browse/HBASE-12782
            Project: HBase
         Issue Type: Bug
         Components: integration tests
   Affects Versions: 1.0.0
           Reporter: stack
           Priority: Critical
            Fix For: 1.0.0
        Attachments: 12782.unit.test.writing.txt

Anyone else seeing this? If I do an ITBLL with the generator doing 5M rows per maptask, all is good -- verify passes. I've been running 5 servers and had one split per server. So the below works:

  HADOOP_CLASSPATH=/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase classpath` ./hadoop/bin/hadoop --config ~/conf_hadoop org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey serverKilling Generator 5 500 g1.tmp

or if I double the map tasks, it works:

  HADOOP_CLASSPATH=/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase classpath` ./hadoop/bin/hadoop --config ~/conf_hadoop org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey serverKilling Generator 10 500 g2.tmp

...but if I change the 5M to 50M or 25M, Verify fails. Looking into it.
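For context on what Verify is complaining about: ITBLL generates rows forming a linked list, each row's 'meta:prev' column referencing the previously written row, and the verify step flags rows whose referenced previous key was never persisted. A tiny Python model of that generate/verify idea (a toy, not the real MapReduce job):

```python
def generate(n, start=1):
    """Each row's 'meta:prev' points at the previously written row,
    mimicking ITBLL's linked-list table (toy model, not the M/R job)."""
    rows = {}
    prev = None
    for i in range(n):
        key = start + i
        rows[key] = prev        # value is the reference to the previous key
        prev = key
    return rows

def verify(rows):
    """Return keys whose referenced previous row is missing -- what the
    real Verify step reports as undefined references."""
    return sorted(k for k, prev in rows.items()
                  if prev is not None and prev not in rows)

table = generate(1000)
assert verify(table) == []      # nothing lost: verify passes
del table[42]                   # simulate a write dropped by a dying server
assert verify(table) == [43]    # row 43's 'prev' points at the missing row
```

A single dropped write therefore shows up one hop later, which matches the debugging in the comments above: the reported 'References' row is present, and it is the 'meta:previous' target that is missing.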
[jira] [Commented] (HBASE-12798) Map Reduce jobs should not create Tables in setConf()
[ https://issues.apache.org/jira/browse/HBASE-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272710#comment-14272710 ]

Ted Yu commented on HBASE-12798:
--------------------------------
Integrated to master branch.

There are several rejections in TableInputFormatBase.java for branch-1.

Map Reduce jobs should not create Tables in setConf()
                Key: HBASE-12798
                URL: https://issues.apache.org/jira/browse/HBASE-12798
[jira] [Commented] (HBASE-12825) CallRunner exception messages should include destination host:port
[ https://issues.apache.org/jira/browse/HBASE-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272648#comment-14272648 ]

stack commented on HBASE-12825:
-------------------------------
The failing test expects an explicit exception message; the patch changed the message to interject the server name, which breaks the comparison.

CallRunner exception messages should include destination host:port
                Key: HBASE-12825
                URL: https://issues.apache.org/jira/browse/HBASE-12825
            Project: HBase
         Issue Type: Improvement
         Components: regionserver
           Reporter: Nick Dimiduk
           Assignee: Nick Dimiduk
           Priority: Minor
            Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
        Attachments: HBASE-12825.00.patch

Noticed while debugging some IT failure. Would be nice to know who we were trying to talk to.
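The improvement itself, wrapping a low-level failure so the message carries the destination host:port, is a generic pattern. A hypothetical Python sketch of the idea (invented names; not the actual CallRunner change):

```python
class RemoteCallError(IOError):
    """Wrap a low-level failure with the destination so the log line says
    which server the call was headed to (hypothetical names throughout)."""
    def __init__(self, cause, host, port):
        super().__init__("call to %s:%d failed: %s" % (host, port, cause))

def run_call(call, host, port):
    """Run a call; on failure, re-raise with the destination attached."""
    try:
        return call()
    except IOError as cause:
        raise RemoteCallError(cause, host, port) from cause

def failing_call():
    raise IOError("connection reset")

try:
    run_call(failing_call, "rs1.example.com", 16020)
except RemoteCallError as err:
    msg = str(err)

assert "rs1.example.com:16020" in msg   # the destination is now in the log
assert "connection reset" in msg        # the original cause is preserved
```

As the comment above notes, any test that asserts on the exact original message string will break once the server name is interjected, so such tests need updating alongside the message change.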
[jira] [Commented] (HBASE-7541) Convert all tests that use HBaseTestingUtility.createMultiRegions to HBA.createTable
[ https://issues.apache.org/jira/browse/HBASE-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272726#comment-14272726 ]

stack commented on HBASE-7541:
------------------------------
Patch looks great, [~jonathan.lawlor]. Is that test failure related?

Convert all tests that use HBaseTestingUtility.createMultiRegions to HBA.createTable
                Key: HBASE-7541
                URL: https://issues.apache.org/jira/browse/HBASE-7541
            Project: HBase
         Issue Type: Improvement
           Reporter: Jean-Daniel Cryans
           Assignee: Jonathan Lawlor
        Attachments: HBASE7541_patch_v1.txt

As discussed in HBASE-7534, {{HBaseTestingUtility.createMultiRegions}} should disappear and not come back. There are about 25 different places in the code that rely on it that need to be changed the same way I changed TestReplication. Perfect for someone who wants to get started with HBase dev :)
[jira] [Commented] (HBASE-12798) Map Reduce jobs should not create Tables in setConf()
[ https://issues.apache.org/jira/browse/HBASE-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272737#comment-14272737 ]

Hudson commented on HBASE-12798:
--------------------------------
FAILURE: Integrated in HBase-TRUNK #6010 (See [https://builds.apache.org/job/HBase-TRUNK/6010/])
HBASE-12798 Map Reduce jobs should not create Tables in setConf() (Solomon Duskis) (tedyu: rev f6a017ce6302853ef8421dc6adf1a099059f4e30)
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormat.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java

Map Reduce jobs should not create Tables in setConf()
                Key: HBASE-12798
                URL: https://issues.apache.org/jira/browse/HBASE-12798
[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask
[ https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272799#comment-14272799 ]

stack commented on HBASE-12782:
-------------------------------
Let me write a WAL-searching tool for the missing keys. If they are in the WAL, the server lost them (I'll have the seqid from the WAL, which will help reason about how the server side dropped the edits). If they are not in the WAL, it is back to asyncprocess.

ITBLL fails for me if generator does anything but 5M per maptask
                Key: HBASE-12782
                URL: https://issues.apache.org/jira/browse/HBASE-12782
[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask
[ https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-12782:
--------------------------
    Attachment: 12782.unit.test.and.it.test.txt

Added the IT version and ran it against the cluster. Fails if the numbers are large -- 100M.

ITBLL fails for me if generator does anything but 5M per maptask
                Key: HBASE-12782
                URL: https://issues.apache.org/jira/browse/HBASE-12782
[jira] [Commented] (HBASE-12833) [shell] table.rb leaks connections
[ https://issues.apache.org/jira/browse/HBASE-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272766#comment-14272766 ]

Solomon Duskis commented on HBASE-12833:
----------------------------------------
hm... Interesting ripple effect. I'll see if I can figure this one out.

[shell] table.rb leaks connections
                Key: HBASE-12833
                URL: https://issues.apache.org/jira/browse/HBASE-12833
            Project: HBase
         Issue Type: Bug
         Components: shell
   Affects Versions: 1.0.0, 2.0.0, 1.1.0
           Reporter: Nick Dimiduk
            Fix For: 1.0.0, 2.0.0, 1.1.0

TestShell is erring out (timeout) consistently for me. The culprit is an OOM: cannot create native thread. It looks to me like test_table.rb and hbase/table.rb are made for leaking connections: table calls ConnectionFactory.createConnection() for every table but provides no close() method to clean it up, and test_table creates a new table with every test.
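The leak described here is the classic "constructor acquires, nobody releases" shape. A small hypothetical Python model of both the leak and a close()-based fix (this is illustration, not the actual table.rb code):

```python
class Connection:
    """Stand-in for an HBase Connection; counts how many are open."""
    open_count = 0

    def __init__(self):
        Connection.open_count += 1
        self.closed = False

    def close(self):
        if not self.closed:
            self.closed = True
            Connection.open_count -= 1

class Table:
    """Leaky shape: every Table creates its own Connection, like table.rb
    calling ConnectionFactory.createConnection() -- and, in the fix,
    exposes close() so callers can release it."""
    def __init__(self, name):
        self.name = name
        self.connection = Connection()

    def close(self):
        self.connection.close()

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

for i in range(100):        # one table per test, no close() anywhere
    Table("t%d" % i)
leaked = Connection.open_count

with Table("t") as t:       # with an explicit close, nothing accumulates
    pass
assert leaked == 100
assert Connection.open_count == 100   # the with-block released its connection
```

With hundreds of tests each opening a connection (and each connection holding threads), the OOM "cannot create native thread" in TestShell follows naturally.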
[jira] [Commented] (HBASE-8026) HBase Shell docs for scan command don't reference VERSIONS
[ https://issues.apache.org/jira/browse/HBASE-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272852#comment-14272852 ]

Amit Kabra commented on HBASE-8026:
-----------------------------------
Manual steps:
1) The following works: scan 't1', {RAW => true, VERSIONS => 10}
2) help 'scan' shows both RAW and VERSIONS info.

HBase Shell docs for scan command don't reference VERSIONS
                Key: HBASE-8026
                URL: https://issues.apache.org/jira/browse/HBASE-8026
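For context on what the VERSIONS scan option controls: an HBase cell can retain several timestamped values, newest first, and {VERSIONS => n} asks the scan to return up to n of them. A toy Python model of that semantics (not HBase code; names are invented):

```python
class VersionedCell:
    """Keep up to max_versions timestamped values for one cell, newest
    first -- the behavior the shell's VERSIONS option exposes on scan."""
    def __init__(self, max_versions=1):
        self.max_versions = max_versions
        self.versions = []              # (timestamp, value), newest first

    def put(self, ts, value):
        self.versions.append((ts, value))
        self.versions.sort(reverse=True)
        del self.versions[self.max_versions:]   # drop the oldest extras

    def scan(self, versions=1):
        """Like scan 't1', {VERSIONS => n}: up to n newest values."""
        return [v for _, v in self.versions[:versions]]

cell = VersionedCell(max_versions=3)
for ts, v in [(1, "a"), (2, "b"), (3, "c"), (4, "d")]:
    cell.put(ts, v)
assert cell.scan() == ["d"]                      # default: newest only
assert cell.scan(versions=3) == ["d", "c", "b"]  # VERSIONS => 3; "a" aged out
```

This is why the scan help text should mention VERSIONS alongside TIMERANGE and TIMESTAMP: without it a scan silently shows only the newest value per cell.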