[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299704#comment-14299704
 ] 

stack commented on HBASE-12782:
---

Hmm. No, the 500M test actually passed:

{code}
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Verify$Counts
REFERENCED=5
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=96
...

{code}

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782-0.98-addendum.txt, 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt, 12782v4.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299698#comment-14299698
 ] 

Hudson commented on HBASE-12782:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #787 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/787/])
HBASE-12782 [0.98] Addendum fixes variable name (tedyu: rev 
fd498f18548b5dd37f45c2f3cdc46b11b11ecbd6)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java







[jira] [Commented] (HBASE-12931) The existing KeyValues in memstore are not removed completely after inserting cell into memStore

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299693#comment-14299693
 ] 

Hadoop QA commented on HBASE-12931:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695728/HBASE-12931.patch
  against master branch at commit 825871431ec48036fd3e3cd9625c451b50cbe308.
  ATTACHMENT ID: 12695728

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/patchReleaseAuditWarnings.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12660//console

This message is automatically generated.

> The existing KeyValues in memstore are not removed completely after inserting 
> cell into memStore 
> -
>
> Key: HBASE-12931
> URL: https://issues.apache.org/jira/browse/HBASE-12931
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-12931.patch
>
>
> If I'm not wrong, the UPSERT method of memStore should remove all existing 
> KeyValues except the newest version.
> In memStore,
> {code:title=DefaultMemStore.java|borderStyle=solid}
> int versionsVisible = 0;
> ...
> if (cur.getTypeByte() == KeyValue.Type.Put.getCode() &&
> cur.getSequenceId() <= readpoint) {
>   if (versionsVisible > 1) {
> // if we get here we have seen at least one version visible to 
> the oldest scanner,
> // which means we can prove that no scanner will see this version
> // false means there was a change, so give us the size.
> long delta = heapSizeChange(cur, true);
> addedSize -= delta;
> this.size.addAndGet(-delta);
> it.remove();
> setOldestEditTimeToNow();
>   } else {
> versionsVisible++;
>   }
> {code}
> Should "versionsVisible > 1" be changed to "versionsVisible >= 1"?
> Thanks.

[jira] [Commented] (HBASE-12949) Scanner can be stuck in infinite loop if the HFile is corrupted

2015-01-30 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299691#comment-14299691
 ] 

Jerry He commented on HBASE-12949:
--

Thanks, [~stack]

I will look into your suggestion.

> Scanner can be stuck in infinite loop if the HFile is corrupted
> ---
>
> Key: HBASE-12949
> URL: https://issues.apache.org/jira/browse/HBASE-12949
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.98.10
>Reporter: Jerry He
>
> We've encountered a problem where compaction hangs and never completes.
> After looking into it further, we found that the compaction scanner was stuck 
> in an infinite loop. See the stack below.
> {noformat}
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:296)
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:672)
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:223)
> {noformat}
> We identified the hfile that seems to be corrupted.  Using HFile tool shows 
> the following:
> {noformat}
> [biadmin@hdtest009 bin]$ hbase org.apache.hadoop.hbase.io.hfile.HFile -v -k 
> -m -f 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> 15/01/23 11:53:17 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum using 
> org.apache.hadoop.util.PureJavaCrc32
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum can use 
> org.apache.hadoop.util.PureJavaCrc32C
> 15/01/23 11:53:18 INFO Configuration.deprecation: fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Scanning -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> WARNING, previous row is greater then current row
> filename -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> previous -> 
> \x00/20110203-094231205-79442793-1410161293068203000\x0Aattributes16794406\x00\x00\x01\x00\x00\x00\x00\x00\x00
> current  ->
> Exception in thread "main" java.nio.BufferUnderflowException
> at java.nio.Buffer.nextGetIndex(Buffer.java:489)
> at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:347)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:856)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:768)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.scanKeysValues(HFilePrettyPrinter.java:362)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:262)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:220)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.main(HFilePrettyPrinter.java:539)
> at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:802)
> {noformat}
> Turning on Java Assert shows the following:
> {noformat}
> Exception in thread "main" java.lang.AssertionError: Key 
> 20110203-094231205-79442793-1410161293068203000/attributes:16794406/1099511627776/Minimum/vlen=15/mvcc=0
>  followed by a smaller key //0/Minimum/vlen=0/mvcc=0 in cf attributes
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.checkScanOrder(StoreScanner.java:672)
> {noformat}
> It shows that the hfile seems to be corrupted -- the keys don't seem to be 
> right.
> But the Scanner is not able to give a meaningful error; instead it gets stuck in 
> an infinite loop here:
> {code}
> KeyValueHeap.generalizedSeek()
> while ((scanner = heap.poll()) != null) {
> }
> {code}
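A fail-fast alternative to the loop above could look something like the following. This is a hypothetical sketch, not HBase's actual code: the class name, method names, and the plain-`byte[]` row model are all assumptions made for illustration. The idea is simply that when a corrupt HFile yields a key that sorts before its predecessor, throwing immediately is preferable to reseeking forever.

```java
import java.io.IOException;

public class ScanOrderCheck {
    private byte[] previousRow;

    /** Accepts the row if it sorts >= the previous row; throws otherwise. */
    public void checkAndAdvance(byte[] currentRow) throws IOException {
        if (previousRow != null && compare(previousRow, currentRow) > 0) {
            // Fail fast instead of looping: the file is out of order.
            throw new IOException(
                "Key order violation: previous row sorts after current row");
        }
        previousRow = currentRow;
    }

    // Lexicographic unsigned-byte comparison, as HBase orders row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) throws IOException {
        ScanOrderCheck check = new ScanOrderCheck();
        check.checkAndAdvance("row1".getBytes());
        check.checkAndAdvance("row2".getBytes());  // in order: fine
        try {
            check.checkAndAdvance("row0".getBytes());  // out of order
            System.out.println("no error");
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

This is essentially what the `checkScanOrder` assertion in StoreScanner verifies; the point of the sketch is that such a check surfaces the corruption as an error rather than an infinite loop.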





[jira] [Commented] (HBASE-12948) Increment#addColumn on the same column multi times produce wrong result

2015-01-30 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299685#comment-14299685
 ] 

hongyu bi commented on HBASE-12948:
---

 thanks ted:)

> Increment#addColumn on the same column multi times produce wrong result 
> 
>
> Key: HBASE-12948
> URL: https://issues.apache.org/jira/browse/HBASE-12948
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Reporter: hongyu bi
>Priority: Critical
> Attachments: 12948-v2.patch, HBASE-12948-0.99.2-v1.patch, 
> HBASE-12948-v0.patch, HBASE-12948.patch
>
>
> Case:
> Initially get('row1'):
> rowkey=row1 value=1
> run:
> Increment increment = new Increment(Bytes.toBytes("row1"));
> for (int i = 0; i < N; i++) {
> increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c"), 1);
> }
> hobi.increment(increment);
> get('row1'):
> If N=1 the result is 2; if N>1 the result will always be 1.
> Cause:
> https://issues.apache.org/jira/browse/HBASE-7114 made Increment extend 
> Mutation, which changed familyMap from NavigableMap to List, so from the client 
> side we can buffer many edits on the same column.
> However, HRegion#increment uses idx to iterate the get's results; here 
> results.size() is 1, so the later edits on the same 
> column won't match the condition {idx < results.size() && 
> CellUtil.matchingQualifier(results.get(idx), kv)}. Meanwhile the edits share 
> the same mvccVersion, so this case happens.
> Fix:
> Following the behaviour of put/delete#add on the same column,
> fix from the server side: process "last edit wins on the same column" inside 
> HRegion#increment to maintain HBASE-7114's extension and keep the same 
> result as 0.94.
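The "last edit wins on the same column" behaviour described above can be sketched as follows. This is a hypothetical illustration, not the actual patch: cells are simplified to qualifier/amount string pairs, and the class and method names are invented for the example. The essence is collapsing the client-side edit list so only the final edit per qualifier is applied.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LastEditWins {
    /**
     * Keeps only the last amount seen for each qualifier, preserving the
     * first-seen order of qualifiers. Each edit is {qualifier, amount}.
     */
    public static Map<String, Long> collapse(List<String[]> edits) {
        Map<String, Long> result = new LinkedHashMap<>();
        for (String[] edit : edits) {
            // A later edit on the same qualifier overwrites the earlier one.
            result.put(edit[0], Long.parseLong(edit[1]));
        }
        return result;
    }

    public static void main(String[] args) {
        // Three increments queued on column "c": only the last one survives,
        // matching put/delete semantics on repeated adds to the same column.
        List<String[]> edits = List.of(
            new String[]{"c", "1"},
            new String[]{"c", "1"},
            new String[]{"c", "1"});
        System.out.println(collapse(edits));
    }
}
```

Collapsing before the increment is applied keeps the server-side idx-based matching against the get's results consistent, since at most one edit per column remains.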





[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299684#comment-14299684
 ] 

Hadoop QA commented on HBASE-5954:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695726/5954-v6-trunk.txt
  against master branch at commit 825871431ec48036fd3e3cd9625c451b50cbe308.
  ATTACHMENT ID: 12695726

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:red}-1 javac{color}.  The applied patch generated 112 javac compiler 
warnings (more than the master's current 111 warnings).

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1938 checkstyle errors (more than the master's current 1936 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/patchReleaseAuditWarnings.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12659//console

This message is automatically generated.

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, wal
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 5954-WIP-trunk.txt, 5954-WIP-v2-trunk.txt, 
> 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 
> 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 
> 5954-trunk-hdfs-trunk-v6.txt, 5954-trunk-hdfs-trunk.txt, 5954-v3-trunk.txt, 
> 5954-v3-trunk.txt, 5954-v4-trunk.txt, 5954-v5-trunk.txt, 5954-v6-trunk.txt, 
> 5954-v6-trunk.txt, hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.





[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299679#comment-14299679
 ] 

Hudson commented on HBASE-12782:


FAILURE: Integrated in HBase-TRUNK #6075 (See 
[https://builds.apache.org/job/HBase-TRUNK/6075/])
HBASE-12782 ITBLL fails for me if generator does anything but 5M per maptask 
(stack: rev 825871431ec48036fd3e3cd9625c451b50cbe308)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
* hbase-server/src/test/data/0016310
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRecoveredEdits.java
* hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java







[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299674#comment-14299674
 ] 

Hudson commented on HBASE-12782:


FAILURE: Integrated in HBase-1.0 #700 (See 
[https://builds.apache.org/job/HBase-1.0/700/])
HBASE-12782 ITBLL fails for me if generator does anything but 5M per maptask 
(stack: rev fbbbf7e6d809c5c83c042ec5741a3d9b2fd712c6)
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRecoveredEdits.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* hbase-server/src/test/data/0016310
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java







[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299669#comment-14299669
 ] 

Hudson commented on HBASE-12782:


FAILURE: Integrated in HBase-1.1 #128 (See 
[https://builds.apache.org/job/HBase-1.1/128/])
HBASE-12782 ITBLL fails for me if generator does anything but 5M per maptask 
(stack: rev e06be2060cf8449a732c7a8b024d6fcf0c5e3ef6)
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* hbase-server/src/test/data/0016310
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRecoveredEdits.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
* hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java







[jira] [Updated] (HBASE-12931) The existing KeyValues in memstore are not removed completely after inserting cell into memStore

2015-01-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12931:
---
Status: Patch Available  (was: Open)

> The existing KeyValues in memstore are not removed completely after inserting 
> cell into memStore 
> -
>
> Key: HBASE-12931
> URL: https://issues.apache.org/jira/browse/HBASE-12931
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-12931.patch
>
>
> If I'm not wrong, the UPSERT method of memStore should remove all existing 
> KeyValues except the newest version.
> In memStore,
> {code:title=DefaultMemStore.java|borderStyle=solid}
> int versionsVisible = 0;
> ...
> if (cur.getTypeByte() == KeyValue.Type.Put.getCode() &&
> cur.getSequenceId() <= readpoint) {
>   if (versionsVisible > 1) {
> // if we get here we have seen at least one version visible to 
> the oldest scanner,
> // which means we can prove that no scanner will see this version
> // false means there was a change, so give us the size.
> long delta = heapSizeChange(cur, true);
> addedSize -= delta;
> this.size.addAndGet(-delta);
> it.remove();
> setOldestEditTimeToNow();
>   } else {
> versionsVisible++;
>   }
> {code}
> Should "versionsVisible > 1" be changed to "versionsVisible >= 1"?
> Thanks.
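To make the question concrete, here is a hypothetical, simplified model of the loop above (not DefaultMemStore itself; the class and method names are invented). It counts, for a column with `total` existing Put versions visible to the oldest scanner, how many the loop removes under each threshold: with `versionsVisible > 1` two versions are kept before removal starts, while with `versionsVisible >= 1` only the newest survives, which appears to be what upsert intends.

```java
public class UpsertModel {
    /** Returns how many of 'total' existing versions get removed. */
    static int removedVersions(int total, boolean useGreaterOrEqual) {
        int versionsVisible = 0;
        int removed = 0;
        for (int i = 0; i < total; i++) {
            boolean shouldRemove = useGreaterOrEqual
                ? versionsVisible >= 1   // proposed condition
                : versionsVisible > 1;   // current condition
            if (shouldRemove) {
                removed++;           // corresponds to it.remove() above
            } else {
                versionsVisible++;   // this version stays visible
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        // With 3 existing versions: "> 1" keeps two of them, ">= 1" keeps one.
        System.out.println(removedVersions(3, false)); // prints 1
        System.out.println(removedVersions(3, true));  // prints 2
    }
}
```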





[jira] [Commented] (HBASE-12948) Increment#addColumn on the same column multi times produce wrong result

2015-01-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299662#comment-14299662
 ] 

Ted Yu commented on HBASE-12948:


{code}
python dev-support/findHangingTests.py 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658/console
Fetching the console output from the URL
Printing hanging tests
Printing Failing tests
Failing test : org.apache.hadoop.hbase.client.TestHCM
Failing test : org.apache.hadoop.hbase.client.TestMetaWithReplicas
{code}
Test failures were not related to patch.

[~apurtell], [~enis]:
Can you take a look ?

> Increment#addColumn on the same column multi times produce wrong result 
> 
>
> Key: HBASE-12948
> URL: https://issues.apache.org/jira/browse/HBASE-12948
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Reporter: hongyu bi
>Priority: Critical
> Attachments: 12948-v2.patch, HBASE-12948-0.99.2-v1.patch, 
> HBASE-12948-v0.patch, HBASE-12948.patch
>
>
> Case:
> Initially get('row1'):
> rowkey=row1 value=1
> run:
> Increment increment = new Increment(Bytes.toBytes("row1"));
> for (int i = 0; i < N; i++) {
>   increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c"), 1);
> }
> hobi.increment(increment);
> get('row1'):
> If N=1 the result is 2; if N>1 the result is always 1.
> Cause:
> https://issues.apache.org/jira/browse/HBASE-7114 made Increment extend 
> Mutation, which changed familyMap from a NavigableMap to a List, so from the 
> client side we can buffer many edits on the same column.
> However, HRegion#increment uses idx to iterate the get's results; here 
> results.size() is 1, so the later edits on the same 
> column won't match the condition {idx < results.size() && 
> CellUtil.matchingQualifier(results.get(idx), kv)}. Meanwhile the edits share 
> the same mvccVersion, so this case happens.
> Fix:
> Following the behaviour of put/delete#add on the same column,
> fix on the server side: apply "last edit wins on the same column" inside 
> HRegion#increment to maintain HBASE-7114's extension and keep the same 
> result as 0.94.
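The proposed fix, "last edit wins on the same column", can be sketched independently of HBase. Below is a minimal, hypothetical model: a plain String column key and a long amount stand in for the real Cell/familyMap types, and the class and method names are illustrative, not actual HRegion code.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of collapsing duplicate column edits so the last one wins, as the
// server-side fix proposes. Since HBASE-7114 an Increment can carry several
// edits for the same column; collapsing them before applying restores the
// 0.94 behaviour described in the issue.
public class LastEditWins {
    static Map<String, Long> collapse(List<Map.Entry<String, Long>> edits) {
        Map<String, Long> last = new LinkedHashMap<>();
        for (Map.Entry<String, Long> e : edits) {
            // Later entries overwrite earlier ones for the same column.
            last.put(e.getKey(), e.getValue());
        }
        return last;
    }

    public static void main(String[] args) {
        // Client buffered addColumn("cf:c", 1) three times; last edit wins.
        List<Map.Entry<String, Long>> edits = List.of(
            Map.entry("cf:c", 1L), Map.entry("cf:c", 1L), Map.entry("cf:c", 1L));
        System.out.println(collapse(edits));
    }
}
```

With this collapsing in place, N identical addColumn calls behave like one, matching the N=1 result in the report.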



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12931) The existing KeyValues in memstore are not removed completely after inserting a cell into memStore

2015-01-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12931:
--
Attachment: HBASE-12931.patch

> The existing KeyValues in memstore are not removed completely after inserting 
> a cell into memStore 
> -
>
> Key: HBASE-12931
> URL: https://issues.apache.org/jira/browse/HBASE-12931
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-12931.patch
>
>
> If I'm not wrong, the UPSERT method of memStore should remove all existing 
> KeyValues except the newest version.
> In memStore,
> {code:title=DefaultMemStore.java|borderStyle=solid}
> int versionsVisible = 0;
> ...
> if (cur.getTypeByte() == KeyValue.Type.Put.getCode() &&
> cur.getSequenceId() <= readpoint) {
>   if (versionsVisible > 1) {
> // if we get here we have seen at least one version visible to 
> the oldest scanner,
> // which means we can prove that no scanner will see this version
> // false means there was a change, so give us the size.
> long delta = heapSizeChange(cur, true);
> addedSize -= delta;
> this.size.addAndGet(-delta);
> it.remove();
> setOldestEditTimeToNow();
>   } else {
> versionsVisible++;
>   }
> {code}
> Should "versionsVisible > 1" be changed to "versionsVisible >= 1"?
> thanks.
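The trimming loop quoted above can be modeled in isolation to show what the ">" versus ">=" comparison changes. The sketch below is a simplified, hypothetical model: bare sequence ids stand in for KeyValues, the Put-type check is omitted, and the names are illustrative rather than the real DefaultMemStore API.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Model of the upsert version-trimming loop. keepVisible parameterizes the
// comparison: keepVisible=2 mirrors "versionsVisible > 1" (removal starts
// after two visible versions are seen), keepVisible=1 mirrors ">= 1".
public class UpsertTrimSketch {
    static int trim(List<Long> seqIdsNewestFirst, long readpoint, int keepVisible) {
        int versionsVisible = 0;
        int removed = 0;
        Iterator<Long> it = seqIdsNewestFirst.iterator();
        while (it.hasNext()) {
            long seqId = it.next();
            if (seqId <= readpoint) {            // visible to the oldest scanner
                if (versionsVisible >= keepVisible) {
                    it.remove();                 // provably unseen by any scanner
                    removed++;
                } else {
                    versionsVisible++;
                }
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        List<Long> cells = List.of(5L, 4L, 3L, 2L, 1L);
        int removedKeep2 = trim(new ArrayList<>(cells), 10L, 2);
        int removedKeep1 = trim(new ArrayList<>(cells), 10L, 1);
        System.out.println(removedKeep2 + " " + removedKeep1);
    }
}
```

With keepVisible=2 (the current "> 1" test) two visible versions survive and three of the five are removed; with keepVisible=1 (the proposed ">= 1") only the newest survives.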



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12931) The existing KeyValues in memstore are not removed completely after inserting a cell into memStore

2015-01-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12931:
--
Attachment: HBASE-12931

> The existing KeyValues in memstore are not removed completely after inserting 
> a cell into memStore 
> -
>
> Key: HBASE-12931
> URL: https://issues.apache.org/jira/browse/HBASE-12931
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Priority: Minor
>
> If I'm not wrong, the UPSERT method of memStore should remove all existing 
> KeyValues except the newest version.
> In memStore,
> {code:title=DefaultMemStore.java|borderStyle=solid}
> int versionsVisible = 0;
> ...
> if (cur.getTypeByte() == KeyValue.Type.Put.getCode() &&
> cur.getSequenceId() <= readpoint) {
>   if (versionsVisible > 1) {
> // if we get here we have seen at least one version visible to 
> the oldest scanner,
> // which means we can prove that no scanner will see this version
> // false means there was a change, so give us the size.
> long delta = heapSizeChange(cur, true);
> addedSize -= delta;
> this.size.addAndGet(-delta);
> it.remove();
> setOldestEditTimeToNow();
>   } else {
> versionsVisible++;
>   }
> {code}
> Should "versionsVisible > 1" be changed to "versionsVisible >= 1"?
> thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12931) The existing KeyValues in memstore are not removed completely after inserting a cell into memStore

2015-01-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12931:
--
Attachment: (was: HBASE-12931)

> The existing KeyValues in memstore are not removed completely after inserting 
> a cell into memStore 
> -
>
> Key: HBASE-12931
> URL: https://issues.apache.org/jira/browse/HBASE-12931
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Priority: Minor
>
> If I'm not wrong, the UPSERT method of memStore should remove all existing 
> KeyValues except the newest version.
> In memStore,
> {code:title=DefaultMemStore.java|borderStyle=solid}
> int versionsVisible = 0;
> ...
> if (cur.getTypeByte() == KeyValue.Type.Put.getCode() &&
> cur.getSequenceId() <= readpoint) {
>   if (versionsVisible > 1) {
> // if we get here we have seen at least one version visible to 
> the oldest scanner,
> // which means we can prove that no scanner will see this version
> // false means there was a change, so give us the size.
> long delta = heapSizeChange(cur, true);
> addedSize -= delta;
> this.size.addAndGet(-delta);
> it.remove();
> setOldestEditTimeToNow();
>   } else {
> versionsVisible++;
>   }
> {code}
> Should "versionsVisible > 1" be changed to "versionsVisible >= 1"?
> thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12948) Increment#addColumn on the same column multiple times produces wrong result

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299656#comment-14299656
 ] 

Hadoop QA commented on HBASE-12948:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695716/12948-v2.patch
  against master branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695716

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1940 checkstyle errors (more than the master's current 1939 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12658//console

This message is automatically generated.

> Increment#addColumn on the same column multiple times produces wrong result 
> 
>
> Key: HBASE-12948
> URL: https://issues.apache.org/jira/browse/HBASE-12948
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Reporter: hongyu bi
>Priority: Critical
> Attachments: 12948-v2.patch, HBASE-12948-0.99.2-v1.patch, 
> HBASE-12948-v0.patch, HBASE-12948.patch
>
>
> Case:
> Initially get('row1'):
> rowkey=row1 value=1
> run:
> Increment increment = new Increment(Bytes.toBytes("row1"));
> for (int i = 0; i < N; i++) {
>   increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c"), 1);
> }
> hobi.increment(increment);
> get('row1'):
> If N=1 the result is 2; if N>1 the result is always 1.
> Cause:
> https://issues.apache.org/jira/browse/HBASE-7114 made Increment extend 
> Mutation, which changed familyMap from a NavigableMap to a List, so from the 
> client side we can buffer many edits on the same column.
> However, HRegion#increment uses idx to iterate the get's results; here 
> results.size() is 1, so the later edits on the same 
> column won't match the condition {idx < results.size() && 
> CellUtil.matchingQualifier(results.get(idx), kv)}. Meanwhile the edits share 
> the same mvccVersion, so this case happens.
> Fix:
> Following the behaviour of put/delete#add on the same column,
> fix on the server side: apply "last edit wins on the same column" inside 
> HRegion#increment to maintain HBASE-7114's extension and keep the same 
> result as 0.94.

[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299645#comment-14299645
 ] 

Lars Hofhansl commented on HBASE-12782:
---

You the man, [~stack]!

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782-0.98-addendum.txt, 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt, 12782v4.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2015-01-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299644#comment-14299644
 ] 

Lars Hofhansl commented on HBASE-5954:
--

Passed locally a few times with the patch applied. Hung once *without* the patch 
applied. Same memory profile with and without the patch. Same amount of time 
spent with and without the patch. Unrelated, I think. I'll get some more runs in.

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, wal
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 5954-WIP-trunk.txt, 5954-WIP-v2-trunk.txt, 
> 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 
> 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 
> 5954-trunk-hdfs-trunk-v6.txt, 5954-trunk-hdfs-trunk.txt, 5954-v3-trunk.txt, 
> 5954-v3-trunk.txt, 5954-v4-trunk.txt, 5954-v5-trunk.txt, 5954-v6-trunk.txt, 
> 5954-v6-trunk.txt, hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5954) Allow proper fsync support for HBase

2015-01-30 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5954:
-
Attachment: 5954-v6-trunk.txt

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, wal
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 5954-WIP-trunk.txt, 5954-WIP-v2-trunk.txt, 
> 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 
> 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 
> 5954-trunk-hdfs-trunk-v6.txt, 5954-trunk-hdfs-trunk.txt, 5954-v3-trunk.txt, 
> 5954-v3-trunk.txt, 5954-v4-trunk.txt, 5954-v5-trunk.txt, 5954-v6-trunk.txt, 
> 5954-v6-trunk.txt, hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to 0.98, branch-1, branch-1.0, and master branches. Thanks for the 
reviews, lads.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782-0.98-addendum.txt, 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt, 12782v4.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299632#comment-14299632
 ] 

stack commented on HBASE-12782:
---

This fix helps. The 125M and 250M tests pass where before they always failed. 
Looks like the 500M test failed, so there is more to fix, it seems. Will open a 
new issue for that.

[~jeffreyz] I should turn on DLR. I think I have tooling to figure out any 
data loss. Should use it while it's fresh.

We need to have IT tests running with regularity (stating the obvious).  Might 
have caught this bug at commit-time.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782-0.98-addendum.txt, 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt, 12782v4.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2015-01-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299620#comment-14299620
 ] 

Lars Hofhansl commented on HBASE-5954:
--

This time TestAcidGuarantees did not finish.

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, wal
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 5954-WIP-trunk.txt, 5954-WIP-v2-trunk.txt, 
> 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 
> 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 
> 5954-trunk-hdfs-trunk-v6.txt, 5954-trunk-hdfs-trunk.txt, 5954-v3-trunk.txt, 
> 5954-v3-trunk.txt, 5954-v4-trunk.txt, 5954-v5-trunk.txt, 5954-v6-trunk.txt, 
> hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299614#comment-14299614
 ] 

stack commented on HBASE-12782:
---

[~yuzhih...@gmail.com] Usually the fellow who fucked it up is the one who has 
to fix it... you could just leave it for them to fix their mess. But thanks.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782-0.98-addendum.txt, 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt, 12782v4.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one splot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299610#comment-14299610
 ] 

Hadoop QA commented on HBASE-5954:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695691/5954-v6-trunk.txt
  against master branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695691

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:red}-1 javac{color}.  The applied patch generated 112 javac compiler 
warnings (more than the master's current 111 warnings).

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1941 checkstyle errors (more than the master's current 1939 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12655//console

This message is automatically generated.

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, wal
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 5954-WIP-trunk.txt, 5954-WIP-v2-trunk.txt, 
> 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 
> 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 
> 5954-trunk-hdfs-trunk-v6.txt, 5954-trunk-hdfs-trunk.txt, 5954-v3-trunk.txt, 
> 5954-v3-trunk.txt, 5954-v4-trunk.txt, 5954-v5-trunk.txt, 5954-v6-trunk.txt, 
> hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299607#comment-14299607
 ] 

Hadoop QA commented on HBASE-12782:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695694/12782v4.txt
  against master branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695694

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 13 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  protected void setup(Reducer.Context context)
+  protected void cleanup(Reducer.Context context)

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestRecoveredEdits

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12654//console

This message is automatically generated.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782-0.98-addendum.txt, 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt, 12782v4.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.

[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299589#comment-14299589
 ] 

Hudson commented on HBASE-12782:


FAILURE: Integrated in HBase-0.98 #828 (See 
[https://builds.apache.org/job/HBase-0.98/828/])
HBASE-12782 [0.98] Addendum fixes variable name (tedyu: rev 
fd498f18548b5dd37f45c2f3cdc46b11b11ecbd6)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12948) Increment#addColumn on the same column multi times produce wrong result

2015-01-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12948:
---
Attachment: 12948-v2.patch

Patch v2 is rebased on master branch.

Close HTable at the end of the test.

> Increment#addColumn on the same column multi times produce wrong result 
> 
>
> Key: HBASE-12948
> URL: https://issues.apache.org/jira/browse/HBASE-12948
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Reporter: hongyu bi
>Priority: Critical
> Attachments: 12948-v2.patch, HBASE-12948-0.99.2-v1.patch, 
> HBASE-12948-v0.patch, HBASE-12948.patch
>
>
> Case:
> Initially get('row1'):
> rowkey=row1 value=1
> run:
> Increment increment = new Increment(Bytes.toBytes("row1"));
> for (int i = 0; i < N; i++) {
> increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c"), 1);
> }
> hobi.increment(increment);
> get('row1'):
> if N=1 the result is 2, but if N>1 the result will always be 1.
> Cause:
> https://issues.apache.org/jira/browse/HBASE-7114 made Increment extend 
> Mutation, which changed familyMap from NavigableMap to List, so from the 
> client side we can buffer many edits on the same column.
> However, HRegion#increment uses idx to iterate the Get's results; here 
> results.size() is 1, so the latter edits on the same column won't match the 
> condition {idx < results.size() && 
> CellUtil.matchingQualifier(results.get(idx), kv)}. Meanwhile the edits share 
> the same mvccVersion, so this case happens.
> Fix:
> Following the Put/Delete#add behaviour on the same column, fix from the 
> server side: apply "last edit wins on the same column" inside 
> HRegion#increment to maintain HBASE-7114's extension and keep the same 
> result as 0.94.
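The "last edit wins on the same column" rule described in the fix can be sketched independently of the HBase classes. This is a minimal, hypothetical model (the `Edit` record and `lastEditWins` method are illustrative names, not HBase API): when several buffered edits target the same qualifier, only the last amount per qualifier survives.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LastEditWins {
    // Hypothetical stand-in for a buffered client-side edit on one column.
    record Edit(String qualifier, long amount) {}

    // Collapse buffered edits so the last edit per qualifier wins,
    // mirroring the server-side rule described above.
    static Map<String, Long> lastEditWins(List<Edit> edits) {
        Map<String, Long> result = new LinkedHashMap<>();
        for (Edit e : edits) {
            result.put(e.qualifier(), e.amount()); // later edits overwrite earlier ones
        }
        return result;
    }

    public static void main(String[] args) {
        // N edits buffered on the same column "c": only the last is applied,
        // matching the 0.94 behaviour the fix restores.
        List<Edit> edits = List.of(new Edit("c", 1), new Edit("c", 1), new Edit("c", 5));
        System.out.println(lastEditWins(edits)); // prints {c=5}
    }
}
```

This is the same behaviour Put/Delete#add already exhibit when called repeatedly on one column.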





[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299565#comment-14299565
 ] 

Hadoop QA commented on HBASE-12782:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12695713/12782-0.98-addendum.txt
  against 0.98 branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695713

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12657//console

This message is automatically generated.

(v6.3.4#6332)


[jira] [Commented] (HBASE-10942) support parallel request cancellation for multi-get

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299562#comment-14299562
 ] 

Hadoop QA commented on HBASE-10942:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695655/10942-1.1.txt
  against master branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695655

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12653//console

This message is automatically generated.

> support parallel request cancellation for multi-get
> ---
>
> Key: HBASE-10942
> URL: https://issues.apache.org/jira/browse/HBASE-10942
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Nicolas Liochon
> Fix For: hbase-10070
>
> Attachments: 10942-1.1.txt, 10942-for-98.zip, 10942.patch, 
> HBASE-10942.01.patch, HBASE-10942.02.patch, HBASE-10942.patch
>
>






[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12782:
---
Attachment: 12782-0.98-addendum.txt

Addendum fixes 0.98 build

Hopefully I get this right.



[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299532#comment-14299532
 ] 

Hudson commented on HBASE-12782:


FAILURE: Integrated in HBase-0.98 #827 (See 
[https://builds.apache.org/job/HBase-0.98/827/])
HBASE-12782 ITBLL fails for me if generator does anything but 5M per maptask 
(stack: rev 1bb55d86ecbb4523e5ac5f08dc7ad7b2fbec68ac)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java




[jira] [Commented] (HBASE-12948) Increment#addColumn on the same column multi times produce wrong result

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299522#comment-14299522
 ] 

Hadoop QA commented on HBASE-12948:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12695698/HBASE-12948-0.99.2-v1.patch
  against master branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695698

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12656//console

This message is automatically generated.



[jira] [Commented] (HBASE-12948) Increment#addColumn on the same column multi times produce wrong result

2015-01-30 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299521#comment-14299521
 ] 

hongyu bi commented on HBASE-12948:
---

Unit test added.
I need to compare the cq (column qualifier) between the current and the next 
edit to decide whether to push idx or not, so I used an index loop.
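The index loop described above can be sketched as a small, hypothetical model (the `appliedEdits` helper and its inputs are illustrative, not the actual HRegion#increment code): compare the current edit's qualifier with the next edit's to decide whether this edit is the last of its run and should be applied.

```java
import java.util.List;

public class QualifierLoop {
    // Returns, for each buffered edit, whether it would be applied under
    // "last edit wins": only the final edit of a run of equal qualifiers counts.
    static boolean[] appliedEdits(List<String> qualifiers) {
        boolean[] applied = new boolean[qualifiers.size()];
        for (int i = 0; i < qualifiers.size(); i++) {
            // Peek at the next edit's qualifier; if it differs (or there is
            // no next edit), the current edit is the last of its run.
            boolean lastOfRun = (i + 1 == qualifiers.size())
                || !qualifiers.get(i).equals(qualifiers.get(i + 1));
            applied[i] = lastOfRun;
        }
        return applied;
    }

    public static void main(String[] args) {
        // Three edits on "c" followed by one on "d": only the third and
        // fourth edits are applied.
        boolean[] a = appliedEdits(List.of("c", "c", "c", "d"));
        System.out.println(java.util.Arrays.toString(a)); // [false, false, true, true]
    }
}
```

An index loop (rather than a for-each) is what makes the one-edit lookahead possible.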






[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299520#comment-14299520
 ] 

Hudson commented on HBASE-12782:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #786 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/786/])
HBASE-12782 ITBLL fails for me if generator does anything but 5M per maptask 
(stack: rev 1bb55d86ecbb4523e5ac5f08dc7ad7b2fbec68ac)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java




[jira] [Updated] (HBASE-12948) Increment#addColumn on the same column multi times produce wrong result

2015-01-30 Thread hongyu bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongyu bi updated HBASE-12948:
--
Attachment: HBASE-12948-0.99.2-v1.patch



[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299512#comment-14299512
 ] 

stack commented on HBASE-12782:
---

TestRecoveredEdits failed because the data file is not in place. Might have to 
commit that separately.



[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299507#comment-14299507
 ] 

stack commented on HBASE-12782:
---

Pushed the fix-only to 0.98 as 1bb55d8.



[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Attachment: 12782v4.txt

Had default filter return false in WALPlayer rather than true to include.



[jira] [Assigned] (HBASE-12950) Extend the truncate command to handle region ranges and not just the whole table

2015-01-30 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez reassigned HBASE-12950:
-

Assignee: Esteban Gutierrez






[jira] [Updated] (HBASE-5954) Allow proper fsync support for HBase

2015-01-30 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5954:
-
Attachment: 5954-v6-trunk.txt

v6 has a very basic unit test verifying that the metrics are updated correctly.

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, wal
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 5954-WIP-trunk.txt, 5954-WIP-v2-trunk.txt, 
> 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 
> 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 
> 5954-trunk-hdfs-trunk-v6.txt, 5954-trunk-hdfs-trunk.txt, 5954-v3-trunk.txt, 
> 5954-v3-trunk.txt, 5954-v4-trunk.txt, 5954-v5-trunk.txt, 5954-v6-trunk.txt, 
> hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.





[jira] [Created] (HBASE-12950) Extend the truncate command to handle region ranges and not just the whole table

2015-01-30 Thread Esteban Gutierrez (JIRA)
Esteban Gutierrez created HBASE-12950:
-

 Summary: Extend the truncate command to handle region ranges and 
not just the whole table
 Key: HBASE-12950
 URL: https://issues.apache.org/jira/browse/HBASE-12950
 Project: HBase
  Issue Type: New Feature
  Components: Region Assignment, regionserver, shell
Affects Versions: 2.0.0
Reporter: Esteban Gutierrez


We have seen many times during the last few years that when key prefixes are 
time based and the access pattern only consists of writes to recent KVs, we can 
end up with tens of thousands of regions, and some of those regions will no 
longer be used. Even if users use TTLs and data is eventually deleted, we still 
have the old regions around, and only performing an online merge can help to 
reduce the excess of regions. Extending the truncate command to also handle 
region ranges can help users that experience this issue to trim the old regions 
if required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299489#comment-14299489
 ] 

Hadoop QA commented on HBASE-12782:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695639/12782v3.txt
  against master branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695639

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 13 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  protected void setup(Reducer.Context context)
+  protected void cleanup(Reducer.Context context)

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestRecoveredEdits
  org.apache.hadoop.hbase.mapreduce.TestWALPlayer

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12652//console

This message is automatically generated.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one split per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLink

[jira] [Commented] (HBASE-12949) Scanner can be stuck in infinite loop if the HFile is corrupted

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299464#comment-14299464
 ] 

stack commented on HBASE-12949:
---

A KV w/ minimum is 'wrong', yeah.

A corrupt hfile is going to happen.  We should deal with it. A few basic checks 
on values read in, before we go to allocate memory, etc., throwing an exception 
if they are obviously bad, would be the way to go.

Would be good to avoid one bad file bringing down the whole cluster, so we should 
throw a particular exception, CorruptionException; then, high up in the store, it 
should loudly close the file when it gets one of these.

Something like that?

Good one [~jerryhe]
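The "few basic checks before allocating" idea can be sketched in plain Java. 
CorruptionException and the MAX_KV_LEN bound here are illustrative names, not 
actual HBase classes or limits:

```java
// Hypothetical sketch of the sanity checks proposed above: validate lengths
// read off disk before any buffer allocation happens. CorruptionException and
// MAX_KV_LEN are illustrative, not real HBase APIs.
public class KvSanity {
    // Deliberately generous cap; real code would derive this from configuration.
    static final int MAX_KV_LEN = 256 * 1024 * 1024;

    static class CorruptionException extends RuntimeException {
        CorruptionException(String msg) { super(msg); }
    }

    /** Reject implausible key/value lengths before allocating buffers for them. */
    static void checkLengths(int keyLen, int valueLen) {
        if (keyLen <= 0 || keyLen > MAX_KV_LEN) {
            throw new CorruptionException("implausible key length: " + keyLen);
        }
        if (valueLen < 0 || valueLen > MAX_KV_LEN) {
            throw new CorruptionException("implausible value length: " + valueLen);
        }
    }
}
```

A store-level handler would catch CorruptionException and loudly close the 
offending file rather than letting the bad lengths propagate.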

> Scanner can be stuck in infinite loop if the HFile is corrupted
> ---
>
> Key: HBASE-12949
> URL: https://issues.apache.org/jira/browse/HBASE-12949
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.98.10
>Reporter: Jerry He
>
> We've encountered a problem where compaction hangs and never completes.
> After looking into it further, we found that the compaction scanner was stuck 
> in an infinite loop. See stack below.
> {noformat}
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:296)
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:672)
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:223)
> {noformat}
> We identified the hfile that seems to be corrupted.  Using HFile tool shows 
> the following:
> {noformat}
> [biadmin@hdtest009 bin]$ hbase org.apache.hadoop.hbase.io.hfile.HFile -v -k 
> -m -f 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> 15/01/23 11:53:17 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum using 
> org.apache.hadoop.util.PureJavaCrc32
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum can use 
> org.apache.hadoop.util.PureJavaCrc32C
> 15/01/23 11:53:18 INFO Configuration.deprecation: fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Scanning -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> WARNING, previous row is greater then current row
> filename -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> previous -> 
> \x00/20110203-094231205-79442793-1410161293068203000\x0Aattributes16794406\x00\x00\x01\x00\x00\x00\x00\x00\x00
> current  ->
> Exception in thread "main" java.nio.BufferUnderflowException
> at java.nio.Buffer.nextGetIndex(Buffer.java:489)
> at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:347)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:856)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:768)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.scanKeysValues(HFilePrettyPrinter.java:362)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:262)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:220)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.main(HFilePrettyPrinter.java:539)
> at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:802)
> {noformat}
> Turning on Java Assert shows the following:
> {noformat}
> Exception in thread "main" java.lang.AssertionError: Key 
> 20110203-094231205-79442793-1410161293068203000/attributes:16794406/1099511627776/Minimum/vlen=15/mvcc=0
>  followed by a smaller key //0/Minimum/vlen=0/mvcc=0 in cf attributes
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.checkScanOrder(StoreScanner.java:672)
> {noformat}
> It shows that the hfile seems to be corrupted -- the keys don't seem to be 
> right.
> But the Scanner is not able to give a meaningful error; it is stuck in an 
> infinite loop here:
> {code}
> KeyValueHeap.generalizedSeek()
> while ((scanner = heap.poll()) != null) {
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6778) Deprecate Chore; its a thread per task when we should have one thread to do all tasks

2015-01-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299459#comment-14299459
 ] 

Hudson commented on HBASE-6778:
---

FAILURE: Integrated in HBase-1.1 #127 (See 
[https://builds.apache.org/job/HBase-1.1/127/])
HBASE-6778 Deprecate Chore; its a thread per task when we should have one 
thread to do all tasks (Jonathan Lawlor) (stack: rev 
af84b746ceab1e4e6ed8a37ce8f1f4546ad3df5c)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSyncUp.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/HealthCheckChore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/ClusterStatusChore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ServerNonceManager.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/Chore.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationTrackerZKImpl.java
* hbase-common/src/test/java/org/apache/hadoop/hbase/TestChoreService.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileCleaner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationStateZKImpl.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/ScheduledChore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerNonceManager.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/ConnectionCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitLogWorker.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/ChoreService.java
* hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServlet.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BalancerChore.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestActiveMasterManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestTableLockManager.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/Server.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestLogsCleaner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StorefileRefresherChore.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/MockServer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestTokenAuthentication.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHeapMemoryManager.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/backup/TestHFileArchiving.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/AuthUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileLinkCleaner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/backup/example/TestZooKeeperTableArchiveClient.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEndToEndSplitTransaction.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> Deprecate Chore; its a thread per task when we should have one thread to do 
> all tasks
> -
>
> Key: HBASE-6778
> URL: https://issues.apache.org/jira/browse/HBASE-6778
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
> Fix For: 2.0.0, 1.1.0
>
> Attachments: AFTER_thread_dump.txt, BEFORE_thread_dump.txt

[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299445#comment-14299445
 ] 

stack commented on HBASE-5954:
--

I took a look at that patch. LGTM.  This stuff is hard to review. It is tricky. 
 Proof is in the pudding.  For sure the failure is unrelated? It's in the WAL?  
I'm +1 on commit.  Suggest trying hadoopqa a few more times before commit.

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, wal
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 5954-WIP-trunk.txt, 5954-WIP-v2-trunk.txt, 
> 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 
> 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 
> 5954-trunk-hdfs-trunk-v6.txt, 5954-trunk-hdfs-trunk.txt, 5954-v3-trunk.txt, 
> 5954-v3-trunk.txt, 5954-v4-trunk.txt, 5954-v5-trunk.txt, hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6778) Deprecate Chore; its a thread per task when we should have one thread to do all tasks

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299442#comment-14299442
 ] 

Hadoop QA commented on HBASE-6778:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12695629/HBASE-6778-branch-1-v1.patch
  against branch-1 branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695629

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 89 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
12 warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12651//console

This message is automatically generated.

> Deprecate Chore; its a thread per task when we should have one thread to do 
> all tasks
> -
>
> Key: HBASE-6778
> URL: https://issues.apache.org/jira/browse/HBASE-6778
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
> Fix For: 2.0.0, 1.1.0
>
> Attachments: AFTER_thread_dump.txt, BEFORE_thread_dump.txt, 
> HBASE-6778-branch-1-v1.patch, HBASE_6778_WIP_v1.patch, 
> HBASE_6778_WIP_v2.patch, HBASE_6778_v1.patch, HBASE_6778_v2.patch, 
> HBASE_6778_v3.patch, HBASE_6778_v3.patch, HBASE_6778_v4.patch, 
> HBASE_6778_v5.patch, HBASE_6778_v6.patch, HBASE_6778_v6.patch, 
> thread_dump_HMaster.local.out
>
>
> Should use something like ScheduledThreadPoolExecutor instead (Elliott said 
> this first I think; J-D said something similar just now).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299438#comment-14299438
 ] 

stack commented on HBASE-5954:
--

What checkstyle, javadoc, and extra compiler warnings?

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, wal
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 5954-WIP-trunk.txt, 5954-WIP-v2-trunk.txt, 
> 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 
> 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 
> 5954-trunk-hdfs-trunk-v6.txt, 5954-trunk-hdfs-trunk.txt, 5954-v3-trunk.txt, 
> 5954-v3-trunk.txt, 5954-v4-trunk.txt, 5954-v5-trunk.txt, hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-01-30 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299422#comment-14299422
 ] 

Stephen Yuan Jiang commented on HBASE-12070:


The javadoc warnings are pre-existing.  

In terms of the Findbugs warnings, comparing to the last successful run 
(https://builds.apache.org/job/PreCommit-HBASE-Build/12633/), this run generates 
4 more warnings in client and 3 fewer warnings in server.  I checked the 
warnings and none of them is close to the place of my changes. 

Therefore, the patch is good to commit.



> Add an option to hbck to fix ZK inconsistencies
> ---
>
> Key: HBASE-12070
> URL: https://issues.apache.org/jira/browse/HBASE-12070
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 1.1.0
>Reporter: Sudarshan Kadambi
>Assignee: Stephen Yuan Jiang
> Fix For: 1.1.0
>
> Attachments: HBASE-12070.v1-branch-1.patch
>
>
> If the HMaster bounces in the middle of table creation, we could be left in a 
> state where a znode exists for the table, but that hasn't percolated into 
> META or to HDFS. We've run into this a couple times on our clusters. Once the 
> table is in this state, the only fix is to rm the znode using the 
> zookeeper-client. Doing this manually looks a bit error-prone. Could an 
> option be added to hbck to catch and fix such inconsistencies?
> A more general issue I'd like comment on is whether it makes sense for 
> HMaster to be maintaining its own write-ahead log? The idea would be that on 
> a bounce, the master would discover it was in the middle of creating a table 
> and either rollback or complete that operation? An issue that we observed 
> recently was that a table that was in DISABLING state before a bounce was not 
> in that state after. A write-ahead log to persist table state changes seems 
> useful. Now, all of this state could be in ZK instead of the WAL - it doesn't 
> matter where it gets persisted as long as it does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299419#comment-14299419
 ] 

Lars Hofhansl commented on HBASE-12782:
---

+1 (both the fix-only 0.98 and the full patch)

[~apurtell], this might warrant to sink the RC. What do you think?

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one split per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-10942) support parallel request cancellation for multi-get

2015-01-30 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299409#comment-14299409
 ] 

Devaraj Das edited comment on HBASE-10942 at 1/31/15 12:08 AM:
---

Patch for master. This patch is basically the one that [~nkeywal] submitted 
some time back 
(https://issues.apache.org/jira/secure/attachment/12664655/10942.patch), with a 
small change related to using the Cancellable interface, and a unit test 
added.


was (Author: devaraj):
Patch for master. This patch is basically the one that [~nkeywal] had submitted 
 sometime back, with a small change to do with using the Cancellable interface, 
 and a unit test addition.

> support parallel request cancellation for multi-get
> ---
>
> Key: HBASE-10942
> URL: https://issues.apache.org/jira/browse/HBASE-10942
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Nicolas Liochon
> Fix For: hbase-10070
>
> Attachments: 10942-1.1.txt, 10942-for-98.zip, 10942.patch, 
> HBASE-10942.01.patch, HBASE-10942.02.patch, HBASE-10942.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10942) support parallel request cancellation for multi-get

2015-01-30 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-10942:

Attachment: 10942-1.1.txt

Patch for master. This patch is basically the one that [~nkeywal] submitted 
some time back, with a small change related to using the Cancellable interface, 
and a unit test added.
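For readers following along, the cancel-the-stragglers pattern behind this 
change can be sketched with stock java.util.concurrent primitives. This is an 
illustration of the idea only, not the actual patch or its Cancellable API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// Illustrative sketch of cancelling outstanding parallel requests once the
// first one answers; names here are hypothetical, not the HBASE-10942 code.
public class FirstWins {
    static String raceAndCancel(ExecutorService pool, List<Callable<String>> calls)
            throws Exception {
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        List<Future<String>> futures = new ArrayList<>();
        for (Callable<String> c : calls) {
            futures.add(cs.submit(c));       // fire all requests in parallel
        }
        String winner = cs.take().get();     // block for the first completed result
        for (Future<String> f : futures) {
            f.cancel(true);                  // interrupt the stragglers
        }
        return winner;
    }
}
```

Cancelling an already-completed future is a no-op, so the winner is unaffected; 
the slow requests get interrupted instead of running to completion.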

> support parallel request cancellation for multi-get
> ---
>
> Key: HBASE-10942
> URL: https://issues.apache.org/jira/browse/HBASE-10942
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Nicolas Liochon
> Fix For: hbase-10070
>
> Attachments: 10942-1.1.txt, 10942-for-98.zip, 10942.patch, 
> HBASE-10942.01.patch, HBASE-10942.02.patch, HBASE-10942.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12949) Scanner can be stuck in infinite loop if the HFile is corrupted

2015-01-30 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299402#comment-14299402
 ] 

Jerry He commented on HBASE-12949:
--

We can see this key 
"20110203-094231205-79442793-1410161293068203000/attributes:16794406/1099511627776/Minimum/vlen=15/mvcc=0"
 is probably bad already. It is not possible to have type 'Minimum' in real kv, 
correct?
The next kv "//0/Minimum/vlen=0/mvcc=0" is worse. 

The Java asserts in the code would probably catch all these.  But asserts are 
not going to be turned on in production.
We are probably not able to take care of, or error out on, all corruption cases.
On the other hand, I wonder if any simple sanity checks can be done with 
minimal performance impact.

Any idea and comment is welcome.
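One cheap guard with minimal performance impact, sketched in plain Java: bound 
the poll loop so a scanner that never advances fails loudly instead of spinning 
forever. The class and names here are illustrative, not the real KeyValueHeap 
code:

```java
import java.util.Queue;

// Illustrative sketch, not the real KeyValueHeap: a poll loop that tolerates
// re-enqueues (as generalizedSeek does while a scanner reseeks toward the
// target key) but caps total iterations, so a corrupted scanner that never
// advances turns an infinite loop into a loud failure.
public class BoundedPollLoop {
    static int drain(Queue<Runnable> heap, int maxIterations) {
        int iters = 0;
        Runnable task;
        while ((task = heap.poll()) != null) {
            if (++iters > maxIterations) {
                throw new IllegalStateException(
                    "no progress after " + maxIterations + " polls");
            }
            task.run(); // may re-add itself, like a scanner that must reseek again
        }
        return iters;
    }
}
```

A healthy heap drains in a bounded number of polls; a scanner stuck on a 
corrupt key keeps re-enqueueing itself and trips the cap.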

> Scanner can be stuck in infinite loop if the HFile is corrupted
> ---
>
> Key: HBASE-12949
> URL: https://issues.apache.org/jira/browse/HBASE-12949
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.98.10
>Reporter: Jerry He
>
> We've encountered a problem where compaction hangs and never completes.
> After looking into it further, we found that the compaction scanner was stuck 
> in an infinite loop. See stack below.
> {noformat}
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:296)
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:672)
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:223)
> {noformat}
> We identified the hfile that seems to be corrupted.  Using HFile tool shows 
> the following:
> {noformat}
> [biadmin@hdtest009 bin]$ hbase org.apache.hadoop.hbase.io.hfile.HFile -v -k 
> -m -f 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> 15/01/23 11:53:17 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum using 
> org.apache.hadoop.util.PureJavaCrc32
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum can use 
> org.apache.hadoop.util.PureJavaCrc32C
> 15/01/23 11:53:18 INFO Configuration.deprecation: fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Scanning -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> WARNING, previous row is greater then current row
> filename -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> previous -> 
> \x00/20110203-094231205-79442793-1410161293068203000\x0Aattributes16794406\x00\x00\x01\x00\x00\x00\x00\x00\x00
> current  ->
> Exception in thread "main" java.nio.BufferUnderflowException
> at java.nio.Buffer.nextGetIndex(Buffer.java:489)
> at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:347)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:856)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:768)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.scanKeysValues(HFilePrettyPrinter.java:362)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:262)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:220)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.main(HFilePrettyPrinter.java:539)
> at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:802)
> {noformat}
> Turning on Java assertions shows the following:
> {noformat}
> Exception in thread "main" java.lang.AssertionError: Key 
> 20110203-094231205-79442793-1410161293068203000/attributes:16794406/1099511627776/Minimum/vlen=15/mvcc=0
>  followed by a smaller key //0/Minimum/vlen=0/mvcc=0 in cf attributes
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.checkScanOrder(StoreScanner.java:672)
> {noformat}
> It shows that the hfile seems to be corrupted -- the keys don't seem to be 
> right.
> The Scanner, however, cannot give a meaningful error; instead it gets stuck 
> in an infinite loop here:
> {code}
> // in KeyValueHeap.generalizedSeek():
> while ((scanner = heap.poll()) != null) {
> }
> {code}
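The StoreScanner assert quoted above is essentially a scan-order invariant: every key a scan returns must compare greater than or equal to the key returned just before it. A minimal sketch of that check, simplified to plain string keys (`firstOutOfOrder` is an illustrative name, not HBase's actual API):

```java
import java.util.List;

public class ScanOrderCheck {
    // Simplified analogue of StoreScanner.checkScanOrder: each key returned
    // by a scan must sort >= its predecessor; equal keys are allowed.
    static int firstOutOfOrder(List<String> keys) {
        for (int i = 1; i < keys.size(); i++) {
            if (keys.get(i).compareTo(keys.get(i - 1)) < 0) {
                return i;  // index of the first key that sorts before its predecessor
            }
        }
        return -1;  // scan order holds
    }
}
```

In the report above, the empty key following the long `20110203-...` key is exactly the kind of regression this check catches.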



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-6778) Deprecate Chore; its a thread per task when we should have one thread to do all tasks

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6778:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-1.

Thanks for the sweet contrib [~jonathan.lawlor]

> Deprecate Chore; its a thread per task when we should have one thread to do 
> all tasks
> -
>
> Key: HBASE-6778
> URL: https://issues.apache.org/jira/browse/HBASE-6778
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
> Fix For: 2.0.0, 1.1.0
>
> Attachments: AFTER_thread_dump.txt, BEFORE_thread_dump.txt, 
> HBASE-6778-branch-1-v1.patch, HBASE_6778_WIP_v1.patch, 
> HBASE_6778_WIP_v2.patch, HBASE_6778_v1.patch, HBASE_6778_v2.patch, 
> HBASE_6778_v3.patch, HBASE_6778_v3.patch, HBASE_6778_v4.patch, 
> HBASE_6778_v5.patch, HBASE_6778_v6.patch, HBASE_6778_v6.patch, 
> thread_dump_HMaster.local.out
>
>
> Should use something like ScheduledThreadPoolExecutor instead (Elliott said 
> this first I think; J-D said something similar just now).
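The idea is to replace a dedicated thread per Chore with one shared scheduler that runs all periodic tasks. A rough sketch of the shared-pool pattern (`runTicks` and the latch bookkeeping are illustrative, not the eventual ScheduledChore API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SharedChorePool {
    // Schedule `tasks` periodic "chores" on a single shared scheduler thread,
    // wait until each has ticked `ticksEach` times, then shut the pool down.
    // Returns the remaining latch count (0 once every expected tick fired).
    static long runTicks(int tasks, int ticksEach) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        CountDownLatch latch = new CountDownLatch(tasks * ticksEach);
        for (int t = 0; t < tasks; t++) {
            // Each chore is just a periodic runnable on the shared pool,
            // instead of owning its own Chore thread.
            pool.scheduleAtFixedRate(latch::countDown, 0, 5, TimeUnit.MILLISECONDS);
        }
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdownNow();
        return latch.getCount();
    }
}
```

One small pool bounds the thread count regardless of how many chores are registered, which is the point of the BEFORE/AFTER thread dumps attached to the issue.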





[jira] [Commented] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299362#comment-14299362
 ] 

Hadoop QA commented on HBASE-12070:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12695614/HBASE-12070.v1-branch-1.patch
  against branch-1 branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695614

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
12 warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12650//console

This message is automatically generated.

> Add an option to hbck to fix ZK inconsistencies
> ---
>
> Key: HBASE-12070
> URL: https://issues.apache.org/jira/browse/HBASE-12070
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 1.1.0
>Reporter: Sudarshan Kadambi
>Assignee: Stephen Yuan Jiang
> Fix For: 1.1.0
>
> Attachments: HBASE-12070.v1-branch-1.patch
>
>
> If the HMaster bounces in the middle of table creation, we could be left in a 
> state where a znode exists for the table, but that hasn't percolated into 
> META or to HDFS. We've run into this a couple times on our clusters. Once the 
> table is in this state, the only fix is to rm the znode using the 
> zookeeper-client. Doing this manually looks a bit error prone. Could an 
> option be added to hbck to catch and fix such inconsistencies?
> A more general issue I'd like comment on is whether it makes sense for 
> HMaster to be maintaining its own write-ahead log? The idea would be that on 
> a bounce, the master would discover it was in the middle of creating a table 
> and either rollback or complete that operation? An issue that we observed 
> recently was that a table th

[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

Let me try a hadoopqa run.  Thanks for the +1, [~enis]

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 0.98.9, 1.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299337#comment-14299337
 ] 

Enis Soztutar commented on HBASE-12782:
---

+1. Thanks Stack for taking on this. 

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12782:
--
Fix Version/s: (was: 1.0.1)
   1.0.0

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Attachment: 12782v3.txt
12782v3.0.98.txt

Addresses [~lhofhansl]'s suggestion.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt, 12782v3.0.98.txt, 
> 12782v3.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299325#comment-14299325
 ] 

Jeffrey Zhong commented on HBASE-12782:
---

[~saint@gmail.com] Great findings! I previously reviewed the patch. The 
intention was good -- it should do "flush |= restoreEdit(store, cell);" as 
[~lhofhansl] mentioned above -- but apparently the fix did more than that. 
Thanks.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299316#comment-14299316
 ] 

stack commented on HBASE-12782:
---

I missed that [~lhofhansl] remark. Let me address.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299310#comment-14299310
 ] 

Enis Soztutar commented on HBASE-12782:
---

I think that's what we wanted to do in the HBASE-11099 patch, but messed up. 

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299308#comment-14299308
 ] 

Enis Soztutar commented on HBASE-12782:
---

Should we do {{|=}} as Lars says above? Once flush is set, we do not want to 
unset it.
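The difference between {{=}} and {{|=}} during edit replay can be seen in a small sketch. Here `restoreEdit` is a hypothetical stand-in that reports whether one edit demands a flush (not HBase's actual method, whose real signature takes a Store and a Cell):

```java
import java.util.List;

public class FlushFlagDemo {
    // Hypothetical per-edit check: this edit asks for a flush when its size
    // crosses the threshold.
    static boolean restoreEdit(int editSize, int threshold) {
        return editSize > threshold;
    }

    // Buggy replay: plain assignment, so a later small edit silently clears
    // a flush request made by an earlier edit.
    static boolean replayAssign(List<Integer> edits, int threshold) {
        boolean flush = false;
        for (int e : edits) {
            flush = restoreEdit(e, threshold);
        }
        return flush;
    }

    // Intended replay: |= accumulates, so flush stays set once any edit set it.
    static boolean replayAccumulate(List<Integer> edits, int threshold) {
        boolean flush = false;
        for (int e : edits) {
            flush |= restoreEdit(e, threshold);
        }
        return flush;
    }
}
```

With edits {100, 1} and threshold 50, the assignment variant ends up false even though a flush was requested mid-replay, while the {{|=}} variant correctly stays true.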

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Updated] (HBASE-6778) Deprecate Chore; its a thread per task when we should have one thread to do all tasks

2015-01-30 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-6778:
---
Attachment: HBASE-6778-branch-1-v1.patch

Attaching patch for branch-1. Compiling and passing tests locally.

> Deprecate Chore; its a thread per task when we should have one thread to do 
> all tasks
> -
>
> Key: HBASE-6778
> URL: https://issues.apache.org/jira/browse/HBASE-6778
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
> Fix For: 2.0.0, 1.1.0
>
> Attachments: AFTER_thread_dump.txt, BEFORE_thread_dump.txt, 
> HBASE-6778-branch-1-v1.patch, HBASE_6778_WIP_v1.patch, 
> HBASE_6778_WIP_v2.patch, HBASE_6778_v1.patch, HBASE_6778_v2.patch, 
> HBASE_6778_v3.patch, HBASE_6778_v3.patch, HBASE_6778_v4.patch, 
> HBASE_6778_v5.patch, HBASE_6778_v6.patch, HBASE_6778_v6.patch, 
> thread_dump_HMaster.local.out
>
>
> Should use something like ScheduledThreadPoolExecutor instead (Elliott said 
> this first I think; J-D said something similar just now).





[jira] [Updated] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-01-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12914:

Fix Version/s: (was: 1.0.1)

> Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
> section
> ---
>
> Key: HBASE-12914
> URL: https://issues.apache.org/jira/browse/HBASE-12914
> Project: HBase
>  Issue Type: Bug
>  Components: API, documentation
>Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
>Reporter: Sean Busbey
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 0.98.11
>
> Attachments: HBASE-12914-branch-1.patch, HBASE-12914.patch
>
>
> There are several features in 0.98 that require enabling HFilev3 support. 
> Some of those features include new extendable components that are marked 
> IA.Public.
> Current practice has been to treat these features as experimental. This has 
> included pushing non-compatible changes to branch-1 as the API got worked out 
> through use in 0.98.
> * Update all of the IA.Public classes involved to make sure they are 
> IS.Unstable in 0.98.
> * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks 
> aware of these changes.





[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2015-01-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299270#comment-14299270
 ] 

Lars Hofhansl commented on HBASE-5954:
--

TestWALReplay passes locally every time I run it (and takes the same amount of 
time with or without the patch). So that looks to be unrelated.

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, wal
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 5954-WIP-trunk.txt, 5954-WIP-v2-trunk.txt, 
> 5954-trunk-hdfs-trunk-v2.txt, 5954-trunk-hdfs-trunk-v3.txt, 
> 5954-trunk-hdfs-trunk-v4.txt, 5954-trunk-hdfs-trunk-v5.txt, 
> 5954-trunk-hdfs-trunk-v6.txt, 5954-trunk-hdfs-trunk.txt, 5954-v3-trunk.txt, 
> 5954-v3-trunk.txt, 5954-v4-trunk.txt, 5954-v5-trunk.txt, hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.





[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Attachment: 12782v2.0.98.txt

0.98 fix-only patch.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.0.98.txt, 12782v2.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299261#comment-14299261
 ] 

stack commented on HBASE-12782:
---

bq. Any value in doing the fix the tooling in separate patches?

I'll make a fix-only patch for 0.98.

> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Comment Edited] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299256#comment-14299256
 ] 

Lars Hofhansl edited comment on HBASE-12782 at 1/30/15 9:45 PM:


Cool... Was just about to check 0.94.
Any value in doing the fix the tooling in separate patches?



was (Author: lhofhansl):
Cool... Was just about to check 0.94.
Any value is doing the fix the tooling in separate patches?


> ITBLL fails for me if generator does anything but 5M per maptask
> 
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 1.0.0, 0.98.9
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 12782.fix.txt, 
> 12782.search.plus.archive.recovered.edits.txt, 12782.search.plus.txt, 
> 12782.search.txt, 12782.unit.test.and.it.test.txt, 
> 12782.unit.test.writing.txt, 12782v2.txt
>
>
> Anyone else seeing this?  If I do an ITBLL with generator doing 5M rows per 
> maptask, all is good -- verify passes. I've been running 5 servers and had 
> one slot per server.  So below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 5 500 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase 
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop 
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
> serverKilling Generator 10 500 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
> Looking into it.





[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299256#comment-14299256
 ] 

Lars Hofhansl commented on HBASE-12782:
---

Cool... Was just about to check 0.94.
Any value in doing the fix and the tooling in separate patches?




[jira] [Commented] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-01-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299252#comment-14299252
 ] 

Andrew Purtell commented on HBASE-12914:


Tagging whole interfaces and subsets both seem fine, as long as we are just 
modifying 0.98 at this point. If there's a strong desire to keep in 1.0 the 
experimental label for all of HFile v3 or subsets of features based on it, 
let's have that discussion on a new issue, so we can hash it out there. 

> Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
> section
> ---
>
> Key: HBASE-12914
> URL: https://issues.apache.org/jira/browse/HBASE-12914
> Project: HBase
>  Issue Type: Bug
>  Components: API, documentation
>Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
>Reporter: Sean Busbey
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12914-branch-1.patch, HBASE-12914.patch
>
>
> There are several features in 0.98 that require enabling HFilev3 support. 
> Some of those features include new extendable components that are marked 
> IA.Public.
> Current practice has been to treat these features as experimental. This has 
> included pushing non-compatible changes to branch-1 as the API got worked out 
> through use in 0.98.
> * Update all of the IA.Public classes involved to make sure they are 
> IS.Unstable in 0.98.
> * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks 
> aware of these changes.





[jira] [Updated] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-01-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12070:
---
Status: Patch Available  (was: In Progress)

> Add an option to hbck to fix ZK inconsistencies
> ---
>
> Key: HBASE-12070
> URL: https://issues.apache.org/jira/browse/HBASE-12070
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 1.1.0
>Reporter: Sudarshan Kadambi
>Assignee: Stephen Yuan Jiang
> Fix For: 1.1.0
>
> Attachments: HBASE-12070.v1-branch-1.patch
>
>
> If the HMaster bounces in the middle of table creation, we could be left in a 
> state where a znode exists for the table, but that hasn't percolated into 
> META or to HDFS. We've run into this a couple times on our clusters. Once the 
> table is in this state, the only fix is to rm the znode using the 
> zookeeper-client. Doing this manually looks a bit error prone. Could an 
> option be added to hbck to catch and fix such inconsistencies?
> A more general issue I'd like comment on is whether it makes sense for 
> HMaster to be maintaining its own write-ahead log? The idea would be that on 
> a bounce, the master would discover it was in the middle of creating a table 
> and either rollback or complete that operation? An issue that we observed 
> recently was that a table that was in DISABLING state before a bounce was not 
> in that state after. A write-ahead log to persist table state changes seems 
> useful. Now, all of this state could be in ZK instead of the WAL - it doesn't 
> matter where it gets persisted as long as it does.





[jira] [Updated] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-01-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12070:
---
Attachment: HBASE-12070.v1-branch-1.patch



[jira] [Updated] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-01-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12070:
---
Attachment: (was: HBASE-12070.v1-branch-1.patch)



[jira] [Updated] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-01-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12070:
---
Status: In Progress  (was: Patch Available)



[jira] [Commented] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299219#comment-14299219
 ] 

stack commented on HBASE-12782:
---

Good news is that this change is recent.  It is not in 0.94.  It came in here:

{code}
commit 1c856e0774afd8e09ef68436cf57fc0aa61e974e
Author: Enis Soztutar 
Date:   Tue Dec 2 20:16:45 2014 -0800

    HBASE-11099 Two situations where we could open a region with smaller 
sequence number (Stephen Jiang)
{code}

and here

{code}
3b4688f HBASE-11099 Two situations where we could open a region with smaller 
sequence number (Stephen Jiang)
kalashnikov-20:hbase.git.commit stack$ git show 3b4688f
commit 3b4688f9ee0fca0f3a65e245f3caa7e47699cad1
Author: tedyu 
Date:   Thu Nov 20 14:44:08 2014 -0800

HBASE-11099 Two situations where we could open a region with smaller 
sequence number (Stephen Jiang)
{code}

I think it was an innocent attempt at adding brackets around what was a 
one-liner, except that it changed the evaluation.

It is in current 0.98 and in 1.0+
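For illustration only (this is not the actual HBASE-11099 diff; the method and variable names are made up), here is the classic way that adding braces around a former one-liner can change evaluation: a statement that used to run unconditionally gets pulled under the condition.

```java
// Hypothetical sketch of a brace refactor silently changing semantics.
public class BraceRefactor {
    // Original one-liner style: only the first statement is conditional.
    static long before(boolean flushed, long seqId) {
        if (flushed)
            seqId += 1;        // conditional
        seqId += 10;           // always runs
        return seqId;
    }

    // "Innocent" bracketed version that pulled the second statement
    // inside the condition, so it now only runs when flushed is true.
    static long after(boolean flushed, long seqId) {
        if (flushed) {
            seqId += 1;
            seqId += 10;
        }
        return seqId;
    }

    public static void main(String[] args) {
        System.out.println(before(false, 0)); // 10
        System.out.println(after(false, 0));  // 0
    }
}
```

When `flushed` is false the two versions diverge, which is the kind of subtle shift a cleanup-looking hunk can smuggle into a patch.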



[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Affects Version/s: 0.98.9



[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Fix Version/s: 1.1.0



[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Fix Version/s: 0.98.11
   2.0.0



[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Assignee: stack



[jira] [Updated] (HBASE-12782) ITBLL fails for me if generator does anything but 5M per maptask

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12782:
--
Attachment: 12782v2.txt

Looks like this fix helps a lot. I ran my rig and it passed (nine times out of 
ten it does not).  I then doubled up the counts so we did 250M instead of 125M and 
again it passed.  Will run some bigger tests over the weekend.

Here is the patch I'd like to apply. It has the fix, an obnoxious unit test to 
verify the fix, and then the tooling I used to find the issue.  That patch is 
fat because it includes a big data file of recovered.edits to replay in the 
unit test.

The patch changes ITBLL to add better logging with more data around missing rows. 
It also amends the verify step in ITBLL to emit the missing binary along with the 
type of the missing data. This output is then usable by a new tool, a search, 
which takes the missing rows from verify and then goes off to search the WALs and 
oldWALs. This latter tool was good for figuring out where the edits had gone 
missing (ante- or post-WAL).

The search tool emits each time it finds a key.  This was useful for narrowing in 
on the WALs that had the missing rows.

I'd then take the name of the WAL that had the edits and go look at its 
provenance.  In this case, the WALs were opened just before a crash and no 
flush had happened.  The WALs would then be split to produce recovered.edits.

The patch includes a means of having recovered.edits files moved to the archive 
when done rather than deleted (this is a change in HRegion).  This was useful 
for checking whether the WAL split had actually moved the missing edits from the 
WAL to recovered.edits. It had in this case, so the replay of edits became the 
suspect (of note, the recovered.edits files can be viewed with the 
WALPrettyPrinter -- which also has some improvements courtesy of this patch).

WALPlayer is used by the search tool in ITBLL.  Added a filter method so I 
could use WALPlayer almost directly when searching.

Made removal of files from the archive (or wherever) DEBUG level rather than TRACE.

Made a minor improvement to recovered.edits replay: check at the WALEdit level 
whether the edit is for THIS region, rather than doing the check per Cell. It 
will help some with the likes of the recovered.edits files I was seeing in my 
cluster testing, where a single WALEdit had hundreds of Cells in it.

The actual fix in HRegion was a simple one-liner (see above).



[jira] [Commented] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-01-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299187#comment-14299187
 ] 

Hadoop QA commented on HBASE-12070:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12695345/HBASE-12070.v1-branch-1.patch
  against branch-1 branch at commit b08802a3e8e522f84519415b83455870b49bf8da.
  ATTACHMENT ID: 12695345

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12649//console

This message is automatically generated.



[jira] [Updated] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-01-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12070:
---
Status: Patch Available  (was: Open)



[jira] [Updated] (HBASE-12949) Scanner can be stuck in infinite loop if the HFile is corrupted

2015-01-30 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-12949:
-
Description: 
We've encountered problem where compaction hangs and never completes.
After looking into it further, we found that the compaction scanner was stuck 
in a infinite loop. See stack below.
{noformat}
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:296)
org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:672)
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:223)
{noformat}

We identified the hfile that seems to be corrupted.  Using the HFile tool shows 
the following:
{noformat}
[biadmin@hdtest009 bin]$ hbase org.apache.hadoop.hbase.io.hfile.HFile -v -k -m 
-f 
/user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
15/01/23 11:53:17 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
15/01/23 11:53:18 INFO util.ChecksumType: Checksum using 
org.apache.hadoop.util.PureJavaCrc32
15/01/23 11:53:18 INFO util.ChecksumType: Checksum can use 
org.apache.hadoop.util.PureJavaCrc32C
15/01/23 11:53:18 INFO Configuration.deprecation: fs.default.name is 
deprecated. Instead, use fs.defaultFS
Scanning -> 
/user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
WARNING, previous row is greater then current row
filename -> 
/user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
previous -> 
\x00/20110203-094231205-79442793-1410161293068203000\x0Aattributes16794406\x00\x00\x01\x00\x00\x00\x00\x00\x00
current  ->
Exception in thread "main" java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:489)
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:347)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:856)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:768)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.scanKeysValues(HFilePrettyPrinter.java:362)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:262)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:220)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.main(HFilePrettyPrinter.java:539)
at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:802)
{noformat}

Turning on Java Assert shows the following:
{noformat}
Exception in thread "main" java.lang.AssertionError: Key 
20110203-094231205-79442793-1410161293068203000/attributes:16794406/1099511627776/Minimum/vlen=15/mvcc=0
 followed by a smaller key //0/Minimum/vlen=0/mvcc=0 in cf attributes
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.checkScanOrder(StoreScanner.java:672)
{noformat}

It shows that the hfile seems to be corrupted -- the keys don't seem to be 
right. But the Scanner is not able to give a meaningful error; instead it gets 
stuck in an infinite loop here:
{code}
KeyValueHeap.generalizedSeek()
while ((scanner = heap.poll()) != null) {
}
{code}
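A fail-fast scan-order check of the kind the assertion above performs can be sketched as follows (a hypothetical simplification using plain string keys, not StoreScanner.checkScanOrder itself): detecting the first key that sorts before its predecessor turns a corrupt, out-of-order HFile into a clear error instead of an endless reseek loop.

```java
// Hedged sketch: fail fast on out-of-order keys rather than looping.
public class ScanOrderCheck {
    // Returns the index of the first key that sorts before its
    // predecessor, or -1 if the sequence is properly ordered.
    static int firstViolation(String[] keys) {
        for (int i = 1; i < keys.length; i++) {
            if (keys[i - 1].compareTo(keys[i]) > 0) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Mimics the corruption above: a populated key followed by an empty one.
        String[] corrupt = {"20110203-094231205/attributes:16794406", ""};
        String[] ok = {"a", "b", "c"};
        System.out.println(firstViolation(corrupt)); // 1
        System.out.println(firstViolation(ok));      // -1
    }
}
```

Surfacing the violation index gives the operator the offending key pair, which is exactly the information the `WARNING, previous row is greater then current row` output above is trying to convey.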

  was:
We've encountered problem where compaction hangs and never completes.
After looking into it further, we found that the compaction scanner was stuck 
in a infinite loop. See stack below.
{noformat}
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:296)
org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:672)
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:223)
{noformat}

We identified the hfile that seems to be corrupted.  Using HFile tool shows the 
following:
{noformat}
[biadmin@hdtest009 bin]$ hbase org.apache.hadoop.hbase.io.hfile.HFile -v -k -m 
-f 
/user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
15/01/23 11:53:17 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
15/01/23 11:53:18 INFO util.ChecksumType: Checksum using 
org.apache.hadoop.util.PureJavaCrc32
15/01/23 11:53:18 INFO util.ChecksumType: Checksum can use 
org.apache.hadoop.util.PureJavaCrc32C
{noformat}

[jira] [Updated] (HBASE-12949) Scanner can be stuck in infinite loop if the HFile is corrupted

2015-01-30 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-12949:
-
Affects Version/s: 0.94.3

> Scanner can be stuck in infinite loop if the HFile is corrupted
> ---
>
> Key: HBASE-12949
> URL: https://issues.apache.org/jira/browse/HBASE-12949
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.98.10
>Reporter: Jerry He
>
> We've encountered a problem where compaction hangs and never completes.
> After looking into it further, we found that the compaction scanner was stuck 
> in an infinite loop. See the stack below.
> {noformat}
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:296)
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:672)
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:223)
> {noformat}
> We identified the hfile that seems to be corrupted.  Using HFile tool shows 
> the following:
> {noformat}
> [biadmin@hdtest009 bin]$ hbase org.apache.hadoop.hbase.io.hfile.HFile -v -k 
> -m -f 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> 15/01/23 11:53:17 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum using 
> org.apache.hadoop.util.PureJavaCrc32
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum can use 
> org.apache.hadoop.util.PureJavaCrc32C
> 15/01/23 11:53:18 INFO Configuration.deprecation: fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Scanning -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> WARNING, previous row is greater then current row
> filename -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> previous -> 
> \x00/20110203-094231205-79442793-1410161293068203000\x0Aattributes16794406\x00\x00\x01\x00\x00\x00\x00\x00\x00
> current  ->
> Exception in thread "main" java.nio.BufferUnderflowException
> at java.nio.Buffer.nextGetIndex(Buffer.java:489)
> at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:347)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:856)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:768)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.scanKeysValues(HFilePrettyPrinter.java:362)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:262)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:220)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.main(HFilePrettyPrinter.java:539)
> at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:802)
> {noformat}
> Turning on Java assertions shows the following:
> {noformat}
> Exception in thread "main" java.lang.AssertionError: Key 
> 20110203-094231205-79442793-1410161293068203000/attributes:16794406/1099511627776/Minimum/vlen=15/mvcc=0
>  followed by a smaller key //0/Minimum/vlen=0/mvcc=0 in cf attributes
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.checkScanOrder(StoreScanner.java:672)
> {noformat}
> It shows that the hfile seems to be corrupted -- the keys don't seem to be 
> right. But the scanner is not able to give a meaningful error; instead it gets 
> stuck in an infinite loop here:
> {code}
> KeyValueHeap.generalizedSeek()
> while ((scanner = heap.poll()) != null) {
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12949) Scanner can be stuck in infinite loop if the HFile is corrupted

2015-01-30 Thread Jerry He (JIRA)
Jerry He created HBASE-12949:


 Summary: Scanner can be stuck in infinite loop if the HFile is 
corrupted
 Key: HBASE-12949
 URL: https://issues.apache.org/jira/browse/HBASE-12949
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.10
Reporter: Jerry He


We've encountered a problem where compaction hangs and never completes.
After looking into it further, we found that the compaction scanner was stuck 
in an infinite loop. See the stack below.
{noformat}
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:296)
org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:672)
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:223)
{noformat}

We identified the hfile that seems to be corrupted.  Using HFile tool shows the 
following:
{noformat}
[biadmin@hdtest009 bin]$ hbase org.apache.hadoop.hbase.io.hfile.HFile -v -k -m 
-f 
/user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
15/01/23 11:53:17 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
15/01/23 11:53:18 INFO util.ChecksumType: Checksum using 
org.apache.hadoop.util.PureJavaCrc32
15/01/23 11:53:18 INFO util.ChecksumType: Checksum can use 
org.apache.hadoop.util.PureJavaCrc32C
15/01/23 11:53:18 INFO Configuration.deprecation: fs.default.name is 
deprecated. Instead, use fs.defaultFS
Scanning -> 
/user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
WARNING, previous row is greater then current row
filename -> 
/user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
previous -> 
\x00/20110203-094231205-79442793-1410161293068203000\x0Aattributes16794406\x00\x00\x01\x00\x00\x00\x00\x00\x00
current  ->
Exception in thread "main" java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:489)
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:347)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:856)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:768)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.scanKeysValues(HFilePrettyPrinter.java:362)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:262)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:220)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.main(HFilePrettyPrinter.java:539)
at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:802)
{noformat}

Turning on Java assertions shows the following:
{noformat}
Exception in thread "main" java.lang.AssertionError: Key 
20110203-094231205-79442793-1410161293068203000/attributes:16794406/1099511627776/Minimum/vlen=15/mvcc=0
 followed by a smaller key //0/Minimum/vlen=0/mvcc=0 in cf attributes
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.checkScanOrder(StoreScanner.java:672)
{noformat}

It shows that the hfile seems to be corrupted -- the keys don't seem to be 
right. But the scanner is not able to give a meaningful error; instead it gets 
stuck in an infinite loop here:
{code}
KeyValueHeap.generalizedSeek()
while ((scanner = heap.poll()) != null) {
}
{code}





[jira] [Commented] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-01-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299042#comment-14299042
 ] 

Sean Busbey commented on HBASE-12914:
-

I think I was the one who originally included a fix version on the 1.x line. I 
only did that because I wasn't sure if we were comfortable holding to the 
stronger stability promises after the 0.98 -> 1.0 transition. If we are, that's 
excellent by me.

Are updates to the ref guide in master and the API annotations only in 0.98 
fine by everyone?

[~apurtell], would you prefer this just be updates to java Interfaces?

I originally scoped the title at "features" to be broader than just interfaces 
because I wasn't sure if we had broken compatibility elsewhere. I figured broad 
warnings that end up not having breakage are better than narrow ones that miss 
something.

> Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
> section
> ---
>
> Key: HBASE-12914
> URL: https://issues.apache.org/jira/browse/HBASE-12914
> Project: HBase
>  Issue Type: Bug
>  Components: API, documentation
>Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
>Reporter: Sean Busbey
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12914-branch-1.patch, HBASE-12914.patch
>
>
> There are several features in 0.98 that require enabling HFilev3 support. 
> Some of those features include new extendable components that are marked 
> IA.Public.
> Current practice has been to treat these features as experimental. This has 
> included pushing non-compatible changes to branch-1 as the API got worked out 
> through use in 0.98.
> * Update all of the IA.Public classes involved to make sure they are 
> IS.Unstable in 0.98.
> * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks 
> aware of these changes.





[jira] [Commented] (HBASE-12947) Replicating DDL statements like create from one cluster to another

2015-01-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299030#comment-14299030
 ] 

Andrew Purtell commented on HBASE-12947:


How would you handle conflicting concurrent schema updates? While HBase schemas 
are in many ways simpler than RDBMS schemas, you might find this paper about 
how Google's F1 manages distributed schema change interesting: 
http://db.disi.unitn.eu/pages/VLDBProgram/pdf/industry/p764-rae.pdf . We'd have 
some of the same implementation challenges. Start by reading section 4.2. 

> Replicating DDL statements like create  from one cluster to another
> ---
>
> Key: HBASE-12947
> URL: https://issues.apache.org/jira/browse/HBASE-12947
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Prabhu Joseph
>Priority: Critical
> Fix For: 2.0.0
>
>
> Problem:
>   When tables are created dynamically in an HBase cluster, the Replication 
> feature can't be used, as the new table does not exist in the peer cluster. To 
> use replication, we need to create the same table in the peer cluster as well.
> Having an API to replicate the create table statement at the peer cluster 
> would be helpful in such cases.
> Solution:
> create 'table','cf',replication => true , peerFlag => true
> if peerFlag = true, the table with the column family has to be created at 
> peer
> cluster.
> Special cases, like also enabling replication at the peer cluster for cyclic 
> replication, have to be considered.
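The proposed flag can be illustrated with a pure-Java simulation (a sketch of the intended behavior only; {{peerFlag}} and the peer-side catalog here are the proposal's hypothetical pieces, not existing HBase API — in a real implementation the peer-side step would be a createTable call issued against the peer cluster):

```java
import java.util.HashSet;
import java.util.Set;

// Simulation of the proposal: a create with replication and peerFlag set
// also creates the same table at the peer cluster, if it is missing there.
public class PeerDdlSimulation {
    final Set<String> localTables = new HashSet<>();
    final Set<String> peerTables = new HashSet<>();

    // Returns true if the table was additionally created at the peer.
    boolean createTable(String table, boolean replication, boolean peerFlag) {
        localTables.add(table);
        if (replication && peerFlag && peerTables.add(table)) {
            return true; // here a real implementation would RPC the peer cluster
        }
        return false;
    }
}
```

Making the peer-side create opt-in (false by default) keeps the current behavior for existing users.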





[jira] [Comment Edited] (HBASE-12947) Replicating DDL statements like create from one cluster to another

2015-01-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299020#comment-14299020
 ] 

Andrew Purtell edited comment on HBASE-12947 at 1/30/15 6:36 PM:
-

bq. When tables are created dynamically in an HBase cluster, the Replication 
feature can't be used, as the new table does not exist in the peer cluster. To 
use replication, we need to create the same table in the peer cluster as well.

This has been by design up to now because auto-synchronizing schema updates 
among multiple sites is a challenging problem, and more so in the presence of 
cyclical relationships, as you mention. Aside from technical concerns there are 
also policy considerations. 

I like this proposal in that propagation of schema changes to other sites is 
optional, it must be enabled with a flag. In addition there should be strong 
security and/or configuration barriers to accidental schema propagation 
because, as a multi-site operator, I don't want unauthorized, ill-advised, or 
incorrect (operator error) changes at one site automatically propagating to 
others.


was (Author: apurtell):
bq. When tables are created dynamically in an HBase cluster, the Replication 
feature can't be used, as the new table does not exist in the peer cluster. To 
use replication, we need to create the same table in the peer cluster as well.

This has been by design up to now because auto-synchronizing schema updates 
among multiple sites is a challenging problem, and more so in the presence of 
cyclical relationships, as you mention. Aside from technical concerns there are 
also policy considerations. 

I like this proposal in that propagation of schema changes to other sites is 
optional, it must be enabled with a flag. In addition there should be strong 
security and/or configuration barriers to accidental schema propagation 
because, as a multi-site operator, I don't want unauthorized or ill-advised 
changes at one site automatically propagating to others.

> Replicating DDL statements like create  from one cluster to another
> ---
>
> Key: HBASE-12947
> URL: https://issues.apache.org/jira/browse/HBASE-12947
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Prabhu Joseph
>Priority: Critical
> Fix For: 2.0.0
>
>
> Problem:
>   When tables are created dynamically in an HBase cluster, the Replication 
> feature can't be used, as the new table does not exist in the peer cluster. To 
> use replication, we need to create the same table in the peer cluster as well.
> Having an API to replicate the create table statement at the peer cluster 
> would be helpful in such cases.
> Solution:
> create 'table','cf',replication => true , peerFlag => true
> if peerFlag = true, the table with the column family has to be created at 
> peer
> cluster.
> Special cases, like also enabling replication at the peer cluster for cyclic 
> replication, have to be considered.





[jira] [Commented] (HBASE-12947) Replicating DDL statements like create from one cluster to another

2015-01-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299020#comment-14299020
 ] 

Andrew Purtell commented on HBASE-12947:


bq. When tables are created dynamically in an HBase cluster, the Replication 
feature can't be used, as the new table does not exist in the peer cluster. To 
use replication, we need to create the same table in the peer cluster as well.

This has been by design up to now because auto-synchronizing schema updates 
among multiple sites is a challenging problem, and more so in the presence of 
cyclical relationships, as you mention. Aside from technical concerns there are 
also policy considerations. 

I like this proposal in that propagation of schema changes to other sites is 
optional, it must be enabled with a flag. In addition there should be strong 
security and/or configuration barriers to accidental schema propagation 
because, as a multi-site operator, I don't want unauthorized or ill-advised 
changes at one site automatically propagating to others.

> Replicating DDL statements like create  from one cluster to another
> ---
>
> Key: HBASE-12947
> URL: https://issues.apache.org/jira/browse/HBASE-12947
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Prabhu Joseph
>Priority: Critical
> Fix For: 2.0.0
>
>
> Problem:
>   When tables are created dynamically in an HBase cluster, the Replication 
> feature can't be used, as the new table does not exist in the peer cluster. To 
> use replication, we need to create the same table in the peer cluster as well.
> Having an API to replicate the create table statement at the peer cluster 
> would be helpful in such cases.
> Solution:
> create 'table','cf',replication => true , peerFlag => true
> if peerFlag = true, the table with the column family has to be created at 
> peer
> cluster.
> Special cases, like also enabling replication at the peer cluster for cyclic 
> replication, have to be considered.





[jira] [Commented] (HBASE-12946) Decouple MapReduce pieces from hbase-server

2015-01-30 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298987#comment-14298987
 ] 

Nick Dimiduk commented on HBASE-12946:
--

Now that we have abstractions at the client-api level, I think the limiting 
factor is the MapReduce code for writing HFiles. As [~enis] commented in the 
thread you linked, we should have a separate hbase-storage module that both the 
hbase-server and hbase-mapreduce modules can depend upon.
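In Maven terms, the suggested split could look roughly like the fragment below (a sketch only; the hbase-storage module name and its contents are hypothetical — today the HFile-writing code lives in hbase-server):

```xml
<!-- Hypothetical pom for an extracted hbase-mapreduce module -->
<dependencies>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
  </dependency>
  <dependency>
    <!-- HFile/storage code factored out of hbase-server so that
         MapReduce users no longer pull in the whole server -->
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-storage</artifactId>
  </dependency>
</dependencies>
```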

> Decouple MapReduce pieces from hbase-server
> ---
>
> Key: HBASE-12946
> URL: https://issues.apache.org/jira/browse/HBASE-12946
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Francke
>
> I could not find an existing issue for this.
> This has come up multiple times publicly:
> * http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/41916
> * http://gbif.blogspot.de/2014/11/upgrading-our-cluster-from-cdh4-to-cdh5.html
> It'd be great if we could either move the mapreduce pieces to hbase-client or 
> get a hbase-mapreduce component. Some things seem easy to decouple, others 
> not so much.





[jira] [Comment Edited] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-01-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298892#comment-14298892
 ] 

Andrew Purtell edited comment on HBASE-12914 at 1/30/15 5:17 PM:
-

No, this issue is about marking some interfaces in 0.98 -- it's right there in 
the subject -- unstable, because we said HFileV3 and all features depending on 
it are experimental for the lifetime of the release line. We should not be 
marking them unstable in 1.0, since HFileV3 is no longer declared experimental. 
It's a separate issue if you want to look at the features individually. I think 
a branch-1 patch with unstable tags is incorrect in the scope of this issue and 
with respect to the status of HFile v3 in 1.0 (if it's not stable, why is it 
the default??) 


was (Author: apurtell):
No, this issue is about marking classes in 0.98 -- it's right there in the 
subject -- that are unstable because we said HFileV3 and all features depending 
on it are experimental for the lifetime of the release line. We should not be 
marking them unstable in 1.0, since HFileV3 is no longer declared experimental. 
It's a separate issue if you want to look at the features individually. I think 
a branch-1 patch with unstable tags is incorrect in the scope of this issue and 
with respect to the status of HFile v3 in 1.0 (if it's not stable, why is it 
the default??) 

> Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
> section
> ---
>
> Key: HBASE-12914
> URL: https://issues.apache.org/jira/browse/HBASE-12914
> Project: HBase
>  Issue Type: Bug
>  Components: API, documentation
>Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
>Reporter: Sean Busbey
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12914-branch-1.patch, HBASE-12914.patch
>
>
> There are several features in 0.98 that require enabling HFilev3 support. 
> Some of those features include new extendable components that are marked 
> IA.Public.
> Current practice has been to treat these features as experimental. This has 
> included pushing non-compatible changes to branch-1 as the API got worked out 
> through use in 0.98.
> * Update all of the IA.Public classes involved to make sure they are 
> IS.Unstable in 0.98.
> * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks 
> aware of these changes.





[jira] [Commented] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-01-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298897#comment-14298897
 ] 

Andrew Purtell commented on HBASE-12914:


Let me formally -1 the branch 1 patch.

> Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
> section
> ---
>
> Key: HBASE-12914
> URL: https://issues.apache.org/jira/browse/HBASE-12914
> Project: HBase
>  Issue Type: Bug
>  Components: API, documentation
>Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
>Reporter: Sean Busbey
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12914-branch-1.patch, HBASE-12914.patch
>
>
> There are several features in 0.98 that require enabling HFilev3 support. 
> Some of those features include new extendable components that are marked 
> IA.Public.
> Current practice has been to treat these features as experimental. This has 
> included pushing non-compatible changes to branch-1 as the API got worked out 
> through use in 0.98.
> * Update all of the IA.Public classes involved to make sure they are 
> IS.Unstable in 0.98.
> * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks 
> aware of these changes.





[jira] [Commented] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-01-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298896#comment-14298896
 ] 

Andrew Purtell commented on HBASE-12914:


The *only* reason any of the mentioned interfaces are "unstable", relative to 
any other change we have made such as region replicas, API refactorings, load 
pushback, etc., is that when HFile V3 was introduced we explicitly said, out of 
an abundance of caution, that it would be experimental for the duration of the 
0.98 release. The discussion above has gotten way out of hand in my opinion.

> Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
> section
> ---
>
> Key: HBASE-12914
> URL: https://issues.apache.org/jira/browse/HBASE-12914
> Project: HBase
>  Issue Type: Bug
>  Components: API, documentation
>Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
>Reporter: Sean Busbey
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12914-branch-1.patch, HBASE-12914.patch
>
>
> There are several features in 0.98 that require enabling HFilev3 support. 
> Some of those features include new extendable components that are marked 
> IA.Public.
> Current practice has been to treat these features as experimental. This has 
> included pushing non-compatible changes to branch-1 as the API got worked out 
> through use in 0.98.
> * Update all of the IA.Public classes involved to make sure they are 
> IS.Unstable in 0.98.
> * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks 
> aware of these changes.





[jira] [Commented] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-01-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298892#comment-14298892
 ] 

Andrew Purtell commented on HBASE-12914:


No, this issue is about marking classes in 0.98 -- it's right there in the 
subject -- that are unstable because we said HFileV3 and all features depending 
on it are experimental for the lifetime of the release line. We should not be 
marking them unstable in 1.0, since HFileV3 is no longer declared experimental. 
It's a separate issue if you want to look at the features individually. I think 
a branch-1 patch with unstable tags is incorrect in the scope of this issue and 
with respect to the status of HFile v3 in 1.0 (if it's not stable, why is it 
the default??) 

> Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
> section
> ---
>
> Key: HBASE-12914
> URL: https://issues.apache.org/jira/browse/HBASE-12914
> Project: HBase
>  Issue Type: Bug
>  Components: API, documentation
>Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
>Reporter: Sean Busbey
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12914-branch-1.patch, HBASE-12914.patch
>
>
> There are several features in 0.98 that require enabling HFilev3 support. 
> Some of those features include new extendable components that are marked 
> IA.Public.
> Current practice has been to treat these features as experimental. This has 
> included pushing non-compatible changes to branch-1 as the API got worked out 
> through use in 0.98.
> * Update all of the IA.Public classes involved to make sure they are 
> IS.Unstable in 0.98.
> * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks 
> aware of these changes.





[jira] [Commented] (HBASE-12808) Use Java API Compliance Checker for binary/source compatibility

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298871#comment-14298871
 ] 

stack commented on HBASE-12808:
---

This is excellent.

> Use Java API Compliance Checker for binary/source compatibility
> ---
>
> Key: HBASE-12808
> URL: https://issues.apache.org/jira/browse/HBASE-12808
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Dima Spivak
>Assignee: Dima Spivak
> Fix For: 2.0.0
>
> Attachments: 0.98.9_branch-1.0_compat_report.html, 
> HBASE-12808_v1.patch, HBASE-12808_v2.patch, HBASE-12808_v3.patch, 
> HBASE-12808_v4.patch, HBASE-12808_v5.patch
>
>
> Following [~busbey]'s suggestion in HBASE-12556, I've spent some time playing 
> with the [Java API Compliance 
> Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] 
> and think it would be a great addition to /dev-support. I propose that we use 
> it to replace the JDiff wrappers we currently have there (since it does what 
> JDiff does and more), and look into putting up automation at 
> builds.apache.org to run the tool regularly (e.g. latest release of a 
> particular branch vs. latest commit of that same branch).





[jira] [Updated] (HBASE-12808) Use Java API Compliance Checker for binary/source compatibility

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12808:
--
Release Note: Adds a dev-support/check_compatibility.sh script for 
comparing versions. Run the script to see usage.

> Use Java API Compliance Checker for binary/source compatibility
> ---
>
> Key: HBASE-12808
> URL: https://issues.apache.org/jira/browse/HBASE-12808
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Dima Spivak
>Assignee: Dima Spivak
> Fix For: 2.0.0
>
> Attachments: 0.98.9_branch-1.0_compat_report.html, 
> HBASE-12808_v1.patch, HBASE-12808_v2.patch, HBASE-12808_v3.patch, 
> HBASE-12808_v4.patch, HBASE-12808_v5.patch
>
>
> Following [~busbey]'s suggestion in HBASE-12556, I've spent some time playing 
> with the [Java API Compliance 
> Checker|http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker] 
> and think it would be a great addition to /dev-support. I propose that we use 
> it to replace the JDiff wrappers we currently have there (since it does what 
> JDiff does and more), and look into putting up automation at 
> builds.apache.org to run the tool regularly (e.g. latest release of a 
> particular branch vs. latest commit of that same branch).





[jira] [Updated] (HBASE-11819) Unit test for CoprocessorHConnection

2015-01-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11819:
--
Attachment: HBASE-11819v5-master (1).patch

> Unit test for CoprocessorHConnection 
> -
>
> Key: HBASE-11819
> URL: https://issues.apache.org/jira/browse/HBASE-11819
> Project: HBase
>  Issue Type: Test
>Reporter: Andrew Purtell
>Assignee: Talat UYARER
>Priority: Minor
>  Labels: newbie++
> Fix For: 2.0.0, 1.1.0, 0.98.11
>
> Attachments: HBASE-11819v4-master.patch, HBASE-11819v5-master 
> (1).patch, HBASE-11819v5-master.patch, HBASE-11819v5-master.patch, 
> HBASE-11819v5-v0.98.patch, HBASE-11819v5-v1.0.patch
>
>
> Add a unit test to hbase-server that exercises CoprocessorHConnection . 





[jira] [Commented] (HBASE-12946) Decouple MapReduce pieces from hbase-server

2015-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298841#comment-14298841
 ] 

stack commented on HBASE-12946:
---

Yes. We've talked of doing this for a long time.  I was hoping the new API 
would make this effort easier to do.

> Decouple MapReduce pieces from hbase-server
> ---
>
> Key: HBASE-12946
> URL: https://issues.apache.org/jira/browse/HBASE-12946
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Francke
>
> I could not find an existing issue for this.
> This has come up multiple times publicly:
> * http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/41916
> * http://gbif.blogspot.de/2014/11/upgrading-our-cluster-from-cdh4-to-cdh5.html
> It'd be great if we could either move the mapreduce pieces to hbase-client or 
> get a hbase-mapreduce component. Some things seem easy to decouple, others 
> not so much.





[jira] [Commented] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-01-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298830#comment-14298830
 ] 

Sean Busbey commented on HBASE-12914:
-

Few more:

* org.apache.hadoop.hbase.security.visibility.* (i.e. Authorizations, 
CellVisibility, VisibilityClient, etc)
* org.apache.hadoop.hbase.mapreduce.CellCreator VISIBILITY_EXP_RESOLVER_CLASS
* org.apache.hadoop.hbase.mapreduce.CellCreator the methods that take a 
visibility expression, and getVisibilityExpressionResolver
* org.apache.hadoop.hbase.mapreduce.ImportTsv CELL_VISIBILITY_COLUMN_SPEC, 
CELL_TTL_COLUMN_SPEC, DEFAULT_CELL_VISIBILITY_COLUMN_INDEX, 
DEFAULT_CELL_TTL_COLUMN_INDEX
* org.apache.hadoop.hbase.mapreduce.ImportTsv all of the public methods related 
to per-cell ttl or visibility
* org.apache.hadoop.hbase.mapreduce.TsvImporterMapper members 
cellVisibilityExpr and ttl


on branch-1 (but not an issue on 0.98)
* We should annotate the per-cell methods that are unstable in sub-classes of 
Mutation as well, to remove ambiguity.
* For the same reason, Scan needs the same things annotated as Query

I'm not clear on whether we're considering APIs related to Cell Tags themselves 
unstable. If so:
* HColumnDescriptor methods about how tags should be handled
* Cell methods related to tags
* CellUtil methods related to tags
* org.apache.hadoop.hbase.mapreduce.CellCreator any of the methods that use Tag
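For readers unfamiliar with the mechanism being discussed: marking a class 
Unstable just means adding the stability annotation alongside the audience one. 
A minimal, self-contained sketch follows; it uses a hypothetical stand-in 
annotation, since the real InterfaceAudience/InterfaceStability classes live in 
an HBase/Hadoop classification package whose name varies by branch.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for HBase's InterfaceStability.Unstable marker (hypothetical here;
// the real annotation class differs across 0.98, branch-1, and master).
@Retention(RetentionPolicy.RUNTIME)
@interface Unstable {}

// Example of a public-facing class flagged as Unstable, in the spirit of the
// visibility-labels and cell-tag APIs listed above.
@Unstable
class CellVisibilityExample {}

public class Main {
    public static void main(String[] args) {
        // Tools (and reviewers) can detect the marker reflectively.
        System.out.println(
            CellVisibilityExample.class.isAnnotationPresent(Unstable.class));
        // prints "true"
    }
}
```

Because the annotation is retained at runtime, audit tooling can walk the 
public classes and flag any that carry the marker.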

> Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
> section
> ---
>
> Key: HBASE-12914
> URL: https://issues.apache.org/jira/browse/HBASE-12914
> Project: HBase
>  Issue Type: Bug
>  Components: API, documentation
>Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
>Reporter: Sean Busbey
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12914-branch-1.patch, HBASE-12914.patch
>
>
> There are several features in 0.98 that require enabling HFilev3 support. 
> Some of those features include new extendable components that are marked 
> IA.Public.
> Current practice has been to treat these features as experimental. This has 
> included pushing non-compatible changes to branch-1 as the API got worked out 
> through use in 0.98.
> * Update all of the IA.Public classes involved to make sure they are 
> IS.Unstable in 0.98.
> * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks 
> aware of these changes.





[jira] [Commented] (HBASE-12914) Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade section

2015-01-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298797#comment-14298797
 ] 

Sean Busbey commented on HBASE-12914:
-

I presume anything in the shell is covered under our "Operational 
Compatibility" section? That would mean we don't need to mark any of it, since 
it has the same limitations as Public Unstable.

This is getting big enough that a reviewboard would help (at least for me).

* please do documentation updates on master branch; we use that to generate 
documentation for all versions.

Things that also need to be marked Unstable:
* HColumnDescriptor ENCRYPTION and ENCRYPTION_KEY
* HConstants CRYPTO_*
* org.apache.hadoop.hbase.io.crypto.*
* org.apache.hadoop.hbase.util.EncryptionTest

Should we be marking the protobuf changes as well?

* Encryption.proto
* HFile.proto trailer field encryption_key


> Mark public features that require HFilev3 Unstable in 0.98, warn in upgrade 
> section
> ---
>
> Key: HBASE-12914
> URL: https://issues.apache.org/jira/browse/HBASE-12914
> Project: HBase
>  Issue Type: Bug
>  Components: API, documentation
>Affects Versions: 0.98.6, 0.98.7, 0.98.8, 0.98.9
>Reporter: Sean Busbey
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12914-branch-1.patch, HBASE-12914.patch
>
>
> There are several features in 0.98 that require enabling HFilev3 support. 
> Some of those features include new extendable components that are marked 
> IA.Public.
> Current practice has been to treat these features as experimental. This has 
> included pushing non-compatible changes to branch-1 as the API got worked out 
> through use in 0.98.
> * Update all of the IA.Public classes involved to make sure they are 
> IS.Unstable in 0.98.
> * Update the ref guide section on upgrading from 0.98 -> 1.0 to make folks 
> aware of these changes.





[jira] [Commented] (HBASE-11164) Document and test rolling updates from 0.98 -> 1.0

2015-01-30 Thread suraj misra (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298728#comment-14298728
 ] 

suraj misra commented on HBASE-11164:
-

OK. Just for information, in case someone wants to go back to an older version 
for some reason: after I changed the hbase and zookeeper directories in 
hbase-site.xml, I was able to start the older version as well.
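The exact property names changed aren't spelled out in the comment; a 
hypothetical hbase-site.xml fragment along these lines (pointing the older 
version at its own root directory and ZooKeeper data directory, so the two 
versions don't share state) is one way to read it:

```xml
<!-- Hypothetical sketch: give the older HBase separate storage paths.
     The paths below are illustrative only. -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode:8020/hbase-0.98</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/var/lib/zookeeper-0.98</value>
</property>
```

Note this sidesteps, rather than performs, a real downgrade: the old version 
starts against fresh state instead of the upgraded data.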

> Document and test rolling updates from 0.98 -> 1.0
> --
>
> Key: HBASE-11164
> URL: https://issues.apache.org/jira/browse/HBASE-11164
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: stack
>Priority: Critical
> Fix For: 0.99.2
>
>
> I think 1.0 should be rolling upgradable from 0.98 unless we break it 
> intentionally for a specific reason. Unless there is such an issue, lets 
> document that 1.0 and 0.98 should be rolling upgrade compatible. 
> We should also test this before the 0.99 release. 




