Re: Could not iterate StoreFileScanner - during compaction

2020-07-13 Thread Meeran
Hi,



There are no inconsistencies in hbck output and no corrupt blocks in fsck 
output.



But we still get the same exception (after some results are returned) when 
scanning rows in the affected regions. 



Thanks.

Meeran



 On Fri, 10 Jul 2020 17:45:13 +0530 Viraj Jasani wrote: 

[quoted message trimmed; it appears in full elsewhere in this digest]

Re: Could not iterate StoreFileScanner - during compaction

2020-07-10 Thread Viraj Jasani
Hi Meeran,

The BlockHeaders output shows a NegativeArraySizeException while reading a block.
Did you try scanning the table, or a specific rowkey range within that region?
Does that work?
Also, since you were able to upgrade the cluster to 2.2.4, I assume all
services are healthy, but could you confirm there are no inconsistencies by
running hbck for HBase and fsck for HDFS?
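For reference, the checks suggested above can be run roughly as follows. This is a sketch, not verified against a live cluster: the `/hbasedata` path is taken from the reader path in the stack trace, the `Test:Test` table name is inferred from that same path, and the row keys are placeholders.

```shell
# Read-only consistency report for HBase (on HBase 2.x, repairs are done
# with the separate HBCK2 tool, but this still prints an inconsistency
# summary).
hbase hbck -details

# HDFS block-level health check of the HBase root directory; lists
# corrupt or missing blocks, if any.
hdfs fsck /hbasedata -files -blocks -locations

# Probe the affected region's key range from the HBase shell
# (replace the table name and row keys with your own):
echo "scan 'Test:Test', {STARTROW => 'startkey', STOPROW => 'stopkey'}" | hbase shell
```

If the scan fails only inside a particular key range, that narrows the corruption down to specific blocks of one store file.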


On 2020/07/08 07:44:36, Meeran wrote: 
> [quoted message trimmed; it appears in full elsewhere in this digest]

Re: Could not iterate StoreFileScanner - during compaction

2020-07-08 Thread Meeran
Hi Sean,



We upgraded the cluster to the latest stable version, HBase-2.2.4. We are still 
facing the issue. Any help on this, please?



Thanks,

Meeran



 On Mon, 06 Jul 2020 14:24:16 +0530 test gmail test wrote: 

[quoted message trimmed; it appears in full elsewhere in this digest]

Re: Could not iterate StoreFileScanner - during compaction

2020-07-06 Thread Meeran
Hi,



We did not face this issue on our previous version, HBase-1.4.x 
(Hadoop-2.7.3). We recently upgraded our cluster to HBase-2.1.9 (Hadoop-3.1.3) 
and enabled the erasure coding policy XOR-2-1-1024k for testing purposes.



We hit the issue described in the following JIRA when one of the datanodes 
became unreachable:



https://issues.apache.org/jira/browse/HDFS-14175



We applied the patch and fixed it. 



I suspect we have been facing this compaction issue since that incident. 

Also, the hbck2 filesystem report looks fine. 
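Since erasure coding is new in this setup, it is worth confirming which EC policy actually covers the HBase directory. These are standard `hdfs ec` subcommands; the `/hbasedata` path is from our setup and may differ in yours:

```shell
# Show the erasure coding policy applied to the HBase data directory, if any.
hdfs ec -getPolicy -path /hbasedata

# List all EC policies enabled on the cluster.
hdfs ec -listPolicies
```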



Regards,

Meeran.








 On Sat, 04 Jul 2020 09:02:06 +0530 zheng wang <18031...@qq.com> wrote: 

[quoted message trimmed; it appears in full elsewhere in this digest]

Re: Could not iterate StoreFileScanner - during compaction

2020-07-06 Thread test gmail test
Hi Sean,



printblocks output - https://pastebin.com/EYUpi6LL

blockheaders output - https://pastebin.com/TJBqgwsp



We are yet to test it on HBase-2.2. Will upgrade the cluster and let you know. 
Thanks for the help.
 

Regards,

Meeran




 On Sat, 04 Jul 2020 05:26:46 +0530 Sean Busbey wrote: 

[quoted message trimmed; it appears in full elsewhere in this digest]

Re: Could not iterate StoreFileScanner - during compaction

2020-07-03 Thread zheng wang
Hi,
"cur=10259783_10101578851/hb:B/1490097148981/Put/vlen=16591695"
"Invalid onDisksize=-969694035: expected to be at least 33 and at most 
2147483647, or -1"


I guess there is a very big cell that causes the block size to exceed 
Integer.MAX_VALUE, leading to an integer overflow.
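The arithmetic behind that guess: HBase stores a block's on-disk size in a 4-byte Java int, so a size above Integer.MAX_VALUE wraps negative when truncated to 32 bits. The sketch below uses a hypothetical true size of 3,325,273,261 bytes, chosen because it truncates to exactly the -969694035 seen in the error; the actual oversized block, if that is the cause, could be any value that wraps to that number modulo 2^32.

```java
public class OnDiskSizeOverflow {
    public static void main(String[] args) {
        // Hypothetical true on-disk block size, just over Integer.MAX_VALUE
        // (2,147,483,647). Not taken from the thread; chosen so that the
        // 32-bit truncation reproduces the reported value.
        long trueOnDiskSize = 3_325_273_261L;

        // Truncating to a 32-bit int wraps around: value - 2^32.
        int truncated = (int) trueOnDiskSize;

        System.out.println(truncated);  // -969694035, as in the exception
    }
}
```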





---- Original Message ----
From: "Mohamed Meeran"

Re: Could not iterate StoreFileScanner - during compaction

2020-07-03 Thread Sean Busbey
File attachments won't work on the mailing list. Can you put the files on
some hosting service?

Can you reproduce the problem on hbase 2.2? HBase 2.1 has been EOM since
May.


On Fri, Jul 3, 2020, 18:20 Mohamed Meeran wrote:

> [quoted message trimmed; the original post appears in full elsewhere in this digest]


Could not iterate StoreFileScanner - during compaction

2020-07-03 Thread Mohamed Meeran
Hi,

We are using HBase-2.1.9 (Hadoop-3.1.3) in our setup. In the logs, we see
major compaction failed for some of the regions with the following error
logs.

Caused by: java.io.IOException: Could not iterate
StoreFileScanner[HFileScanner for reader
reader=hdfs://TestCluster/hbasedata/data/Test/Test/6472f3839fc9b0a1d4b64e182043bc52/hb/2ec37243628b4a03ae3d937da4c27081,
compression=none, cacheConf=blockCache=LruBlockCache{blockCount=332,
currentSize=485.88 MB, freeSize=333.32 MB, maxSize=819.20 MB,
heapSize=485.88 MB, minSize=778.24 MB, minFactor=0.95, multiSize=389.12 MB,
multiFactor=0.5, singleSize=194.56 MB, singleFactor=0.25},
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false,
cacheBloomsOnWrite=false, cacheEvictOnClose=false,
cacheDataCompressed=false, prefetchOnOpen=false,
firstKey=Optional[10259783_10101578129/hb:B/1490097103780/Put/seqid=0],
lastKey=Optional[10260211_100965800470017/hb:H/1490097295354/Put/seqid=0],
avgKeyLen=43, avgValueLen=213357, entries=10134, length=2163318554,
cur=10259783_10101578851/hb:B/1490097148981/Put/vlen=16591695/seqid=0]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:217)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:120)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:654)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6593)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6757)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6527)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3158)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3407)
    ... 5 more
Caused by: java.io.IOException: Invalid onDisksize=-969694035: expected to
be at least 33 and at most 2147483647, or -1
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.checkAndGetSizeAsInt(HFileBlock.java:1673)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1746)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1610)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1496)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readNextDataBlock(HFileReaderImpl.java:931)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.isNextBlock(HFileReaderImpl.java:1064)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.positionForNextBlock(HFileReaderImpl.java:1058)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl._next(HFileReaderImpl.java:1076)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.next(HFileReaderImpl.java:1097)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:208)
    ... 13 more

We analysed a file using the hfile tool. Attaching the output for
printblocks and printblockheaders.
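For anyone following along, the HFile tool invocations that produce those outputs look roughly like this. The file path is the one from the stack trace; the flag spellings follow the `hbase hfile` pretty-printer and should be double-checked against `hbase hfile --help` on your version:

```shell
HFILE=hdfs://TestCluster/hbasedata/data/Test/Test/6472f3839fc9b0a1d4b64e182043bc52/hb/2ec37243628b4a03ae3d937da4c27081

# Per-block metadata for the suspect HFile.
hbase hfile --printblocks -f "$HFILE"

# Block headers as read off disk (where the bad onDisksize shows up).
hbase hfile --printblockheaders -f "$HFILE"
```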

Any help to fix this would be greatly appreciated.
-- 
Thanks in advance,
 Meeran