Hi,


There are no inconsistencies in the hbck output and no corrupt blocks in the 
fsck output.
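
For reference, the checks were roughly the following (/hbasedata is our HBase 
root directory on HDFS, taken from the store file path in the stack trace 
below):

hbase hbck
hdfs fsck /hbasedata -files -blocks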



But we still get the same exception (after receiving some results) when 
scanning for rows in the affected regions.
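
For example, a shell scan along these lines returns a few rows and then fails 
with the same exception (the table name 'Test:Test', family 'hb' and start row 
are inferred from the store file path and keys in the stack trace below):

scan 'Test:Test', {STARTROW => '10259783_1010157000000008129', COLUMNS => ['hb'], LIMIT => 100}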



Thanks.

Meeran



---- On Fri, 10 Jul 2020 17:45:13 +0530 Viraj Jasani <vjas...@apache.org> wrote ----


Hi Meeran, 
 
The blockheaders output shows a NegativeArraySizeException while reading a 
block. Did you try scanning the table, or a specific rowkey range from the 
region? Did that work? 
Also, since you were able to upgrade the cluster to 2.2.4, I am assuming all 
services are healthy, but can you confirm there are no inconsistencies, using 
the hbck and fsck commands for HBase and HDFS respectively? 
 
 
On 2020/07/08 07:44:36, Meeran <meeran.gladiat...@gmail.com> wrote: 
> Hi Sean, 
> 
> 
> 
> We upgraded the cluster to the latest stable version, HBase-2.2.4, but we are 
> still facing the issue. Any help on this, please? 
> 
> 
> 
> Thanks, 
> 
> Meeran 
> 
> 
> 
> ---- On Mon, 06 Jul 2020 14:24:16 +0530 test gmail test <meeran.gladiat...@gmail.com> wrote ---- 
> 
> 
> Hi Sean, 
> 
> 
> 
> printblocks output - https://pastebin.com/EYUpi6LL 
> 
> blockheaders output - https://pastebin.com/TJBqgwsp 
> 
> 
> 
> We are yet to test it on HBase-2.2. Will upgrade the cluster and let you 
> know. Thanks for the help. 
> 
> 
> Regards, 
> 
> Meeran 
> 
> 
> 
> 
> 
> ---- On Sat, 04 Jul 2020 05:26:46 +0530 Sean Busbey <bus...@apache.org> wrote ---- 
> 
> 
> File attachments won't work on the mailing list. Can you put the files on 
> some hosting service? 
> 
> Can you reproduce the problem on hbase 2.2? HBase 2.1 has been EOM since 
> May. 
> 
> 
> On Fri, Jul 3, 2020, 18:20 Mohamed Meeran <meeran.gladiat...@gmail.com> wrote: 
> 
> > Hi, 
> > 
> > We are using HBase-2.1.9 (Hadoop-3.1.3) in our setup. In the logs, we see 
> > that major compaction failed for some of the regions, with the following 
> > errors: 
> > 
> > Caused by: java.io.IOException: Could not iterate StoreFileScanner[HFileScanner for reader reader=hdfs://TestCluster/hbasedata/data/Test/Test/6472f3839fc9b0a1d4b64e182043bc52/hb/2ec37243628b4a03ae3d937da4c27081, compression=none, cacheConf=blockCache=LruBlockCache{blockCount=332, currentSize=485.88 MB, freeSize=333.32 MB, maxSize=819.20 MB, heapSize=485.88 MB, minSize=778.24 MB, minFactor=0.95, multiSize=389.12 MB, multiFactor=0.5, singleSize=194.56 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, firstKey=Optional[10259783_1010157000000008129/hb:B/1490097103780/Put/seqid=0], lastKey=Optional[10260211_1009658000000470017/hb:H/1490097295354/Put/seqid=0], avgKeyLen=43, avgValueLen=213357, entries=10134, length=2163318554, cur=10259783_1010157000000008851/hb:B/1490097148981/Put/vlen=16591695/seqid=0] 
> >   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:217) 
> >   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:120) 
> >   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:654) 
> >   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153) 
> >   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6593) 
> >   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6757) 
> >   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6527) 
> >   at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3158) 
> >   at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3407) 
> >   ... 5 more 
> > Caused by: java.io.IOException: Invalid onDisksize=-969694035: expected to be at least 33 and at most 2147483647, or -1 
> >   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.checkAndGetSizeAsInt(HFileBlock.java:1673) 
> >   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1746) 
> >   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1610) 
> >   at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1496) 
> >   at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readNextDataBlock(HFileReaderImpl.java:931) 
> >   at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.isNextBlock(HFileReaderImpl.java:1064) 
> >   at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.positionForNextBlock(HFileReaderImpl.java:1058) 
> >   at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl._next(HFileReaderImpl.java:1076) 
> >   at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.next(HFileReaderImpl.java:1097) 
> >   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:208) 
> >   ... 13 more 
> > 
> > We analysed a file using the hfile tool. Attaching the output for 
> > printblocks and printblockheaders. 
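> > 
> > For reference, the outputs came from invocations roughly like these (the 
> > file path is the one from the stack trace above): 
> > 
> > hbase hfile --printblocks -f hdfs://TestCluster/hbasedata/data/Test/Test/6472f3839fc9b0a1d4b64e182043bc52/hb/2ec37243628b4a03ae3d937da4c27081 
> > hbase hfile --printblockheaders -f hdfs://TestCluster/hbasedata/data/Test/Test/6472f3839fc9b0a1d4b64e182043bc52/hb/2ec37243628b4a03ae3d937da4c27081 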
> > 
> > Any help to fix this would be greatly appreciated. 
> > -- 
> > Thanks in advance, 
> >      Meeran 
> >
