[ https://issues.apache.org/jira/browse/HBASE-5885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263288#comment-13263288 ]

Elliott Clark commented on HBASE-5885:
--------------------------------------

Stack and I were looking into this; I'll pick it back up first thing tomorrow 
morning.  Brain dump below so that I remember it all in the morning.

Running a standalone instance with hbase.regionserver.checksum.verify set to true 
produces an error when trying to read.
Running a standalone instance on the same data with 
hbase.regionserver.checksum.verify set to false does not hit the error, and all 
the data looks intact.

The first read through an HFileReader seems to be fine no matter what.  It's 
the second read that seems to be erroring out. 

Looking at the data files, they all start with 'DATABLK*' and there are several of 
those headers in the files. So either too much data is being written per block, 
or not enough is being read.
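A quick way to repeat that check is to count how many times the 'DATABLK*' magic 
shows up in a local store file. This is just a byte scan over the raw file, not 
real HFile parsing, and the path argument is a placeholder:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CountBlockMagic {
  public static void main(String[] args) throws IOException {
    // ASCII bytes of the data-block header magic observed in the files.
    byte[] magic = "DATABLK*".getBytes("US-ASCII");
    byte[] data = Files.readAllBytes(Paths.get(args[0]));
    int count = 0;
    for (int i = 0; i + magic.length <= data.length; i++) {
      boolean hit = true;
      for (int j = 0; j < magic.length; j++) {
        if (data[i + j] != magic[j]) { hit = false; break; }
      }
      if (hit) count++;
    }
    System.out.println("DATABLK* headers found: " + count);
  }
}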
                
> Invalid HFile block magic on Local file System
> ----------------------------------------------
>
>                 Key: HBASE-5885
>                 URL: https://issues.apache.org/jira/browse/HBASE-5885
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.94.0, 0.96.0
>            Reporter: Elliott Clark
>            Priority: Blocker
>             Fix For: 0.94.0
>
>
> ERROR: java.lang.RuntimeException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=7, exceptions:
> Thu Apr 26 11:19:18 PDT 2012, 
> org.apache.hadoop.hbase.client.ScannerCallable@190a621a, java.io.IOException: 
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner for 
> reader 
> reader=file:/tmp/hbase-eclark/hbase/TestTable/e2d1c846363c75262cbfd85ea278b342/info/bae2681d63734066957b58fe791a0268,
>  compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=0000000001/info:data/1335463981520/Put, 
> lastKey=0002588100/info:data/1335463902296/Put, avgKeyLen=30, 
> avgValueLen=1000, entries=1215085, length=1264354417, 
> cur=0000000248/info:data/1335463994457/Put/vlen=1000/ts=0]
>       at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:135)
>       at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:95)
>       at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:368)
>       at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:127)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3323)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3279)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3296)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2393)
>       at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>       at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
> Caused by: java.io.IOException: Invalid HFile block magic: 
> \xEC\xD5\x9D\xB4\xC2bfo
>       at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:153)
>       at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:164)
>       at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:254)
>       at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1779)
>       at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1637)
>       at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:327)
>       at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:555)
>       at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:651)
>       at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:130)
>       ... 12 more
> Thu Apr 26 11:19:19 PDT 2012, 
> org.apache.hadoop.hbase.client.ScannerCallable@190a621a, java.io.IOException: 
> java.io.IOException: java.lang.IllegalArgumentException
>       at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1132)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1121)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2420)
>       at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>       at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
> Caused by: java.lang.IllegalArgumentException
>       at java.nio.Buffer.position(Buffer.java:216)
>       at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:630)
>       at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:130)
>       at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:95)
>       at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:406)
>       at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:127)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3323)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3279)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3296)
>       at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2393)
>       ... 5 more
> On the latest 0.94 branch I spun up a new standalone HBase. Then I started a 
> performance evaluation run: hbase/bin/hbase 
> org.apache.hadoop.hbase.PerformanceEvaluation --nomapred randomWrite 10
> After that completed I tried a scan of TestTable. The scan got the above 
> error.
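
A minimal client-side sketch of the failing step (assuming the standard 0.94 Java 
client API; TestTable and the error are as described in the report above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanTestTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "TestTable");
    ResultScanner scanner = table.getScanner(new Scan());
    long rows = 0;
    try {
      for (Result r : scanner) {
        rows++; // the RetriesExhaustedException above surfaces during this loop
      }
    } finally {
      scanner.close();
      table.close();
    }
    System.out.println("rows scanned: " + rows);
  }
}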

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
