bq. Caused by: java.io.IOException: Invalid HFile block magic:
\x00\x00\x00\x00\x00\x00\x00\x00

Looks like you have some corrupted HFile(s) in your cluster - which should
be fixed first.

Which HBase release are you using?
Do you use data block encoding?

You can use http://hbase.apache.org/book.html#_hfile_tool to do some
investigation.
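
For example, something like the following (flags per the HFile tool section
of the book; the path placeholder stands for the HFile named in the stack
trace below) should print the file's metadata and fail loudly when it
reaches the corrupt block:

  hbase org.apache.hadoop.hbase.io.hfile.HFile -v -m -f <path-to-hfile>

-v gives verbose output and -m prints the file's meta data; add -p if you
also want the key/values dumped.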

Cheers

On Mon, May 18, 2015 at 6:19 PM, Fang, Mike <chuf...@paypal.com> wrote:

>  Hi Ted,
>
>
>
> Thanks for your information.
>
> My application queries HBase, and some of the queries just hang and then
> throw an exception after several minutes (5-8 minutes). As a workaround, I
> tried setting the timeouts to a shorter value, so my app would hang for a
> few seconds instead of minutes. I set both timeouts to 1000 (1s), but it
> still hangs for several minutes. Is this expected?
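>
> For reference, these client-side settings are typically applied to the
> Configuration that the connection is built from - a minimal sketch with
> illustrative values, assuming a 1.0+ Java client (0.98 would use
> HConnectionManager/HTableInterface instead); the exact setup in this
> application isn't shown in the thread:
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.hbase.HBaseConfiguration;
>   import org.apache.hadoop.hbase.TableName;
>   import org.apache.hadoop.hbase.client.*;
>   import org.apache.hadoop.hbase.util.Bytes;
>
>   Configuration conf = HBaseConfiguration.create();
>   conf.setInt("hbase.rpc.timeout", 1000);              // cap for a single RPC
>   conf.setInt("hbase.client.operation.timeout", 1000); // cap for the whole Get, retries included
>   conf.setInt("hbase.client.retries.number", 2);       // give up on a bad region sooner
>   // scans are additionally bounded by hbase.client.scanner.timeout.period
>
>   Connection connection = ConnectionFactory.createConnection(conf);
>   Table table = connection.getTable(TableName.valueOf("mytable"));
>   Result result = table.get(new Get(Bytes.toBytes("somerow")));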
>
>
>
> I'd also appreciate any suggestions on how to fix the exception below.
>
>
>
> Caused by:
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException):
> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
> reader
> reader=hdfs://xxx/hbase/data/data/default/xxx/af7898973c510425fabb7c814ac8ba04/EOUT_T_SRD/125acceb75d84724a089701c590a4d3d,
> compression=snappy, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=addrv#34005240#US,_28409,_822|addre/F|rval#null|cust#1158923121468951849|addre#1095283883|1/EOUT_T_SRD:~/1430982000000/Put,
> lastKey=addrv#38035AC7#US,_60449,_4684|addre/F|rval#null|cust#1335211720509289817|addre#697997140|1/EOUT_T_SRD:~/1430982000000/Put,
> avgKeyLen=122, avgValueLen=187, entries=105492830, length=6880313695,
> cur=null] to key addrv#34B97AEC#FR,_06110,_41 route des
> breguieres|addre/F|rval#/EOUT_T_SRD:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/mvcc=0
>
>         at
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:165)
>
>         at
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:317)
>
>         at
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:176)
>
>         at
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1847)
>
>         at
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3716)
>
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1890)
>
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1876)
>
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1853)
>
>         at
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3090)
>
>         at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28861)
>
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
>
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
>
>         at
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>
>         at
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>
>         at
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>
>         at java.lang.Thread.run(Thread.java:724)
>
> Caused by: java.io.IOException: Failed to read compressed block at
> 1253175503, onDiskSizeWithoutHeader=66428, preReadHeaderSize=33,
> header.length=33, header bytes:
> DATABLKE\x00\x00&3\x00\x00\xC3\xC9\x00\x00\x00\x01r\xC4-\xDF\x01\x00\x00@
> \x00\x00\x00&P
>
>         at
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1451)
>
>         at
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1314)
>
>         at
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:355)
>
>         at
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
>
>         at
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:494)
>
>         at
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:515)
>
>         at
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:238)
>
>         at
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:153)
>
>         ... 15 more
>
> Caused by: java.io.IOException: Invalid HFile block magic:
> \x00\x00\x00\x00\x00\x00\x00\x00
>
>         at
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
>
>         at
> org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:165)
>
>         at
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:239)
>
>         at
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1448
>
>
>
> Thanks,
>
> Mike
>
> *From:* Ted Yu [mailto:yuzhih...@gmail.com]
> *Sent:* Monday, May 18, 2015 11:55 PM
> *To:* user@hbase.apache.org
> *Cc:* Fang, Mike; Dai, Kevin
> *Subject:* Re: How to set Timeout for get/scan operations without
> impacting others
>
>
>
> hbase.client.operation.timeout is used by HBaseAdmin operations, by
> RegionReplicaFlushHandler, and by various HTable operations (including
> Get).
>
>
>
> hbase.rpc.timeout is used by the RPC layer to define how long an HBase
> client waits for a remote call before timing out. The RPC layer uses pings
> to check connections, but the call will eventually throw a
> TimeoutException.
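>
> If a tighter bound is only needed for one table instance rather than the
> whole client, HTable in 0.98/1.x also exposes a setter for the operation
> timeout - a rough sketch, assuming the 0.98/1.x client API (the table name
> here is hypothetical; newer releases favor Connection.getTable):
>
>   HTable htable = new HTable(conf, "mytable");
>   htable.setOperationTimeout(1000); // milliseconds; covers Get/Put etc., retries included
>
> The per-RPC timeout itself still comes from hbase.rpc.timeout in the
> Configuration.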
>
>
>
> FYI
>
>
>
> On Sun, May 17, 2015 at 11:11 PM, Jianshi Huang <jianshi.hu...@gmail.com>
> wrote:
>
> Hi,
>
> I need to set a tight timeout for get/scan operations, and I think the
> HBase client already supports this.
>
> I found three related keys:
>
> - hbase.client.operation.timeout
> - hbase.rpc.timeout
> - hbase.client.retries.number
>
> What's the difference between hbase.client.operation.timeout and
> hbase.rpc.timeout?
> My understanding is that hbase.rpc.timeout has a larger scope than
> hbase.client.operation.timeout, so setting hbase.client.operation.timeout
> is safer. Am I correct?
>
> Are there any other property keys I can use?
>
> --
> Jianshi Huang
>
> LinkedIn: jianshi
> Twitter: @jshuang
> Github & Blog: http://huangjs.github.com/
>
>
>
