Re: How to set Timeout for get/scan operations without impacting others

2015-05-18 Thread Ted Yu
hbase.client.operation.timeout is used by HBaseAdmin operations, by
RegionReplicaFlushHandler
and by various HTable operations (including Get).

hbase.rpc.timeout is used by the RPC layer to decide how long the HBase client
waits for a single remote call before timing out. The client uses pings to check
connections but will eventually throw a TimeoutException.

FYI
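
To avoid impacting other workloads, these keys can be set on a dedicated
client-side Configuration (or a client-side hbase-site.xml) rather than
cluster-wide. A sketch with purely illustrative values, assuming a 0.98-era
client; note that scans are additionally bounded by
hbase.client.scanner.timeout.period:

```xml
<!-- Client-side hbase-site.xml fragment: values are illustrative, not recommendations -->
<property>
  <name>hbase.client.operation.timeout</name>
  <value>5000</value> <!-- total budget per client operation, retries included (ms) -->
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>2000</value> <!-- timeout for one RPC to a region server (ms) -->
</property>
<property>
  <name>hbase.client.retries.number</name>
  <value>2</value> <!-- fewer retries means failures surface sooner -->
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>5000</value> <!-- scanner-specific RPC timeout (ms) -->
</property>
```

A short operation timeout only helps if the retry count is also capped;
otherwise the client can keep retrying well past the intended deadline.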

On Sun, May 17, 2015 at 11:11 PM, Jianshi Huang 
wrote:

> Hi,
>
> I need to set a tight timeout for get/scan operations, and I think the HBase
> client already supports it.
>
> I found three related keys:
>
> - hbase.client.operation.timeout
> - hbase.rpc.timeout
> - hbase.client.retries.number
>
> What's the difference between hbase.client.operation.timeout and
> hbase.rpc.timeout?
> My understanding is that hbase.rpc.timeout has a larger scope than
> hbase.client.operation.timeout, so setting hbase.client.operation.timeout is
> safer. Am I correct?
>
> Are there any other property keys I can use?
>
> --
> Jianshi Huang
>
> LinkedIn: jianshi
> Twitter: @jshuang
> Github & Blog: http://huangjs.github.com/
>



RE: How to set Timeout for get/scan operations without impacting others

2015-05-18 Thread Fang, Mike
Hi Ted,

Thanks for your information.
My application queries HBase, and some of the queries just hang and then throw an 
exception after several minutes (5-8 minutes). As a workaround, I tried setting the 
timeouts to a shorter value so my app would hang for only a few seconds instead of 
minutes. I set both timeouts to 1000 ms (1 s), but it still hangs for several 
minutes. Is this expected?

Appreciate it if you know how I could fix the exception.

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs://xxx/hbase/data/data/default/xxx/af7898973c510425fabb7c814ac8ba04/EOUT_T_SRD/125acceb75d84724a089701c590a4d3d, compression=snappy, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=addrv#34005240#US,_28409,_822|addre/F|rval#null|cust#1158923121468951849|addre#1095283883|1/EOUT_T_SRD:~/143098200/Put, lastKey=addrv#38035AC7#US,_60449,_4684|addre/F|rval#null|cust#1335211720509289817|addre#697997140|1/EOUT_T_SRD:~/143098200/Put, avgKeyLen=122, avgValueLen=187, entries=105492830, length=6880313695, cur=null] to key addrv#34B97AEC#FR,_06110,_41 route des breguieres|addre/F|rval#/EOUT_T_SRD:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/mvcc=0
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:165)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:317)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:176)
    at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1847)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3716)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1890)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1876)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1853)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3090)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28861)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
    at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
    at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
    at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
    at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: Failed to read compressed block at 1253175503, onDiskSizeWithoutHeader=66428, preReadHeaderSize=33, header.length=33, header bytes: DATABLKE\x00\x00&3\x00\x00\xC3\xC9\x00\x00\x00\x01r\xC4-\xDF\x01\x00\x00@\x00\x00\x00&P
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1451)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1314)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:355)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:494)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:515)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:238)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:153)
    ... 15 more
Caused by: java.io.IOException: Invalid HFile block magic: \x00\x00\x00\x00\x00\x00\x00\x00
    at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
    at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:165)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:239)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1448)

Thanks,
Mike

RE: How to set Timeout for get/scan operations without impacting others

2015-05-18 Thread Fang, Mike
Hi Ted,

Thanks.
HBase version: HBase 0.98.0.2.1.2.0-402-hadoop2
Data block encoding: DATA_BLOCK_ENCODING => 'DIFF'

I ran the HFile tool to scan the file, though, and it looks fine:

hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f 
hdfs://xxx/hbase/data/data/default/xxx/af7898973c510425fabb7c814ac8ba04/EOUT_T_SRD/10afed9b44024d02992cfd0409686658
Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF-8
2015-05-18 18:34:33,406 INFO  [main] Configuration.deprecation: fs.default.name 
is deprecated. Instead, use fs.defaultFS
Scanning -> 
hdfs://xxx/hbase/data/data/default/xxx/af7898973c510425fabb7c814ac8ba04/EOUT_T_SRD/10afed9b44024d02992cfd0409686658
2015-05-18 18:34:33,800 INFO  [main] hfile.CacheConfig: Allocating 
LruBlockCache with maximum size 386.7 M
2015-05-18 18:34:34,032 INFO  [main] compress.CodecPool: Got brand-new 
decompressor [.snappy]
Scanned kv count -> 13387493

Any thoughts or suggestions?
Also, if it is a corrupted file, do you have guidance or a link showing how to fix it?

Thanks,
Mike
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Tuesday, May 19, 2015 9:29 AM
To: Fang, Mike
Cc: user@hbase.apache.org; Dai, Kevin; Huang, Jianshi
Subject: Re: How to set Timeout for get/scan operations without impacting others

bq. Caused by: java.io.IOException: Invalid HFile block magic: 
\x00\x00\x00\x00\x00\x00\x00\x00

Looks like you have some corrupted HFile(s) in your cluster - which should be 
fixed first.

Which HBase release are you using?
Do you use data block encoding?

You can use http://hbase.apache.org/book.html#_hfile_tool to do some 
investigation.
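
For deeper inspection than a plain scan, a sketch of the usual checks on a
0.98-era install (the path is a placeholder in the same style as above; verify
the exact flags with --help on your version):

```shell
# Print metadata and the block index in addition to a verbose scan
hbase org.apache.hadoop.hbase.io.hfile.HFile -v -m -b -f \
  hdfs://xxx/hbase/data/data/default/xxx/<region>/<family>/<hfile>

# Sanity-check key ordering while scanning
hbase org.apache.hadoop.hbase.io.hfile.HFile -k -f \
  hdfs://xxx/hbase/data/data/default/xxx/<region>/<family>/<hfile>

# Check the whole cluster for corrupt HFiles (and, if needed, sideline them)
hbase hbck -checkCorruptHFiles
hbase hbck -sidelineCorruptHFiles
```

Sidelining moves the bad file out of the region's directory so reads can
proceed; the data in that file is lost unless it can be recovered by other means.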

Cheers


Re: How to set Timeout for get/scan operations without impacting others

2015-05-27 Thread Ted Yu
Mike:
Please take a look at HBASE-13783

FYI


RE: How to set Timeout for get/scan operations without impacting others

2015-05-27 Thread Fang, Mike
Thanks Ted.

From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Thursday, May 28, 2015 6:12 AM
To: Fang, Mike
Cc: user@hbase.apache.org; Dai, Kevin; Huang, Jianshi
Subject: Re: How to set Timeout for get/scan operations without impacting others

Mike:
Please take a look at HBASE-13783

FYI
