[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum

2014-10-03 Thread Yuliang Jin (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158077#comment-14158077 ]

Yuliang Jin commented on HBASE-11625:
-------------------------------------

Thanks for your reply. We are currently using

{noformat}
java version "1.6.0_37"
Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)
{noformat}

and

{noformat}
Hadoop 2.0.0-cdh4.3.0
HBase 0.94.6-cdh4.3.0
{noformat}

> Reading datablock throws "Invalid HFile block magic" and can not switch to 
> hdfs checksum 
> --------------------------------------------------------------------------
>
> Key: HBASE-11625
> URL: https://issues.apache.org/jira/browse/HBASE-11625
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
> Affects Versions: 0.94.21, 0.98.4, 0.98.5
> Reporter: qian wang
> Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz
>
>
> When HBase-level checksums are enabled, readBlockDataInternal() in
> HFileBlock.java can read a corrupt block, but it only falls back to the HDFS
> checksum input stream once validateBlockChecksum() fails. If the data block's
> header itself is corrupted, the earlier b = new HFileBlock() throws "Invalid
> HFile block magic" before that fallback is reached, and the RPC call fails.
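
For context, the sequencing the description refers to looks roughly like this; a simplified sketch of the read path in HFileBlock.FSReaderV2, not the verbatim 0.94 source, with signatures abbreviated:

{noformat}
// Simplified sketch of the read path described above (not the verbatim
// 0.94 source; signatures abbreviated).
HFileBlock readBlockDataInternal(..., boolean useHBaseChecksum) throws IOException {
  byte[] onDiskBlock = readFromStream(...);   // may hand back corrupt bytes

  // The header is parsed first. If the 8-byte block magic was overwritten,
  // the HFileBlock constructor throws "Invalid HFile block magic" right here...
  HFileBlock b = new HFileBlock(ByteBuffer.wrap(onDiskBlock), ...);

  // ...so this fallback, which retries the read with HDFS-level checksums
  // when the HBase-level checksum does not match, is never reached for a
  // block whose header is the corrupt part.
  if (useHBaseChecksum && !validateBlockChecksum(b, onDiskBlock, ...)) {
    return readBlockDataInternal(..., /* useHBaseChecksum = */ false);
  }
  return b;
}
{noformat}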



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum

2014-08-24 Thread Yuliang Jin (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108823#comment-14108823 ]

Yuliang Jin commented on HBASE-11625:
-------------------------------------

Correction: the above exception was not thrown at the 'checkAndPut()' call, but 
we have seen a similar stack trace before, when the same job used 'checkAndPut()' 
to write data to another table.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum

2014-08-24 Thread Yuliang Jin (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108789#comment-14108789 ]

Yuliang Jin commented on HBASE-11625:
-------------------------------------

We encountered this issue this morning in a job that makes frequent use of the 
'checkAndPut()' method to write data directly to HBase (version 0.94.6-cdh4.3.0); 
the stack trace says:

{noformat}
Mon Aug 25 05:04:06 CST 2014, org.apache.hadoop.hbase.client.HTable$3@30518bfc, java.io.IOException: java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for reader reader=hdfs://dn:8020/hbase/.../.../.../.../, compression=snappy, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=.../.../139476240/Put, lastKey=.../.../1381640043000/Put, avgKeyLen=83, avgValueLen=14, entries=323120857, length=6150040087, cur=.../.../140887440/Maximum/vlen=0/ts=0] to key .../.../LATEST_TIMESTAMP/Maximum/vlen=0/ts=0
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:172)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:349)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:355)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:543)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:411)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:143)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3867)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3939)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3810)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3791)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3834)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4760)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4733)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2072)
    at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1428)
Caused by: java.io.IOException: Failed to read compressed block at 5908855614, onDiskSizeWithoutHeader=2995, preReadHeaderSize=0, header.length=3028, header bytes: \x00\x9EY\x03ld017766ac516715d3925db24b473
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1871)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1703)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:338)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:480)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:530)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:236)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:161)
    ... 20 more
Caused by: java.io.IOException: Invalid HFile block magic: \x00\x9EY\x03ld01
    at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:153)
    at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:164)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:256)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1867)
    ... 27 more
{noformat}
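
The innermost exception is the magic check at the start of every HFile block: the first eight bytes of a healthy data block are the ASCII magic DATABLK*, and BlockType.parse() throws when the bytes match no known block type, as with the \x00\x9EY\x03ld01 garbage above. Conceptually it does something like this (a simplified sketch, field access approximated, not the verbatim source):

{noformat}
// Simplified sketch of the check behind "Invalid HFile block magic"
// (field/method names approximated). A healthy data block starts with
// the ASCII magic "DATABLK*"; the header bytes in the trace above match
// no known block type, so the header itself is corrupt.
static BlockType parse(byte[] buf, int offset, int length) throws IOException {
  for (BlockType blockType : values()) {
    if (Bytes.compareTo(blockType.magic, 0, MAGIC_LENGTH,
                        buf, offset, MAGIC_LENGTH) == 0) {
      return blockType;
    }
  }
  throw new IOException("Invalid HFile block magic: "
      + Bytes.toStringBinary(buf, offset, MAGIC_LENGTH));
}
{noformat}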

A major compaction on the problematic region fixed the problem.
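
For anyone hitting the same symptom: the compaction rewrites the region's store files, which is presumably why it cleared the bad block. It can be kicked off from the HBase shell or the client API; a minimal sketch against the 0.94 client, with the table name as a placeholder:

{noformat}
// Minimal sketch: request a major compaction via the 0.94 client API.
// "my_table" is a placeholder; this is the programmatic equivalent of
// `major_compact 'my_table'` in the HBase shell. The request is
// asynchronous; the call returns before the compaction finishes.
Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
try {
  admin.majorCompact("my_table");
} finally {
  admin.close();
}
{noformat}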
