You have a problem with your environment:

Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z

Fix your native Hadoop libraries, or don't use Snappy.

This is not related to encryption.
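
For what it's worth, you can confirm the diagnosis from the command line. A
minimal sketch using Hadoop's checknative tool and HBase's CompressionTest
utility (the /tmp path is just an illustration):

    # Does this Hadoop install load its native libraries, and does it support Snappy?
    hadoop checknative -a

    # Can HBase itself write and read back a Snappy-compressed file?
    ./hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-probe snappy

If checknative reports snappy: false, point HBase at native libraries built
for your platform, for example in hbase-env.sh (the path below is an
assumption; use wherever your libhadoop.so and libsnappy.so actually live):

    export HBASE_LIBRARY_PATH=/usr/local/hadoop/lib/native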

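If you'd rather not install native Snappy at all, you can instead switch the
family to uncompressed storage and let a major compaction rewrite the data. A
rough HBase shell sketch (on 0.98 you may need the disable/enable around the
schema change):

    disable 't3'
    alter 't3', {NAME => 'cf1', COMPRESSION => 'NONE'}
    enable 't3'
    # rewrites the existing HFiles without Snappy
    major_compact 't3'

Keep in mind the file you were inspecting stays Snappy-compressed until the
compaction rewrites it, so the HFile tool will keep failing on that old file
until then.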

On Fri, Aug 1, 2014 at 7:12 AM, Shankar hiremath <
shankar.hirem...@huawei.com> wrote:

> When I read an HFile with the './hbase
> org.apache.hadoop.hbase.io.hfile.HFile' tool (an HFile compressed with
> SNAPPY and encrypted with AES), I get the error below: "Problem reading
> HFile Trailer".
>
> Is there a problem with how I am using the command below, or is this a bug?
>
>
> ------------------------------------------------------------------------------------------
>
> Version details: Hadoop 2.4.1, HBase 0.98.3
>
> Configuration details (HFile and WAL encryption with AES enabled, as below):
> hfile.format.version=3
>
> hbase.crypto.keyprovider=org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider
> hbase.crypto.keyprovider.parameters=
> jceks:///opt/shankar1/kdc_keytab/hbase.jks?password=shankar@234
>
> We created a table with SNAPPY compression:
> >       create 't3', {NAME => 'cf1', COMPRESSION => 'SNAPPY'}
> >       put 't3','r1','cf1:a','1000'
> >       flush 't3'
>
> shankar1@host1:~/DataSight/hbase/bin> ./hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://host1:65110/hbase/data/default/t3/337d2996bed579340a702feaa3d3f165/cf1/5817635667d7457989b6d0b0be25dbc4
> 2014-08-01 19:18:28,368 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
> 2014-08-01 19:18:28,504 INFO  [main] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
> 2014-08-01 19:18:28,506 INFO  [main] util.ChecksumType: Checksum can use org.apache.hadoop.util.PureJavaCrc32C
> 2014-08-01 19:18:28,739 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2014-08-01 19:18:29,082 INFO  [main] hdfs.DFSClient: Set dfs.client.block.write.replace-datanode-on-failure.replication to 0
> 2014-08-01 19:18:29,406 INFO  [main] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
> Scanning -> hdfs://host1:65110/hbase/data/default/t3/337d2996bed579340a702feaa3d3f165/cf1/5817635667d7457989b6d0b0be25dbc4
> 2014-08-01 19:18:29,409 INFO  [main] hdfs.DFSClient: Set dfs.client.block.write.replace-datanode-on-failure.replication to 0
> INFO: Watching file:/opt/shankar1/DataSight/hbase/conf/log4j.properties for changes with interval : 60000
> 2014-08-01 19:18:29,779 ERROR [main] hfile.HFilePrettyPrinter: Error reading hdfs://host1:65110/hbase/data/default/t3/337d2996bed579340a702feaa3d3f165/cf1/5817635667d7457989b6d0b0be25dbc4
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file hdfs://host1:65110/hbase/data/default/t3/337d2996bed579340a702feaa3d3f165/cf1/5817635667d7457989b6d0b0be25dbc4
>         at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:552)
>         at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:595)
>         at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:217)
>         at org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:196)
>         at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:873)
> Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
>         at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
>         at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
>         at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:190)
>         at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:176)
>         at org.apache.hadoop.hbase.io.compress.Compression$Algorithm.getDecompressor(Compression.java:336)
>         at org.apache.hadoop.hbase.io.compress.Compression.decompress(Compression.java:433)
>         at org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultDecodingContext.prepareDecoding(HFileBlockDefaultDecodingContext.java:91)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1522)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1314)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1151)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1159)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:146)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:72)
>         at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:542)
>         ... 4 more
> Scanned kv count -> 0
> shankar1@host1:~/DataSight/hbase/bin>
>
> Thanks
> -Shankar
>


-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
