Hello
Can someone explain what the method rawReadInt() in the Hadoop class
org.apache.hadoop.io.compress.BlockDecompressorStream is trying to do here?
The BlockDecompressorStream class is used for block-based decompression
(e.g. Snappy). Each chunk has a header indicating its compressed length,
and rawReadInt() is what reads that 4-byte header.
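If it helps, here is roughly what that method does, written as a standalone
sketch rather than the actual Hadoop source: it pulls four bytes off the
underlying stream and assembles them into one big-endian int, the same way
DataInputStream.readInt() does, throwing EOFException if the stream ends early.

import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class RawReadIntSketch {

    // Read the 4-byte, big-endian length header that precedes each
    // compressed chunk, like DataInputStream.readInt() does.
    static int rawReadInt(InputStream in) throws IOException {
        int b1 = in.read();
        int b2 = in.read();
        int b3 = in.read();
        int b4 = in.read();
        // read() returns -1 at end of stream, so any -1 makes the OR negative
        if ((b1 | b2 | b3 | b4) < 0) {
            throw new EOFException();
        }
        return (b1 << 24) | (b2 << 16) | (b3 << 8) | b4;
    }

    public static void main(String[] args) throws IOException {
        byte[] header = {0x00, 0x00, 0x10, 0x00}; // big-endian encoding of 4096
        System.out.println(rawReadInt(new ByteArrayInputStream(header))); // prints 4096
    }
}

The int it returns is then used as the number of bytes to read for the data
that follows.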
... security package, then only provide the private key to the runtime to
decrypt it.
Yong
From: davidpark...@yahoo.com
To: user@hadoop.apache.org
Subject: RE: Question related to Decompressor interface
Date: Sun, 10 Feb 2013 09:36:40 +0700
I can’t answer your question about the Decompressor interface
To: user@hadoop.apache.org
Subject: RE: Question related to Decompressor interface
Hi, Dave:
Thanks for your reply. I am not sure how the EncryptedWritable would work;
can you share more ideas about it?
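Here is my rough guess at what such an EncryptedWritable could look like.
This is purely a hypothetical sketch: the hard-coded AES key is only a
placeholder for real key management, and Cipher.getInstance("AES") defaults
to ECB mode, which a real implementation should replace with something stronger.

import org.apache.hadoop.io.Writable;

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.security.GeneralSecurityException;

public class EncryptedWritable implements Writable {

    // Demo key only; a real implementation would fetch the key at runtime.
    private static final SecretKeySpec DEMO_KEY =
            new SecretKeySpec("0123456789abcdef".getBytes(), "AES");

    private byte[] plaintext = new byte[0];

    public void set(byte[] data) { plaintext = data; }
    public byte[] get() { return plaintext; }

    @Override
    public void write(DataOutput out) throws IOException {
        try {
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, DEMO_KEY);
            byte[] ciphertext = cipher.doFinal(plaintext);
            out.writeInt(ciphertext.length); // length header, then the encrypted bytes
            out.write(ciphertext);
        } catch (GeneralSecurityException e) {
            throw new IOException("encryption failed", e);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        try {
            byte[] ciphertext = new byte[in.readInt()];
            in.readFully(ciphertext);
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.DECRYPT_MODE, DEMO_KEY);
            plaintext = cipher.doFinal(ciphertext);
        } catch (GeneralSecurityException e) {
            throw new IOException("decryption failed", e);
        }
    }
}

That way only ciphertext ever hits disk in write(), and readFields() needs
the key available at runtime to recover the plaintext.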
For example, say I have a plain text file as my raw source file. Now I
Subject: Question related to Decompressor interface
Hi,
Currently I am researching options for encrypting data in MapReduce, as we
plan to use the Amazon EMR or EC2 services for our data.
I am thinking that the compression codec is a good place to integrate the
encryption logic, and I found the Decompressor interface.
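To make the idea concrete, here is a minimal sketch of the stream wrapping I
have in mind, using only standard javax.crypto classes. The fixed key and IV
are demo placeholders, and this is not a full CompressionCodec implementation;
a real codec would do this wrapping inside createOutputStream() and
createInputStream().

import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class CryptoStreamSketch {
    public static void main(String[] args) throws Exception {
        // Demo key and IV only; real key management would supply these.
        SecretKeySpec key = new SecretKeySpec("0123456789abcdef".getBytes(), "AES");
        IvParameterSpec iv = new IvParameterSpec(new byte[16]);

        // Write path: wrap the raw output stream, the way a compression
        // codec wraps it with a compressor on the way out.
        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, key, iv);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        OutputStream out = new CipherOutputStream(sink, enc);
        out.write("some record data".getBytes("UTF-8"));
        out.close();

        // Read path: wrap the raw input stream the same way on the way in.
        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key, iv);
        InputStream in = new CipherInputStream(
                new ByteArrayInputStream(sink.toByteArray()), dec);
        ByteArrayOutputStream plain = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            plain.write(b);
        }
        in.close();
        System.out.println(plain.toString("UTF-8")); // prints: some record data
    }
}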