io.compression.codecs was the clue in this case. I had set
mapred.compress.map.output, but not that one. Now that I have set it
as well, the error is gone.
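For anyone who hits the same error, here is a minimal sketch of what the fix
looks like when the job is configured from Java (the class name and the codec
list below are just illustrative, not my exact code):

    import org.apache.hadoop.conf.Configuration;

    public class CodecConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Register the codec classes explicitly; io.compression.codecs
            // was the missing piece, not mapred.compress.map.output.
            conf.set("io.compression.codecs",
                    "org.apache.hadoop.io.compress.GzipCodec,"
                    + "org.apache.hadoop.io.compress.DefaultCodec");
            // Map output compression stays off; the codec list is still needed.
            conf.setBoolean("mapred.compress.map.output", false);
            System.out.println(conf.get("io.compression.codecs"));
        }
    }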
Thanks!
Regards,
Bas
On Sun, Apr 15, 2012 at 8:19 PM, Edward Capriolo wrote:
You need three things. 1) Install snappy in a place the system can pick
it up automatically, or add it to your java.library.path.
Then add the full class name of the codec to io.compression.codecs.
hive> set io.compression.codecs;
io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec,org.apache.h...
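If you are not sure the JVM can actually see the native library, a quick
sketch of a check (assumes the library is installed as libsnappy, e.g.
libsnappy.so on Linux; the class name is mine):

    public class SnappyPathCheck {
        public static void main(String[] args) {
            // Directories the JVM searches for native libraries:
            System.out.println(System.getProperty("java.library.path"));
            // Throws UnsatisfiedLinkError if libsnappy is not on that path:
            System.loadLibrary("snappy");
            System.out.println("snappy native library found");
        }
    }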
Hello Jay,
My input is just a CSV file (I created it myself), so I am sure it is
not compressed in any way. Also, the same input works when I use the
standalone example (using the hadoop executable in the bin folder).
When I try to integrate it into a larger Java program, it fails :(
Regards,
Bas
That is odd. Why would it crash when your m/r job did not rely on snappy?
One possibility: maybe your input is snappy compressed, Hadoop is
detecting that compression, and it is trying to use the snappy codec to decompress?
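Roughly, Hadoop picks a codec for an input file by its extension through
CompressionCodecFactory; you could check what it would choose for your input
with something like this (the path is just an example):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecDetect {
        public static void main(String[] args) {
            CompressionCodecFactory factory =
                    new CompressionCodecFactory(new Configuration());
            // The codec is chosen by file extension; null for a plain .csv:
            CompressionCodec codec = factory.getCodec(new Path("input/data.csv"));
            System.out.println(codec == null
                    ? "no codec (plain file)"
                    : codec.getClass().getName());
        }
    }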
Jay Vyas
MMSB
UCHC
On Apr 15, 2012, at 5:08 AM, Bas Hickendorff wrote:
Hello John,
I did restart them (in fact, I did a full reboot of the machine). The
error is still there.
I guess my question is: is it expected that Hadoop needs to do
something with the SnappyCodec when mapred.compress.map.output is set
to false?
Regards,
Bas
On Sun, Apr 15, 2012 at 12:04 PM, John wrote:
Can you restart tasktrackers once and run the job again? It refreshes the
class path.
On Sun, Apr 15, 2012 at 11:58 AM, Bas Hickendorff wrote:
Thanks.
I have installed the native snappy libraries. However, I use the
normal jars you get when downloading Hadoop; I am not compiling
Hadoop myself.
I do not want to use the snappy codec (I don't care about compression
at the moment), but it seems it is needed anyway? I added this to the
Hadoop has integrated snappy via installed native libraries instead of
snappy-java.jar (ref https://issues.apache.org/jira/browse/HADOOP-7206)
- You need to have the snappy system libraries (snappy and snappy-devel)
installed before you compile Hadoop. (RPMs are available on the web,
http://pk
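To see at runtime whether your build actually loaded the native hadoop
library (and with it the snappy binding), a small diagnostic sketch using
NativeCodeLoader (the class name is mine):

    import org.apache.hadoop.util.NativeCodeLoader;

    public class NativeLibCheck {
        public static void main(String[] args) {
            // False usually means libhadoop.so was not found on
            // java.library.path, in which case SnappyCodec cannot work either.
            System.out.println("native hadoop loaded: "
                    + NativeCodeLoader.isNativeCodeLoaded());
        }
    }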
Hello,
When I start a map-reduce job, it starts and then, after a short while,
fails with the error below (SnappyCodec not found).
I am currently starting the job from other Java code (so the Hadoop
executable in the bin directory is not used anymore), but in principle
this seems to work (in the admin