Hi all,

I'm trying to do bulk loading into a table with snappy compression enabled, and
I'm getting an exception complaining about a missing native snappy library,
namely:

12/01/09 11:16:53 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" java.io.IOException: java.lang.RuntimeException: native snappy library not available
        at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:89)

First, to be clear: everything in this chain works fine if I don't use
compression, and using the hbase shell to 'put' into the compression-enabled
table also works fine.
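
(For concreteness, the table was created with something along the lines of

  create 'mytable', {NAME => 'd', COMPRESSION => 'SNAPPY'}

and the shell check is just a plain put, e.g.

  put 'mytable', 'row1', 'd:col1', 'some value'

where the table, family and column names here are made-up placeholders.)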

Here's what I'm doing:
- use importtsv to generate the hfiles. I'm passing -Dhfile.compression=snappy
on the command line, as per a mailing list email from Lars G that I found while
googling (a rough sketch of both commands is after this list). The import runs
without errors, but I don't know how to test whether the hfiles are actually
compressed.
- use completebulkload to move the hfiles into the cluster. This is where I get
the exception. I'm running the command from my OS X workstation, targeting a
remote HDFS and HBase (both running on the same cluster).
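
For reference, the two commands look roughly like this (the table name, column
family, column spec, and paths are simplified placeholders, and the jar name
depends on the exact CDH build):

  hadoop jar hbase-VERSION.jar importtsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,d:col1 \
    -Dimporttsv.bulk.output=/tmp/hfile-output \
    -Dhfile.compression=snappy \
    mytable /tmp/tsv-input

  hadoop jar hbase-VERSION.jar completebulkload /tmp/hfile-output mytable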

My environment:
- HBase/Hadoop are CDH3u2, fully distributed
- workstation is OS X 10.6

It seems really weird that compression (native compression even more so) should
be required by a command that is, in theory, just moving files from one place
on a remote filesystem to another. Any light shed would be appreciated.
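
For what it's worth, I'm guessing the CompressionTest class from the stack
trace can be run on its own to check whether native snappy is visible from the
workstation, something like the line below, though I'm not sure of the exact
arguments (the path is just a scratch file):

  hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy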

Thanks,
Oliver
--
Oliver Meyn
Software Developer
Global Biodiversity Information Facility (GBIF)
+45 35 32 15 12
http://www.gbif.org
