Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "UsingLzoCompression" page has been changed by DougMeil:
http://wiki.apache.org/hadoop/UsingLzoCompression?action=diff&rev1=24&rev2=25

Comment:
Per stack, changing the repo to Todd's version of LZO

  
  This distro doesn't contain all bug fixes (such as the case where an LZO header 
or block-header falls on a read boundary).
  
- Please get latest distro with all fixes from 
http://github.com/kevinweil/hadoop-lzo
+ Please get the latest distro with all fixes from 
https://github.com/toddlipcon/hadoop-lzo
  
  == Why compression? ==
  When compression is enabled, the store file (HFile) applies a compression 
algorithm to blocks as they are written (during flushes and compactions); those 
blocks must therefore be decompressed when read.
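  As a minimal sketch of what this looks like in practice (assuming the native 
LZO libraries above are installed, and using a hypothetical table `mytable` with 
column family `cf`), LZO can be set per column family from the HBase shell:

```
# Create a new table whose 'cf' family compresses HFile blocks with LZO
hbase> create 'mytable', {NAME => 'cf', COMPRESSION => 'LZO'}

# Or switch an existing family to LZO (table must be disabled first);
# existing data is rewritten with LZO as compactions run
hbase> disable 'mytable'
hbase> alter 'mytable', {NAME => 'cf', COMPRESSION => 'LZO'}
hbase> enable 'mytable'
```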
