Not to nitpick, but dedup isn't really compression in one significant
respect.  e.g. you can have 3 copies of the same data chunk and it is only
stored as one (effectively a compression ratio of 3:1), even if the data in
question is incompressible (due to already being compressed.)
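A tiny sketch of that point (hypothetical chunking, Python) -- dedup counts
unique chunks by hash, so three identical chunks store as one even when the
chunk itself is random-looking, incompressible data:

```python
import hashlib
import os

# One 128 KiB chunk of random bytes stands in for already-compressed data.
chunk = os.urandom(128 * 1024)
chunks = [chunk, chunk, chunk]  # three logical copies

# Dedup stores one physical copy per unique chunk hash,
# regardless of whether the chunk is compressible.
unique = {hashlib.sha256(c).digest() for c in chunks}
print(f"dedup ratio: {len(chunks)}:{len(unique)}")  # prints "dedup ratio: 3:1"
```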

-----Original Message-----
From: Edward Ned Harvey [mailto:[email protected]] 
Sent: Wednesday, November 02, 2011 9:36 AM
To: [email protected]
Subject: Re: [OpenIndiana-discuss] Fwd: poor zfs compression ratio

> From: Krishna PMV [mailto:[email protected]]
> 
> Resending as my previous email didn't get through the list.  Can someone
> please advise how we can improve the compression ratio here? Thanks!

Nothing you can do, except store the mail in compressed files like xz or
whatever.

All general-purpose compression algorithms rely on one basic principle:
Reducing or remapping repeated data patterns.  If you're getting weak
compression ratios, it means your data is already compressed, or not
massively repeated.  Either way, you're already doing the best you can do.
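You can see this for yourself with any general-purpose compressor.  A quick
sketch using Python's zlib (random bytes standing in for already-compressed
data): repetitive input shrinks dramatically, random-looking input barely at
all:

```python
import os
import zlib

text = b"repeated pattern " * 10000      # highly repetitive data
random_like = os.urandom(len(text))      # stands in for already-compressed data

for label, data in [("repetitive", text), ("random-like", random_like)]:
    # Compression ratio = original size / compressed size.
    ratio = len(data) / len(zlib.compress(data))
    print(f"{label}: {ratio:.2f}:1")
```

The repetitive input compresses by orders of magnitude; the random-like input
comes out essentially the same size (or slightly larger, due to header
overhead), which is exactly what you see when storing pre-compressed files on
a compressed zfs dataset.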

Note:  Dedup is in fact a form of compression, but it's applied pool-wide
across blocks rather than within each individual block, so it deserves a
different name.  We call it dedup instead of compression.


_______________________________________________
OpenIndiana-discuss mailing list
[email protected]
http://openindiana.org/mailman/listinfo/openindiana-discuss

