Luke Lonergan wrote:
> Jim,
> 
> On 2/26/06 10:37 AM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> 
> > So the cutover point (on your system with very fast IO) is 4:1
> > compression (is that 20 or 25%?).
> 
> Actually the size of the gzipped binary file on disk was 65MB, compared to
> 177.5MB uncompressed, so the compressed file is about 37% of the original
> size (65 / 177.5 = 0.366), a 2.73:1 ratio.

I doubt our algorithm would give the same compression (though I haven't
really measured it).  The LZ implementation we use is supposed to have
lightning speed at the cost of a not-so-good compression ratio.
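
For anyone who wants to actually measure it, pg_column_size() reports the
stored (post-compression) size of a datum, while octet_length() gives the
uncompressed length.  A quick sketch -- the table name and test data are
made up, and a highly repetitive string will overstate the ratio you'd see
on real data:

  -- text columns default to EXTENDED storage, so compression is applied
  -- automatically once the value is large enough
  CREATE TABLE lz_test (payload text);

  -- insert a large, highly compressible value
  INSERT INTO lz_test VALUES (repeat('the quick brown fox ', 100000));

  -- compare stored (compressed) size against the logical length
  SELECT pg_column_size(payload) AS stored_bytes,
         octet_length(payload)  AS raw_bytes
  FROM lz_test;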

> No, unfortunately not.  O'Reilly's jobs data have 65K rows, so that would
> work.  How do we implement LZW compression on toasted fields?  I've never
> done it!

See src/backend/utils/adt/pg_lzcompress.c (strictly speaking it's an
LZ-family compressor, not LZW).
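
Note that at the SQL level there is nothing to implement: compression is
applied transparently to variable-length columns whose storage mode permits
it.  A sketch of controlling that per column (table and column names are
hypothetical, carried over from the example above):

  -- EXTENDED (the default for text): allow both compression and
  -- out-of-line TOAST storage
  ALTER TABLE lz_test ALTER COLUMN payload SET STORAGE EXTENDED;

  -- EXTERNAL: out-of-line storage allowed, but no compression
  ALTER TABLE lz_test ALTER COLUMN payload SET STORAGE EXTERNAL;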

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
