On Thu, Apr 27, 2023 at 11:24 AM Laurenz Albe <laurenz.a...@cybertec.at>
wrote:

> On Thu, 2023-04-27 at 14:48 +0530, Tushar Takate wrote:
> > Does PostgreSQL support in-transit compression for a client connection?
>
> No, not any more.
>

On a related but different subject: as someone who must store zlib-compressed
(from ZIP files) and sometimes LZ4-compressed `bytea` values, I often find it
a shame that I have to decompress them and send them over the wire
uncompressed, only to have the PostgreSQL backend recompress them when they
are TOASTed. That's a waste of CPU and IO bandwidth...
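To make the round trip concrete, here's a minimal C sketch of what I have to
do today. The table name, column name and the helper itself are made up for
illustration, and I'm assuming the value is a raw LZ4 block already in memory:

    /* Sketch only: decompress client-side, then send the uncompressed bytes
     * as a binary bytea parameter over libpq. */
    #include <stdlib.h>
    #include <lz4.h>
    #include <libpq-fe.h>

    static int store_blob(PGconn *conn, const char *lz4_src,
                          int compressed_len, int raw_len)
    {
        /* Step one: decompress on the client, because the backend cannot
         * accept the LZ4 bytes as-is. */
        char *raw = malloc(raw_len);
        if (raw == NULL ||
            LZ4_decompress_safe(lz4_src, raw, compressed_len, raw_len) != raw_len)
        {
            free(raw);
            return -1;
        }

        /* Step two: ship the *uncompressed* bytes as a binary parameter;
         * the backend will recompress them again when the row is TOASTed. */
        const char *values[1]  = { raw };
        int         lengths[1] = { raw_len };
        int         formats[1] = { 1 };   /* binary */
        PGresult *res = PQexecParams(conn,
                                     "INSERT INTO blobs (data) VALUES ($1)",
                                     1, NULL, values, lengths, formats, 0);
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
        PQclear(res);
        free(raw);
        return ok;
    }

So the same payload gets decompressed once, sent fat, and compressed again,
which is exactly the waste I mean.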

I wish there were a way to tell the backend, via libpq and the v3 (or later)
protocol: here's an XYZ-compressed value, with its uncompressed size and a
checksum (depending on the format used / expected); skip the decompression,
the re-compression and the fatter bandwidth, and store it as-is (in the usual
~2 KB TOAST chunks).

I know this is unlikely to happen, for several reasons. Still, I thought
I'd throw it out there.

PS: BTW, in my testing, on-the-wire compression is rarely beneficial IMHO. I
measured the break-even bandwidth in the (industry-specific) client-server
protocol I worked on, which optionally supports compression, and those
bandwidths were quite low. The CPU cost of zlib (~4x compression) and even of
the faster LZ4 (~2x compression), plus decompression at the other end, is
high enough that you need quite low bandwidth to recoup it on IO. FWIW.
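For whoever wants to plug in their own numbers, here's the back-of-the-envelope
model I have in mind, as a tiny C program. The throughput figures and the
simple serial cost model are placeholders I'm making up here, not measurements
from that protocol:

    #include <stdio.h>

    int main(void)
    {
        double ratio    = 4.0;    /* assumed zlib compression ratio            */
        double c_comp   = 40e6;   /* assumed compression speed, bytes/second   */
        double c_decomp = 250e6;  /* assumed decompression speed, bytes/second */

        /* Sending S raw bytes costs S/B seconds at bandwidth B.  Sending them
         * compressed costs S/c_comp + (S/ratio)/B + S/c_decomp.  Compression
         * only wins when B is below the break-even bandwidth solved for here. */
        double breakeven = (1.0 - 1.0 / ratio) / (1.0 / c_comp + 1.0 / c_decomp);
        printf("break-even bandwidth ~ %.0f MB/s (~%.0f Mbit/s)\n",
               breakeven / 1e6, breakeven * 8.0 / 1e6);
        return 0;
    }

With those placeholder numbers the break-even lands around 26 MB/s, well below
a typical LAN link, which is in line with what I saw.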
