On Tue, Jan 8, 2013 at 10:20 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Jan 8, 2013 at 4:04 AM, Takeshi Yamamuro
> <yamamuro.take...@lab.ntt.co.jp> wrote:
>> Apart from my patch, what I care is that the current one might
>> be much slow against I/O. For example, when compressing
>> and writing large values, compressing data (20-40MiB/s) might be
>> a dragger against writing data in disks (50-80MiB/s). Moreover,
>> IMHO modern (and very fast) I/O subsystems such as SSD make a
>> bigger issue in this case.
>
> What about just turning compression off?
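To make the quoted concern concrete, here is a back-of-envelope sketch of why a slow compressor can bottleneck the write path. The `serial_throughput` function and the specific ratio figures are illustrative assumptions; only the 20-40MiB/s, 50-80MiB/s, and ~250MB/s numbers come from the thread.

```python
# Back-of-envelope: when compression sits in the write path and runs
# serially with the disk write, effective throughput for 1 MiB of
# source data is 1 / (compress time + write time of compressed output).
# Compression ratios (compressed/original) below are made-up examples.

def serial_throughput(compress_mibs: float, disk_mibs: float, ratio: float) -> float:
    """Effective MiB/s of source data, compressing then writing serially.
    `ratio` = compressed size / original size."""
    t = 1.0 / compress_mibs + ratio / disk_mibs
    return 1.0 / t

# pglz-like figures from the thread: 20-40 MiB/s compress, 50-80 MiB/s disk
slow = serial_throughput(30, 65, 0.5)
# snappy/lz4-class figure from the thread: ~250 MB/s per core
fast = serial_throughput(250, 65, 0.6)

print(f"slow codec: {slow:.1f} MiB/s effective")   # well below raw disk speed
print(f"fast codec: {fast:.1f} MiB/s effective")   # can exceed raw disk speed,
                                                   # since less data is written
```

The point being: with a slow codec the CPU dominates and the disk idles, while a fast codec can actually push effective throughput above the raw disk rate because it writes fewer bytes.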
I've been relying on compression for some big serialized blob fields for some time now, and I bet I'm not alone; lots of people save serialized data to text fields.

So rather than removing it, I'd just change the default to off (if that were the decision). Better still, it might be worth evaluating some of the modern fast compression schemes like snappy or lz4 (250MB/s per core sounds pretty good) and implementing pluggable compression instead. Snappy wasn't designed for nothing; it was most likely built because it was necessary. Cassandra (just to name a system I'm familiar with) started without compression, and compression was later deemed necessary to the point that they invested considerable time in it.

I've always found the fact that pg does compression of toast tables quite forward-thinking, and I'd say the feature has to remain there: extended and modernized, maybe off by default, but there.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
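As a postscript, the "pluggable compression" idea above could be sketched as a small codec registry. Everything here is hypothetical illustration, not PostgreSQL internals: the codec names, the `toast_store`/`toast_fetch` helpers, and the use of zlib levels as stand-ins for pglz-like and snappy/lz4-like codecs are all assumptions.

```python
# Sketch of pluggable compression: a registry mapping codec names to
# (compress, decompress) pairs, so a fast codec (or none) can be
# chosen per value. zlib levels stand in for real codecs here.
import zlib

CODECS = {
    "slow": (lambda b: zlib.compress(b, 9), zlib.decompress),  # pglz-ish stand-in
    "fast": (lambda b: zlib.compress(b, 1), zlib.decompress),  # lz4/snappy stand-in
    "off":  (lambda b: b, lambda b: b),                        # no compression
}

def toast_store(value: bytes, codec: str = "fast") -> tuple:
    """Compress a value with the named codec; tag output with the codec name."""
    compress, _ = CODECS[codec]
    return codec, compress(value)

def toast_fetch(codec: str, stored: bytes) -> bytes:
    """Decompress using the codec recorded at store time."""
    _, decompress = CODECS[codec]
    return decompress(stored)

blob = b"some serialized data " * 1000
name, stored = toast_store(blob, "fast")
assert toast_fetch(name, stored) == blob
print(f"{name}: {len(blob)} -> {len(stored)} bytes")
```

Recording the codec name alongside the stored bytes is what keeps old values readable after the default codec changes, which is the main design constraint for any pluggable scheme.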