Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> An obvious problem is that, if the abort rate is significantly
>> different from zero, and especially if the aborts are randomly
>> mixed in with commits rather than clustered together in small
>> portions of the XID space, the CLOG rollup data would become
>> useless.
>
> Yeah, I'm afraid that with N large enough to provide useful
> acceleration, the cases where you'd actually get a win would be
> too thin on the ground to make it worth the trouble.

Just to get a real-life data point, I checked the pg_clog directory
for the Milwaukee County Circuit Courts. They have about 300 OLTP
users, plus replication feeds to the central servers. Looking at the
now-present files, there are 19,104 blocks of 256 bytes (which should
support an N of 1024, per Robert's example). Of those, 12,644 (just
over 66%) contain 256 bytes of hex 55. "Last modified" dates on the
files go back to the 4th of October, so this represents roughly three
months' worth of real-life transactions.

-Kevin
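For anyone who wants to reproduce the check above on their own cluster: CLOG stores two status bits per transaction, and a committed transaction is recorded as 0x01, so a byte of 0x55 (binary 01010101) means four consecutive commits, and a 256-byte block of hex 55 means 1024 consecutive commits with no aborts. A rough sketch of the scan (the directory path is an assumption; on PostgreSQL 10+ the directory is pg_xact rather than pg_clog):

```python
# Sketch: count 256-byte CLOG blocks that are entirely 0x55, i.e.
# blocks in which all 1024 transactions committed. Point clog_dir at
# your cluster's pg_clog (or pg_xact) directory.
import os

BLOCK = 256                      # bytes per rollup block in Robert's example
ALL_COMMITTED = b"\x55" * BLOCK  # 0b01010101: "committed" in every 2-bit slot

def count_clog_blocks(clog_dir):
    """Return (total_blocks, all_committed_blocks) for the files in clog_dir."""
    total = all_committed = 0
    for name in sorted(os.listdir(clog_dir)):
        with open(os.path.join(clog_dir, name), "rb") as f:
            while True:
                block = f.read(BLOCK)
                if len(block) < BLOCK:
                    break
                total += 1
                if block == ALL_COMMITTED:
                    all_committed += 1
    return total, all_committed
```

On the data set described above this kind of scan would report 19,104 total blocks, of which 12,644 compare equal to the all-committed pattern.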
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers