Pavan Deolasee wrote:
Another possible source of I/O is the CLOG reads/writes for checking
transaction status. If we are talking about large tables like accounts in
pgbench or customer/stock in DBT2, the tables are vacuumed much later than
the actual UPDATEs. I don't have any numbers to prove it yet, but my sense
is that the CLOG pages holding the status of many of those transactions
might already have been flushed out of the cache and require an I/O. Since
the default number of CLOG SLRU buffers is 8, there could be severe CLOG
SLRU thrashing during VACUUM, as the transaction ids encountered in a heap
page will be essentially random.

8 clog pages hold the status of 8*8192*4 = 262144 transactions. If the active set of transactions is larger than that, the OS cache will probably still hold the overflowing clog pages. I guess you could end up doing some real I/O on clog on a vacuum of a big table, if you have a high transaction rate and vacuum infrequently...
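
Just to spell out that arithmetic: each transaction status takes 2 bits, so
one byte covers 4 transactions. A minimal sketch (the constant names mirror
the ones in clog.c; the values are just the defaults):

    #include <stdio.h>

    #define BLCKSZ              8192    /* clog pages are ordinary 8K blocks */
    #define CLOG_BITS_PER_XACT  2       /* in progress / committed / aborted / subcommitted */
    #define CLOG_XACTS_PER_BYTE (8 / CLOG_BITS_PER_XACT)        /* = 4 */
    #define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)  /* = 32768 */
    #define NUM_CLOG_BUFFERS    8       /* default SLRU buffer count */

    int
    main(void)
    {
        /* 8 * 8192 * 4 = 262144 transactions covered by the SLRU at once */
        printf("%d\n", NUM_CLOG_BUFFERS * CLOG_XACTS_PER_PAGE);
        return 0;
    }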

Would it help to set the status of the XMIN/XMAX of tuples early enough
that the heap page is still in the buffer cache, but late enough that the
XMIN/XMAX transactions have finished? How about doing it when the bgwriter
is about to write the page to disk? Assuming a heap page lives in the
buffer cache for a few seconds, most of the XMIN/XMAX transactions should
hopefully have completed by then, and the bgwriter could set
XMIN(XMAX)_COMMITTED or XMIN(XMAX)_INVALID for most of the tuples on the
page. This would save us CLOG I/Os later, either during subsequent access
to the tuples and/or during vacuum.

Yeah, we could do that. First I'd like to see some more evidence that clog thrashing is a problem, though.
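
For the record, here's a rough sketch of what such a bgwriter hook might
look like. The routine itself is hypothetical and glosses over locking, WAL
considerations and non-heap pages; the page traversal macros, clog lookups
and infomask flags are the existing ones:

    #include "postgres.h"
    #include "access/htup.h"
    #include "access/transam.h"
    #include "storage/bufpage.h"

    /*
     * Hypothetical: walk a heap page the bgwriter is about to write out
     * and set hint bits for any xmin/xmax that has already finished.
     * Transactions still in progress are left for a later reader.
     */
    static void
    set_hint_bits_on_page(Page page)
    {
        OffsetNumber offnum;
        OffsetNumber maxoff = PageGetMaxOffsetNumber(page);

        for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum++)
        {
            ItemId          itemid = PageGetItemId(page, offnum);
            HeapTupleHeader htup;
            TransactionId   xid;

            if (!ItemIdIsUsed(itemid))
                continue;
            htup = (HeapTupleHeader) PageGetItem(page, itemid);

            /* xmin not yet hinted: look it up in clog */
            if (!(htup->t_infomask & (HEAP_XMIN_COMMITTED | HEAP_XMIN_INVALID)))
            {
                xid = HeapTupleHeaderGetXmin(htup);
                if (TransactionIdDidCommit(xid))
                    htup->t_infomask |= HEAP_XMIN_COMMITTED;
                else if (TransactionIdDidAbort(xid))
                    htup->t_infomask |= HEAP_XMIN_INVALID;
            }

            /* same dance for xmax, skipping multixact xmax */
            if (!(htup->t_infomask & (HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID |
                                      HEAP_XMAX_IS_MULTI)))
            {
                xid = HeapTupleHeaderGetXmax(htup);
                if (TransactionIdIsValid(xid))
                {
                    if (TransactionIdDidCommit(xid))
                        htup->t_infomask |= HEAP_XMAX_COMMITTED;
                    else if (TransactionIdDidAbort(xid))
                        htup->t_infomask |= HEAP_XMAX_INVALID;
                }
            }
        }
    }

Of course, whether this actually saves anything depends on the question
above: the clog pages have to still be warm by the time the bgwriter gets
to the heap page.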

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

