On Thu, 5 Jul 2007, Heikki Linnakangas wrote:

> It looks like Tom's idea is not a winner; it leads to more writes than necessary.

What I came away with as the core of Tom's idea is that the cleaning/LRU writer shouldn't ever scan the same section of the buffer cache twice, because anything that resulted in a new dirty buffer will be unwritable by it until the clock sweep passes over that buffer again. I never took that to mean the idea necessarily had to be implemented as "trying to aggressively keep all pages with usage_count=0 clean".
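To make that concrete, here's a toy model of the scanning pattern as I understand it. This is invented illustration code, not anything from the actual bgwriter patches; names like ToyBuffer, sweep_point, and cleaner_round are made up for the example. The point is that the cleaner keeps a persistent sweep position, decrements usage counts as it goes, and only writes dirty buffers that have already decayed to usage_count=0, so a buffer dirtied behind the sweep point simply waits for the next revolution:

#include <stdbool.h>
#include <stdio.h>

#define NBUFFERS 16             /* toy stand-in for shared_buffers */

typedef struct
{
    bool dirty;
    int  usage_count;           /* decremented as the sweep passes */
} ToyBuffer;

static ToyBuffer buffers[NBUFFERS];
static int sweep_point = 0;     /* persists across cleaner rounds */

/*
 * One cleaner round: inspect at most max_scan buffers, write at most
 * max_writes of them, and remember where we stopped so the next round
 * continues from there instead of rescanning the same section.
 */
static int
cleaner_round(int max_scan, int max_writes)
{
    int written = 0;

    for (int i = 0; i < max_scan && written < max_writes; i++)
    {
        ToyBuffer *buf = &buffers[sweep_point];

        if (buf->usage_count > 0)
            buf->usage_count--;         /* clock sweep decrement */
        else if (buf->dirty)
        {
            buf->dirty = false;         /* "write" the buffer out */
            written++;
        }
        sweep_point = (sweep_point + 1) % NBUFFERS;
    }
    return written;
}

int
main(void)
{
    buffers[2].dirty = true;            /* usage_count=0: written at once */
    buffers[5].dirty = true;
    buffers[5].usage_count = 2;         /* skipped until the count decays */

    for (int round = 1; round <= 3; round++)
        printf("round %d wrote %d buffer(s)\n",
               round, cleaner_round(NBUFFERS, 4));
    return 0;
}

Running that, round 1 writes the usage_count=0 buffer immediately, while the usage_count=2 buffer isn't written until the sweep has passed over it enough times to decay its count to zero--which is exactly the "unwritable until the clock sweep passes over it" behavior described above.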

I've been making slow progress on this myself, and the question I've been trying to answer is whether this fundamental idea really matters or not. One clear benefit the alternate implementation should allow is setting a lower value for the interval without being as concerned that you're wasting resources by doing so, which I've found to be a problem with the current implementation--it will consume a lot of CPU rescanning the same section if you lower that value too much.
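Just to put rough numbers on that (these settings are invented for illustration, not measured from anything): with bgwriter_delay dropped to 10ms and each round inspecting, say, 500 buffers starting from the same point, the writer performs 50,000 buffer inspections per second; if that section stayed clean the whole time, all of that work produces zero writes. A scan that instead picks up where it left off spreads those same inspections across the whole cache before returning to any buffer.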

As far as your results go, first off I'm really glad to see someone else comparing checkpoint/backend/bgwriter writes the same way I've been doing, so I finally have someone else's results to compare against. I expect that the optimal approach here is a hybrid one that structures scanning the buffer cache the new way Tom suggests, but limits the number of writes to "just enough". I happen to be fond of the "just enough" computation based on a weighted moving average I wrote before, but there's certainly room for multiple implementations of that part of the code to evolve.
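For anyone who hasn't seen that patch, the general shape of the computation is a decaying average of recent buffer allocations used as the write target. The code below is my own sketch of the idea; the alpha and headroom constants are invented here for illustration rather than taken from the actual patch:

#include <stdio.h>

static double smoothed_allocs = 0.0;    /* decaying average of demand */

/*
 * Fold this round's allocation count into the weighted moving average
 * and return how many buffers the cleaner should aim to write now.
 */
static int
writes_this_round(int recent_allocs)
{
    const double alpha = 0.2;           /* assumed smoothing factor */
    const double headroom = 1.1;        /* clean slightly ahead of demand */

    smoothed_allocs = alpha * recent_allocs +
                      (1.0 - alpha) * smoothed_allocs;
    return (int) (smoothed_allocs * headroom + 0.5);
}

int
main(void)
{
    int demand[] = {10, 10, 40, 10, 10};        /* bursty allocations */

    for (int i = 0; i < 5; i++)
        printf("allocs=%d -> write target=%d\n",
               demand[i], writes_this_round(demand[i]));
    return 0;
}

The attraction of the weighted form is that the target responds to a burst of allocations but decays gradually afterward, so the cleaner doesn't oscillate between writing nothing and writing everything.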

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
