On 1/18/12 4:18 PM, Jim Nasby wrote:
> What about doing away with all the arbitrary numbers completely, and just stating data rate limits for hit/miss/dirty?
Since many workloads will have a mix of all three, it still seems like there's some need to weight these individually, even if each got its own rate. If someone sets read=8MB/s and write=4MB/s (the current effective defaults), I doubt they would be happy to see 12MB/s of combined I/O happen.
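For reference, here's a quick standalone sketch (not server code) of where those 8MB/s and 4MB/s figures come from, using the stock autovacuum cost settings; it treats 1MB as 1000KB, which is where the round numbers come from:

    /*
     * Derive the effective read/write rates from the stock
     * cost-based vacuum delay settings.
     */
    #include <stdio.h>

    int main(void)
    {
        double cost_limit      = 200;  /* vacuum_cost_limit */
        double cost_delay_ms   = 20;   /* autovacuum_vacuum_cost_delay */
        double cost_page_miss  = 10;   /* vacuum_cost_page_miss */
        double cost_page_dirty = 20;   /* vacuum_cost_page_dirty */
        double block_kb        = 8;    /* 8KB pages */

        /* How many cost_limit-sized rounds fit into each second */
        double rounds_per_sec = 1000.0 / cost_delay_ms;

        double read_mb_s  = (cost_limit / cost_page_miss)
                            * rounds_per_sec * block_kb / 1000.0;
        double write_mb_s = (cost_limit / cost_page_dirty)
                            * rounds_per_sec * block_kb / 1000.0;

        printf("read:  %.0f MB/s\n", read_mb_s);   /* 8 */
        printf("write: %.0f MB/s\n", write_mb_s);  /* 4 */
        return 0;
    }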
> BTW, this is a case where it would be damn handy to know if the miss was really a miss or not... in the case where we're already rate-limiting vacuum, could we afford the cost of gettimeofday() to see if a miss actually had to come from disk?
We certainly might on a system where timing information is reasonably cheap, and measuring that exact code path will be easy if the timing test contrib module submitted to this CF gets committed. I could see using that to re-classify some misses as hits when the read returns fast enough.
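The mechanics would look something like this minimal userspace sketch (not a patch against the server; FAST_READ_THRESHOLD_US is an invented placeholder for whatever cutoff ends up separating cached reads from real disk I/O):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/time.h>

    /* Hypothetical cutoff: reads faster than this "came from cache" */
    #define FAST_READ_THRESHOLD_US 100

    static long
    elapsed_us(struct timeval *start, struct timeval *end)
    {
        return (end->tv_sec - start->tv_sec) * 1000000L
             + (end->tv_usec - start->tv_usec);
    }

    /*
     * Time the read and return the cost to charge: the page-hit
     * cost for fast reads, the page-miss cost for ones that
     * apparently touched the disk.
     */
    static int
    charge_for_read(int fd, void *buf, size_t len,
                    int hit_cost, int miss_cost)
    {
        struct timeval before, after;

        gettimeofday(&before, NULL);
        if (read(fd, buf, len) < 0)
            return miss_cost;   /* error path: charge the worst case */
        gettimeofday(&after, NULL);

        return (elapsed_us(&before, &after) < FAST_READ_THRESHOLD_US)
            ? hit_cost : miss_cost;
    }

    int main(int argc, char **argv)
    {
        char buf[8192];
        int  fd = open(argc > 1 ? argv[1] : "/etc/hosts", O_RDONLY);

        if (fd < 0)
            return 1;
        /* 1 and 10 mirror vacuum_cost_page_hit/vacuum_cost_page_miss */
        printf("charged cost: %d\n",
               charge_for_read(fd, buf, sizeof(buf), 1, 10));
        close(fd);
        return 0;
    }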
There's no obvious way to draw that line, though. The "fast=hit" vs. "slow=miss" transition happens at a very different place on SSDs than on regular disks, as the simplest example. I don't see any way to wander down this path that doesn't end up introducing multiple new GUCs, which is the opposite of what I'd hoped to do: at worst keep the same number of settings, while reducing how many of them people are likely to need to touch.
--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support   www.2ndQuadrant.com