* Greg Smith:
> One idea I was thinking about here was building a little hash table
> inside of the fsync absorb code, tracking how many absorb operations
> have happened for whatever the most popular relation files are. The
> idea is that we might say "use sync_file_range every time <N> calls
> for a relation have come in", just to keep from ever accumulating too
> many writes to any one file before trying to nudge some of it out of
> there. The bat that keeps hitting me in the head here is that right
> now, a single fsync might have a full 1GB of writes to flush out,
> perhaps because it extended a table and then wrote more than that to
> it. And in everything but a SSD or giant SAN cache situation, 1GB of
> I/O is just too much to fsync at a time without the OS choking a
> little on it.
Isn't this pretty much like tuning vm.dirty_bytes? We generally set it
to a fairly low value, and it seems to help smooth out the checkpoints.

-- 
Florian Weimer <fwei...@bfk.de>
BFK edv-consulting GmbH       http://www.bfk.de/
Kriegsstraße 100              tel: +49-721-96201-1
D-76133 Karlsruhe             fax: +49-721-96201-99

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
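[Editor's note: the vm.dirty_bytes knob Florian refers to is a Linux sysctl capping how much dirty page cache can accumulate before writeback is forced. The value below is illustrative only, not a recommendation.]

```shell
# Show the current limit (0 means vm.dirty_ratio is in effect instead)
sysctl vm.dirty_bytes

# Cap dirty page cache at 256 MB so the kernel writes back sooner,
# spreading checkpoint I/O rather than flushing it in one burst
sudo sysctl -w vm.dirty_bytes=268435456
```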