Now I cannot see how having one context per table space would have a
significant negative performance impact.

The 'dirty data' etc. limits are global, not per block device. By having
several contexts with unflushed dirty data the total amount of dirty
data in the kernel increases.

Possibly, but how much? Do you have experimental data to back up the claim that this is really an issue?

We are talking about 32 (context size) * #table spaces * 8 kB buffers = 4 MB of dirty buffers to manage for 16 table spaces; I do not see that as a major issue for the kernel.
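The arithmetic above can be sketched as a quick back-of-envelope check (the context size and the default 8 kB page size are taken from the discussion; the 16-table-space figure is just the example used):

```python
# Worst-case volume of dirty buffers held across all per-table-space
# flush contexts, using the numbers from the discussion above.
CONTEXT_SIZE = 32      # buffers tracked per flush context
TABLE_SPACES = 16      # example number of table spaces
BLOCK_SIZE_KB = 8      # default PostgreSQL page size (8 kB)

total_kb = CONTEXT_SIZE * TABLE_SPACES * BLOCK_SIZE_KB
print(f"{total_kb // 1024} MB")  # 4 MB
```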

More thoughts about your theoretical argument:

To complete the argument: the 4 MB is just a worst-case scenario. In reality, flushing the different contexts would be randomized over time, so the frequency of flushing a context would be exactly the same in both cases (shared or per-table-space context) if the checkpoints are the same size. The difference is that with a shared context each flush potentially targets all table spaces with a few pages each, while with per-table-space contexts each flush targets one table space only.

So my handwaving analysis is that the flow of dirty buffers is the same with both approaches, but with the shared version the buffers are more equally distributed over table spaces, hence reducing sequential write effectiveness, while with the per-table-space version the dirty buffers are grouped more clearly per table space, so it should get better sequential write performance.
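The grouping argument can be illustrated with a toy model (this is not PostgreSQL code; the block layout and run-counting are made up purely to show why grouping writes per table space yields longer sequential runs):

```python
NUM_TABLE_SPACES = 4
CONTEXT_SIZE = 32

def sequential_runs(buffers):
    """Count maximal runs of consecutive block numbers within one
    table space; fewer runs means longer sequential writes."""
    runs = 0
    prev = None
    for ts, blk in sorted(buffers):
        if prev != (ts, blk - 1):
            runs += 1
        prev = (ts, blk)
    return runs

# Shared context: its 32 buffers come from all table spaces, so even
# perfectly ordered writes break into one run per table space.
shared = [(ts, blk) for ts in range(NUM_TABLE_SPACES) for blk in range(8)]

# Per-table-space context: all 32 buffers target one table space.
dedicated = [(0, blk) for blk in range(CONTEXT_SIZE)]

print(sequential_runs(shared))     # 4 runs of 8 blocks each
print(sequential_runs(dedicated))  # 1 run of 32 blocks
```

The total number of pages written is identical in both cases; only the grouping changes, which is the handwaving point above.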


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
