On Tue, Feb 2, 2016 at 9:19 PM, Andres Freund <and...@anarazel.de> wrote:
>
> On 2016-01-28 16:53:08 +0530, Dilip Kumar wrote:
> > test_script:
> > ------------
> > ./psql -d postgres -c "truncate table data"
> > ./psql -d postgres -c "checkpoint"
> > ./pgbench -f copy_script -T 120 -c$ -j$ postgres
> >
> > Shared Buffer: 48GB
> > Table: Unlogged Table
> > ./pgbench -c$ -j$ -f -M Prepared postgres
> >
> > Clients    Base    Patch
> > 1          178     180
> > 2          337     338
> > 4          265     601
> > 8          167     805
>
> Could you also measure how this behaves for an INSERT instead of a COPY
> workload?
I think such a test will be useful.

> I'm doubtful that anything that does the victim buffer search while
> holding the extension lock will actually scale in a wide range of
> scenarios.

I think the cost of the victim buffer search could become visible when the buffers are dirty, because a dirty buffer has to be written out first. It is especially a concern with the patch as written, because after acquiring the extension lock it extends the relation again without first checking whether it can get a page with free space from the FSM. It seems to me that we should re-check the availability of a page, because while one backend is waiting on the extension lock, another backend might have added pages. For such a re-check we might want something similar to the LWLockAcquireOrWait() semantics used during WAL writing.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com