Thank you for looking at the patch.

On Fri, 4 Nov 2022 at 04:43, Ankit Kumar Pandey <itsanki...@gmail.com> wrote:
> I don't see any performance improvement in tests.
Are you able to share what your test was?

To see a performance improvement, you'll likely need to obtain a large number of locks in the session so that the local lock table becomes bloated, then continue running some fast query and observe that LockReleaseAll has become slower as a result of the bloated hash table.

One way to do that is to run pgbench with -M prepared, executing a SELECT that looks up a single row in a hash partitioned table with a good number of partitions. This becomes slow because on the 6th execution the planner will try a generic plan, which locks every partition and bloats the local lock table. From then on it will use a custom plan which only locks a single leaf partition.

I just tried the following:

$ pgbench -i --partition-method=hash --partitions=1000 postgres

Master:
$ pgbench -T 60 -S -M prepared postgres | grep tps
tps = 21286.172326 (without initial connection time)

Patched:
$ pgbench -T 60 -S -M prepared postgres | grep tps
tps = 23034.063261 (without initial connection time)

If I try again with 10,000 partitions, I get:

Master:
$ pgbench -T 60 -S -M prepared postgres | grep tps
tps = 13044.290903 (without initial connection time)

Patched:
$ pgbench -T 60 -S -M prepared postgres | grep tps
tps = 22683.545686 (without initial connection time)

David
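P.S. For anyone wanting intuition for why one generic-plan execution keeps hurting later queries, here is a toy Python model of the effect (a hypothetical sketch, not PostgreSQL's dynahash or the patch itself): a bucketed hash table that grows on demand but never shrinks, where releasing all locks must walk every bucket, so a single burst of lock acquisitions makes every later release expensive.

```python
class ToyLockTable:
    """Toy model of a per-backend local lock table: the bucket array
    grows on demand but is never shrunk, so a one-off burst of lock
    acquisitions leaves the table permanently bloated."""

    def __init__(self, nbuckets=16):
        self.buckets = [[] for _ in range(nbuckets)]
        self.nentries = 0

    def acquire(self, lock):
        self.buckets[hash(lock) % len(self.buckets)].append(lock)
        self.nentries += 1
        # Double the bucket array whenever the load factor exceeds 1.
        while self.nentries > len(self.buckets):
            entries = [e for b in self.buckets for e in b]
            self.buckets = [[] for _ in range(len(self.buckets) * 2)]
            for e in entries:
                self.buckets[hash(e) % len(self.buckets)].append(e)

    def release_all(self):
        # Like a naive LockReleaseAll: with no separate list of held
        # locks, every bucket must be visited to find the entries.
        scanned = len(self.buckets)
        for b in self.buckets:
            b.clear()
        self.nentries = 0
        return scanned


table = ToyLockTable()

# One generic-plan execution locks all 10,000 partitions...
for i in range(10_000):
    table.acquire(("partition", i))
print(table.release_all())   # scans 16384 buckets

# ...and every later custom-plan query locks a single partition,
# yet releasing it still walks the whole bloated bucket array.
table.acquire(("partition", 0))
print(table.release_all())   # still scans 16384 buckets
```

The bucket array grew from 16 to 16,384 slots during the burst and stays that size, which mirrors why the benchmark above gets slower as the partition count (and hence the bloat from the generic plan) increases.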