On Mon, Oct 10, 2016 at 2:17 AM, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
> after testing each combination (every ~9 hours). Inspired by Robert's wait
> event post a few days ago, I've added wait event sampling so that we can
> perform similar analysis. (Neat idea!)
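The sampling below follows the same pattern as Robert's script: poll pg_stat_activity for the duration of each pgbench run, then count how often each (wait_event_type, wait_event) pair was observed. A rough sketch of that loop (the file name, one-second interval, and database name here are illustrative choices, not exactly what the script uses):

```shell
# Sample wait events once a second for a 30-minute run, then aggregate.
# Requires a 9.6+ server, where pg_stat_activity has wait_event_type/wait_event.
for i in $(seq 1 1800); do
    psql -t -A -F ' | ' -d postgres -c \
        "SELECT wait_event_type, wait_event FROM pg_stat_activity" \
        >> samples.txt
    sleep 1
done
# Produce listings like the ones below: count, type, event, most frequent first.
sort samples.txt | uniq -c | sort -rn
```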
I have run a wait event test for head vs. the group lock patch, using a script similar to the one Robert described in the thread below:
https://www.postgresql.org/message-id/ca+tgmoav9q5v5zgt3+wp_1tqjt6tgyxrwrdctrrwimc+zy7...@mail.gmail.com

Test details and results:
--------------------------------
Machine: POWER, 4-socket (machine details are attached in a file).
30-minute pgbench runs with max_connections = 200, shared_buffers = 8GB,
maintenance_work_mem = 4GB, synchronous_commit = off,
checkpoint_timeout = 15min, checkpoint_completion_target = 0.9,
log_line_prefix = '%t [%p] ', max_wal_size = 40GB, log_checkpoints = on.

Test1: unlogged table, 192 clients
---------------------------------------------
On head:
tps = 44898.862257 (including connections establishing)
tps = 44899.761934 (excluding connections establishing)

 262092 LWLockNamed   | CLogControlLock
 224396               |
 114510 Lock          | transactionid
  42908 Client        | ClientRead
  20610 Lock          | tuple
  13700 LWLockTranche | buffer_content
   3637
   2562 LWLockNamed   | XidGenLock
   2359 LWLockNamed   | ProcArrayLock
   1037 Lock          | extend
    948 LWLockTranche | lock_manager
     46 LWLockTranche | wal_insert
     12 BufferPin     | BufferPin
      4 LWLockTranche | buffer_mapping

With patch:
tps = 77846.622956 (including connections establishing)
tps = 77848.234046 (excluding connections establishing)

 101832 Lock          | transactionid
  91358 Client        | ClientRead
  16691 LWLockNamed   | XidGenLock
  12467 Lock          | tuple
   6007 LWLockNamed   | CLogControlLock
   3640
   3531 LWLockNamed   | ProcArrayLock
   3390 LWLockTranche | lock_manager
   2683 Lock          | extend
   1112 LWLockTranche | buffer_content
     72 LWLockTranche | wal_insert
      8 LWLockTranche | buffer_mapping
      2 LWLockTranche | proc
      2 BufferPin     | BufferPin

Test2: unlogged table, 96 clients
------------------------------------------
On head:
tps = 58632.065563 (including connections establishing)
tps = 58632.767384 (excluding connections establishing)

  77039 LWLockNamed   | CLogControlLock
  39712 Client        | ClientRead
  18358 Lock          | transactionid
   4238 LWLockNamed   | XidGenLock
   3638
   3518 LWLockTranche | buffer_content
   2717 LWLockNamed   | ProcArrayLock
   1410 Lock          | tuple
    792 Lock          | extend
    182 LWLockTranche | lock_manager
     30 LWLockTranche | wal_insert
      3 LWLockTranche | buffer_mapping
      1 BufferPin     | BufferPin

With patch:
tps = 75204.166640 (including connections establishing)
tps = 75204.922105 (excluding connections establishing)

 261917               |
  53407 Client        | ClientRead
  14994 Lock          | transactionid
   5258 LWLockNamed   | XidGenLock
   3660
   3604 LWLockNamed   | ProcArrayLock
   2096 LWLockNamed   | CLogControlLock
   1102 Lock          | tuple
    823 Lock          | extend
    481 LWLockTranche | buffer_content
    372 LWLockTranche | lock_manager
    192 Lock          | relation
     65 LWLockTranche | wal_insert
      6 LWLockTranche | buffer_mapping
      1 LWLockTranche | proc

Test3: unlogged table, 64 clients
------------------------------------------
On head:
tps = 66231.203018 (including connections establishing)
tps = 66231.664990 (excluding connections establishing)

  43446 Client        | ClientRead
   6992 LWLockNamed   | CLogControlLock
   4685 Lock          | transactionid
   3650
   3381 LWLockNamed   | ProcArrayLock
    810 LWLockNamed   | XidGenLock
    734 Lock          | extend
    439 LWLockTranche | buffer_content
    247 Lock          | tuple
    136 LWLockTranche | lock_manager
     64 Lock          | relation
     24 LWLockTranche | wal_insert
      2 LWLockTranche | buffer_mapping
With patch:
tps = 67294.042602 (including connections establishing)
tps = 67294.532650 (excluding connections establishing)

  28186 Client        | ClientRead
   3655
   1172 LWLockNamed   | ProcArrayLock
    619 Lock          | transactionid
    289 LWLockNamed   | CLogControlLock
    237 Lock          | extend
     81 LWLockTranche | buffer_content
     48 LWLockNamed   | XidGenLock
     28 LWLockTranche | lock_manager
     23 Lock          | tuple
      6 LWLockTranche | wal_insert

Test4: unlogged table, 32 clients
------------------------------------------
On head:
tps = 52320.190549 (including connections establishing)
tps = 52320.442694 (excluding connections establishing)

  28564 Client        | ClientRead
   3663
   1320 LWLockNamed   | ProcArrayLock
    742 Lock          | transactionid
    534 LWLockNamed   | CLogControlLock
    255 Lock          | extend
    108 LWLockNamed   | XidGenLock
     81 LWLockTranche | buffer_content
     44 LWLockTranche | lock_manager
     29 Lock          | tuple
      6 LWLockTranche | wal_insert
      1 LWLockTranche | buffer_mapping

With patch:
tps = 47505.582315 (including connections establishing)
tps = 47505.773351 (excluding connections establishing)

  28186 Client        | ClientRead
   3655
   1172 LWLockNamed   | ProcArrayLock
    619 Lock          | transactionid
    289 LWLockNamed   | CLogControlLock
    237 Lock          | extend
     81 LWLockTranche | buffer_content
     48 LWLockNamed   | XidGenLock
     28 LWLockTranche | lock_manager
     23 Lock          | tuple
      6 LWLockTranche | wal_insert

I think that from 96 clients onwards, contention on CLogControlLock is clearly visible, and it is completely removed by the group lock patch. At the lower client counts (32 and 64), contention on CLogControlLock is not significant, so we cannot see any gain from the group lock patch (though some reduction in CLogControlLock contention is visible at 64 clients).

Note: I have taken only one set of readings, and at 32 clients my reading shows some regression with the group lock patch, which may be run-to-run variance (I have never seen this regression before; I can confirm again with multiple runs).

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
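As a quick way to read the listings above: the share of all samples attributable to a given wait event can be computed with a short awk filter. This is just an illustrative helper, not part of the test script; samples.txt is a hypothetical file holding one "count type | event" line per row, exactly as shown in the listings:

```shell
# Hypothetical helper: given aggregated "count type | event" lines,
# print what percentage of all samples matched a given wait event.
awk -v ev=CLogControlLock '
    { total += $1 }                  # first field is the sample count
    index($0, ev) { hits += $1 }     # lines mentioning the event
    END { printf "%.1f%%\n", 100 * hits / total }
' samples.txt
```

For example, in the Test1 head run, CLogControlLock alone accounts for roughly 38% of all sampled waits, which is what makes the contention so visible there.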
[@power2 ~]$ uname -mrs
Linux 3.10.0-229.14.1.ael7b.ppc64le ppc64le
[@power2 ~]$ lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                192
On-line CPU(s) list:   0-191
Thread(s) per core:    8
Core(s) per socket:    1
Socket(s):             24
NUMA node(s):          4
Model:                 IBM,8286-42A
L1d cache:             64K
L1i cache:             32K
L2 cache:              512K
L3 cache:              8192K
NUMA node0 CPU(s):     0-47
NUMA node1 CPU(s):     48-95
NUMA node2 CPU(s):     96-143
NUMA node3 CPU(s):     144-191
[@power2 ~]$ nproc --all
192
-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers