On Sat, Apr 9, 2016 at 11:24 AM, Alexander Korotkov <a.korot...@postgrespro.ru> wrote:
> On Fri, Apr 8, 2016 at 10:19 PM, Alexander Korotkov <a.korot...@postgrespro.ru> wrote:
>
>> On Fri, Apr 8, 2016 at 7:39 PM, Andres Freund <and...@anarazel.de> wrote:
>>
>>> As you can see in
>>> http://archives.postgresql.org/message-id/CA%2BTgmoaeRbN%3DZ4oWENLvgGLeHEvGZ_S_Z3KGrdScyKiSvNt3oA%40mail.gmail.com
>>> I'm planning to apply this sometime this weekend, after running some
>>> tests and going over the patch again.
>>>
>>> Any chance you could have a look over this?
>>
>> I took a look at this. The changes you made look good to me.
>> I also ran tests on a 4x18 Intel server.
>
> On top of current master the results are as follows:
>
> clients    TPS
>   1      12562
>   2      25604
>   4      52661
>   8     103209
>  10     128599
>  20     256872
>  30     365718
>  40     432749
>  50     513528
>  60     684943
>  70     696050
>  80     923350
>  90    1119776
> 100    1208027
> 110    1229429
> 120    1163356
> 130    1107924
> 140    1084344
> 150    1014064
> 160     961730
> 170     980743
> 180     968419
>
> The results are quite discouraging, because previously we had about 1.5M
> TPS at the peak, while now we have only about 1.2M. I found that this is
> not related to the changes you made in the patch; it is caused by 5364b357
> "Increase maximum number of clog buffers". I'm running the same benchmark
> with 5364b357 reverted.

Here are the results with 5364b357 reverted:

clients    TPS
  1      12980
  2      27105
  4      51969
  8     105507
 10     132811
 20     256888
 30     368573
 40     467605
 50     544231
 60     590898
 70     799094
 80     967569
 90    1211662
100    1352427
110    1432561
120    1480324
130    1486624
140    1492092
150    1461681
160    1426733
170    1409081
180    1366199

This is much closer to what we had before.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
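[Editor's note: the following sketch is not part of the original thread. It simply re-derives the peak-throughput comparison from the two TPS tables quoted in the message above, to make the reported ~1.2M vs ~1.5M gap concrete.]

```python
# Compare peak TPS between the two pgbench runs reported above.
# The figures are copied verbatim from the tables in the message.

master = {1: 12562, 2: 25604, 4: 52661, 8: 103209, 10: 128599,
          20: 256872, 30: 365718, 40: 432749, 50: 513528, 60: 684943,
          70: 696050, 80: 923350, 90: 1119776, 100: 1208027,
          110: 1229429, 120: 1163356, 130: 1107924, 140: 1084344,
          150: 1014064, 160: 961730, 170: 980743, 180: 968419}

reverted = {1: 12980, 2: 27105, 4: 51969, 8: 105507, 10: 132811,
            20: 256888, 30: 368573, 40: 467605, 50: 544231, 60: 590898,
            70: 799094, 80: 967569, 90: 1211662, 100: 1352427,
            110: 1432561, 120: 1480324, 130: 1486624, 140: 1492092,
            150: 1461681, 160: 1426733, 170: 1409081, 180: 1366199}

def peak(run):
    """Return (client count, TPS) at the highest observed throughput."""
    clients = max(run, key=run.get)
    return clients, run[clients]

m_clients, m_tps = peak(master)
r_clients, r_tps = peak(reverted)
print(f"master peak:   {m_tps} TPS at {m_clients} clients")
print(f"reverted peak: {r_tps} TPS at {r_clients} clients")
print(f"drop at peak:  {100 * (1 - m_tps / r_tps):.1f}%")
```

The peaks land at 1,229,429 TPS (110 clients) on master versus 1,492,092 TPS (140 clients) with 5364b357 reverted, a drop of roughly 17.6%; note the reverted run also sustains high throughput over a wider client range.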