Hi,

On 08/29/2017 05:04 AM, Mithun Cy wrote:
Test Setting:
=========
Server configuration:
./postgres -c shared_buffers=8GB -N 300 -c min_wal_size=15GB -c
max_wal_size=20GB -c checkpoint_timeout=900 -c
maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 -c
wal_buffers=256MB &

pgbench configuration:
scale_factor = 300
./pgbench -c $threads -j $threads -T $time_for_reading -M prepared -S  postgres
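The initialization step implied by that scale factor would be roughly the
following (the database name postgres is assumed, matching the run command
above):

# create and populate the pgbench tables at scale factor 300
./pgbench -i -s 300 postgres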

The machine has 64 cores. With this patch I can see the server start to
show an improvement beyond 64 clients; I have tested up to 256 clients
and see a performance improvement of nearly 39% at best [1]. This is the
best case for the patch, where the once-computed SnapshotData is reused
repeatedly.

The worst case seems to be small, quick write transactions that
frequently invalidate the cached SnapshotData before it is reused by a
subsequent GetSnapshotData call. So far I have tested the simple-update
case (pgbench -M prepared -N) on the same machine with the above server
configuration, and I do not see much change in the TPS numbers.

All TPS numbers are the median of 3 runs.

Clients    TPS (with patch 05)    TPS (base)       %Diff
1                   752.461117      755.186777     -0.3%
64                32171.296537    31202.153576     +3.1%
128               41059.660769    40061.929658     +2.49%

I will do some profiling to find out why the caching overhead is not
costing us any performance in this case.


I have done a run with this patch on a 2S/28C/56T/256GB machine with 2 x RAID10 SSD.

For both -M prepared and -M prepared -S I'm not seeing any improvement (1 to 375 clients); the results stay within +/- 1% of the baseline.

Although the -M prepared -S case should improve, I'm not sure that the extra overhead in the -M prepared case is worth the added code complexity.

Best regards,
 Jesper

