> 
> On 11 Jan 2016, at 21:40, Jesper Pedersen <jesper.peder...@redhat.com> wrote:
> 
> I have done a run with the patch and it looks really great.
> 
> Attached is the TPS graph - with a 1pc run too - and the perf profile as a 
> flame graph (28C/56T w/ 256Gb mem, 2 x RAID10 SSD).
> 

Thanks for testing, and especially for the flame graph. That is somewhere in 
between the cases that I have tested. On a commodity server with dual Xeons 
(6C each), 2PC speed is about 80% of 1PC speed, but on the 60C/120T system the 
patch didn't make a significant difference, because the main bottleneck shifts 
from file access to the locks on the array of running global transactions.
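For reference, that shared array of prepared global transactions is the same 
structure that backs the pg_prepared_xacts system view, so one can watch it 
fill up during a run:

    -- list the currently prepared transactions held in the shared array
    SELECT gid, prepared, owner, database FROM pg_prepared_xacts;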

How did you generate the names for your PREPAREs? One funny thing I've spotted 
is that the tx rate increased when I was using an incrementing counter as the 
GID instead of a random string.
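To make the comparison concrete, here is a minimal sketch of one 2PC cycle 
with a counter-derived GID, assuming the standard pgbench schema; the exact 
GID format here is just an illustration:

    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = 42;
    -- GID built as "tx_<client id>_<per-client counter>", not a random string
    PREPARE TRANSACTION 'tx_17_00000123';
    -- later, possibly from another connection:
    COMMIT PREPARED 'tx_17_00000123';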

And can you also share the flame graph for the 1PC workload?

> 
> On 11 Jan 2016, at 21:43, Simon Riggs <si...@2ndquadrant.com> wrote:
> 
> Have you measured lwlocking as a problem?
> 


Yes. The GXACT locks, which weren't even in the perf top 10 on the dual-Xeon 
machine, move to the top when running on the 60-core system. But Jesper's 
flame graph from his 28-core system shows a different picture.

> On 12 Jan 2016, at 01:24, Andres Freund <and...@anarazel.de> wrote:
> 
> Currently recovery of 2pc often already is a bigger bottleneck than the 
> workload on the master, because replay has to execute the fsyncs implied by 
> statefile re-creation serially, whereas on the master they'll usually be 
> executed in parallel.

That's an interesting observation. Simon already pointed me to this problem 
with 2PC replay, but I didn't think it was that slow. I'm now working on that.

Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
