On Wed, Dec 9, 2015 at 10:44 AM, Stas Kelvich <s.kelv...@postgrespro.ru> wrote:
> Hello.
>
> While working with cluster stuff (DTM, tsDTM) we noted that postgres 2PC 
> transactions are approximately two times slower than an ordinary commit on a 
> workload with fast transactions: a few single-row updates and COMMIT or 
> PREPARE/COMMIT. Perf top showed that a lot of time is spent in the kernel on 
> fopen/fclose, so it's worth trying to reduce file operations in the 2PC path.
>

I've tested this through my testing harness which forces the database
to go through endless runs of crash recovery and checks for
consistency, and so far it has survived perfectly.

...

>
> Now results of benchmark are following (dual 6-core xeon server):
>
> Current master without 2PC: ~42 ktps
> Current master with 2PC: ~22 ktps
> Patched master with 2PC: ~36 ktps

Can you give the full command line?  -j, -c, etc.
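(For reference, I'm imagining an invocation in roughly this form; every number below is just a placeholder, not a guess at your actual settings:)

```
pgbench -f 2pc.sql -c 64 -j 8 -T 60 postgres
```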

>
> Benchmark done with following script:
>
> \set naccounts 100000 * :scale
> \setrandom from_aid 1 :naccounts
> \setrandom to_aid 1 :naccounts
> \setrandom delta 1 100
> \set scale :scale+1

Why are you incrementing :scale ?

I very rapidly reach a point where most of the updates are against
tuples that don't exist, and then get integer overflow problems.

> BEGIN;
> UPDATE pgbench_accounts SET abalance = abalance - :delta WHERE aid = :from_aid;
> UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :to_aid;
> PREPARE TRANSACTION ':client_id.:scale';
> COMMIT PREPARED ':client_id.:scale';
>
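For what it's worth, I think the nonexistent-tuple and overflow problems go away if the GID is built from a counter that doesn't feed back into :naccounts. A sketch (the :gid variable is my own invention, initialized with -D gid=0 on the command line):

```
\set naccounts 100000 * :scale
\setrandom from_aid 1 :naccounts
\setrandom to_aid 1 :naccounts
\setrandom delta 1 100
\set gid :gid + 1
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance - :delta WHERE aid = :from_aid;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :to_aid;
PREPARE TRANSACTION ':client_id.:gid';
COMMIT PREPARED ':client_id.:gid';
```

That keeps ':client_id.:gid' unique per prepared transaction without touching :scale, so the random aids stay within the populated range.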

Cheers,

Jeff


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers