Hi

On Wed, Jan 15, 2020 at 2:15 PM Imai Yoshikazu <yoshikazu_i...@live.jp>
wrote:

> On 2020/01/13 4:11, Pavel Stehule wrote:
> > The following review has been posted through the commitfest application:
> > make installcheck-world:  tested, passed
> > Implements feature:       tested, passed
> > Spec compliant:           not tested
> > Documentation:            tested, passed
> >
> > I like this patch, because I used similar functionality some years ago
> very successfully. The implementation is simple, and the results should be
> valid given the method used.
>
> Thanks for your review!
>
> > The potential problem is the performance impact. Very early tests show
> an impact of about 3% in the worst case, but I'll try to repeat these tests.
>
> Yes, the performance impact is the main concern. I want to know how it
> affects performance in various test cases and environments.
>
> > There are some ending whitespaces and useless tabs.
> >
> > The new status of this patch is: Waiting on Author
> I attach v4 patches which remove the trailing whitespace and the useless
> tabs.
>

Today I ran 120 five-minute pgbench tests to measure the impact of this
patch. The results are attached.

The test script:

PSQL="/usr/local/pgsql/bin/psql"
PGBENCH="/usr/local/pgsql/bin/pgbench"
export PGDATABASE=postgres

echo "******* START *******" > ~/result.txt

# one database per scale factor
for i in 1 5 10 50 100
do
  echo "scale factor $i" >> ~/result.txt

  $PSQL -c "create database bench$i"
  $PGBENCH -i -s "$i" "bench$i"

  # read/write (TPC-B-like) runs, 5 minutes per client count
  for c in 1 5 10 50
  do
    $PGBENCH -c "$c" -T 300 "bench$i" >> ~/result.txt
  done

  # remove the bloat from the R/W runs before the read-only runs
  $PSQL -c "vacuum full" "bench$i"
  $PSQL -c "vacuum analyze" "bench$i"

  # read-only (SELECT-only) runs, 5 minutes per client count
  for c in 1 5 10 50
  do
    $PGBENCH -S -c "$c" -T 300 "bench$i" >> ~/result.txt
  done

  $PSQL -c "drop database bench$i"
done
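
For completeness, the script can be run like this (a sketch; bench.sh is
just a hypothetical file name, and the server is assumed to be already
running):

chmod +x bench.sh
./bench.sh
tail -f ~/result.txt    # watch results arrive as each 5-minute run finishes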

Tested on a machine with 4 CPUs and 8GB RAM; configuration: shared_buffers
= 2GB, work_mem = 20MB.
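
In postgresql.conf terms, the non-default settings were (a minimal sketch;
I'm assuming all other settings were left at their defaults):

shared_buffers = 2GB    # 1/4 of the 8GB RAM
work_mem = 20MB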

The results are interesting. When I ran pgbench in R/W mode, performance
changed by only about +/- 1%, regardless of whether time tracking was
active (tested on Linux). In this mode the new code is not on the critical
path.

The read-only tests are more interesting; some larger differences are
visible there.

For scale 5 with 50 clients, enabling time tracking increased performance
by about 12% (I got the same result for scale 10 with 50 clients). In the
other direction, the patched build decreased performance by about 10% for
scale 50 with 50 clients (both with and without time tracking) and for
scale 100 with 5 clients.

So it looks like for scales above 5 with 50 clients the read-only results
are not very stable (I repeated those tests), and the overhead there is
about 10%, from 60K tps down to 55K tps. Maybe I was hitting a hardware
limit (the machine has only 4 CPUs).

Thanks to Tomas Vondra and 2ndQuadrant for the testing hardware.

Regards

Pavel
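
For reference, the wait-event summary below can be produced by a query
along these lines (a sketch; I'm assuming the pg_stat_waitaccum view from
the patch, with the column names visible in the output):

SELECT wait_event_type, wait_event, calls, times
  FROM pg_stat_waitaccum
 ORDER BY times DESC;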

 wait_event_type |      wait_event       |   calls    |    times
-----------------+-----------------------+------------+--------------
 Client          | ClientRead            | 1489681408 | 221616362961
 Lock            | transactionid         |  103113369 |  71918794185
 LWLock          | WALWriteLock          |  104781468 |  20865855903
 Lock            | tuple                 |   21323744 |  15800875242
 IO              | WALSync               |   50862170 |   8666988491
 LWLock          | lock_manager          |   18415423 |    575308266
 IO              | WALWrite              |   51482764 |    205775892
 LWLock          | buffer_content        |   15385387 |    168446128
 LWLock          | wal_insert            |    1502092 |     90019731
 IPC             | ProcArrayGroupUpdate  |     178238 |     46527364
 LWLock          | ProcArrayLock         |     587356 |     13298246
 IO              | DataFileExtend        |    2715557 |     11615216
 IPC             | ClogGroupUpdate       |      54319 |     10622013
 IO              | DataFileRead          |    5805298 |      9596545
 IO              | SLRURead              |    9518930 |      7166217
 LWLock          | CLogControlLock       |     746759 |      6783602

> --
> Yoshikazu Imai
>

Attachment: pgbench.ods
Description: application/vnd.oasis.opendocument.spreadsheet
