On Mon, Feb 13, 2017 at 3:34 PM, Bernd Helmle <maili...@oopsware.de> wrote:

> Am Samstag, den 11.02.2017, 00:28 +0100 schrieb Tomas Vondra:
> > Comparing averages of tps, measured on 5 runs (each 5 minutes long),
> > the
> > difference between master and patched master is usually within 2%,
> > which
> > is pretty much within noise.
> >
> > I'm attaching spreadsheets with summary of the results, so that we
> > have
> > it in the archives. As usual, the scripts and much more detailed
> > results
> > are available here:
>
> I've done some benchmarking of this patch against the E850/ppc64el
> Ubuntu LPAR we currently have access to and got the attached results.
> pg_prewarm, as recommended by Alexander, was used; the tests ran 300
> secs at scale 1000, each with a warm-up run before. The SELECT-only pgbench
> was run twice each, the write tests only once.
>
> Looks like the influence of this patch isn't that big, at least on this
> machine.
>
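For the archives, the setup described above amounts to roughly the following (a sketch only; the database name, job count, and exact flags are assumptions, while scale, duration, and client count come from the mail):

```shell
# Initialize pgbench tables at scale factor 1000 (database name assumed).
pgbench -i -s 1000 bench

# Prewarm the pgbench relations into shared buffers before measuring.
psql -d bench -c "CREATE EXTENSION IF NOT EXISTS pg_prewarm"
psql -d bench -c "SELECT pg_prewarm(oid::regclass) FROM pg_class WHERE relname LIKE 'pgbench%'"

# SELECT-only run: 300 s, 80 clients (run twice in the reported tests).
pgbench -S -c 80 -j 8 -T 300 bench

# Write (default TPC-B-like) run: 300 s, 80 clients (run once).
pgbench -c 80 -j 8 -T 300 bench
```

These commands need a running cluster, so they are a template rather than something to paste verbatim.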

Thank you for testing.

Yes, the influence seems to be low.  But it's nevertheless important to ensure
that there is no regression here.
Despite pg_prewarm'ing and running the tests for 300 s, there is still
significant variation.
For instance, with a client count of 80:
 * pgxact-result-2.txt – 474704 tps
 * pgxact-results.txt – 574844 tps
Could some background processes have influenced the tests?  Or could another
virtual machine on the same host be interfering?
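To put a number on it, those two runs differ by about 21%, which is hard to attribute to noise after prewarming (a quick sanity check, not part of the original results):

```python
# Run-to-run spread for the 80-client SELECT-only results quoted above.
tps_runs = [474704, 574844]
spread = (max(tps_runs) - min(tps_runs)) / min(tps_runs)
print(f"relative spread: {spread:.1%}")  # ~21.1%
```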
Also, I wonder why I can't see this variation on the graphs.
Another issue with the graphs is that we can't see the details of read and
write TPS variation on the same scale, because the write TPS values are too
low.  I think you should plot the write benchmark on a separate graph.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
