Hi Sean and Brendan,

On 2/5/2021 6:14 PM, Sean Chittenden wrote:

To be clear, this is a known issue that needs attention: it is not a
benchmarking setup problem.  Network throughput has a similar problem and
needs similar attention.  Cloud is not a fringe server workload.  -sc

I am so glad you're saying that, because I was afraid I'd have to argue
again and again to make people see there is a problem.

But I come here anyway with some numbers:

This is the Amazon Linux system I fell back to in desperation: I launched it,
added ZFS, compiled and set up PostgreSQL 13.2 (whatever was newest), and am now running

pg_dumpall -h db-old |dd bs=10M status=progress |psql -d postgres

We pulled the data tables over at 32 MB/s, which is about the same speed as I
got on FreeBSD. I assume that might be network bound.
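
If somebody wants to rule the network in or out, a raw throughput test between
the two hosts would settle it, e.g. with iperf3 (assuming it is installed on
both ends); I am only going by the 32 MB/s pipe rate here:

iperf3 -s                 # on db-old
iperf3 -c db-old -t 30    # on the new host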

Now it's regenerating the indexes and I see the single EBS gp3 volume on fire:

Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s     wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme1n1           0.00     7.00  781.00  824.00  99508.50  76675.00   219.54     1.98    1.66    1.75    1.57   0.41  66.40
nvme1n1           0.00     3.00 1740.00  456.00 222425.00  33909.50   233.46     4.43    2.42    2.29    2.91   0.46 100.80
nvme1n1           0.00     0.00 1876.00  159.00 239867.00  16580.00   252.04     3.56    2.20    2.09    3.47   0.49 100.00
nvme1n1           0.00     0.00 1883.00  151.00 240728.00  15668.00   252.11     3.49    2.15    2.10    2.83   0.49 100.00
nvme1n1           0.00     0.00 1884.00  152.00 240593.50  15688.00   251.75     3.54    2.19    2.13    3.00   0.49 100.00
nvme1n1           0.00     1.00 1617.00  431.00 206680.00  50047.50   250.71     4.50    2.63    2.49    3.13   0.48  98.40
nvme1n1           0.00     1.00 1631.00  583.00 208331.50  47909.00   231.47     4.75    2.54    2.49    2.66   0.45 100.00
nvme1n1           0.00     0.00 1892.00  148.00 241128.50  15440.00   251.54     3.20    2.01    1.96    2.73   0.49 100.00
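
That is the extended iostat format, i.e. collected with something along the
lines of

iostat -x nvme1n1 1

though the exact columns vary a bit with the sysstat version.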

I don't have the FreeBSD numbers saved any more, but with all the simultaneous
reading and writing activity going on, the system got down to just about 40 MB/s
read and 40 MB/s write, if I was lucky.

There is a heavy read of the base tables, then sorting in temporary space
(read/write), then writes to the WAL and to the index. This combination would
bring FreeBSD to its knees.
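
If anybody wants to try the split-the-streams idea, on the PostgreSQL side it
would be roughly this; the paths are just placeholders for whatever
pools/datasets you have:

# temp space (the sort spill) on its own dataset
psql -d postgres -c "CREATE TABLESPACE tmpspace LOCATION '/pool_tmp/pgtmp'"
psql -d postgres -c "ALTER SYSTEM SET temp_tablespaces = 'tmpspace'"
psql -d postgres -c "SELECT pg_reload_conf()"
# WAL on its own dataset: either initdb --waldir=/pool_wal/pg_wal at setup time,
# or stop the server and replace $PGDATA/pg_wal with a symlink to it

The heavy index and table writes still land on the main data pool, of course.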

Here on Linux I'm not even trying to be smart. For the FreeBSD attempt I had
already steered the different read and write streams to different EBS disks,
each with its own ZFS pool. That helped a little, but not much. On this Linux
box I didn't even bother, and it's going decently fast. There is still some 20%
iowait, which could probably be reduced by doing what I did on FreeBSD
(separate devices), or I might try a single ZFS pool striped across 5 smaller
100 GB drives rather than one 500 GB drive.
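
For the striping idea, a minimal sketch; the device names and the pool/dataset
names are assumptions for this box:

zpool create pgpool /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1
zfs create -o mountpoint=/pgdata -o recordsize=8k pgpool/pgdata   # 8k matches PostgreSQL's page size, optional

A zpool create with several single-disk vdevs stripes across them by default,
and each EBS volume brings its own per-volume throughput limit, which is the
whole point of using five of them.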

This was just to provide some numbers that I have here.

But I absolutely maintain that this has to be a well-known problem, and that it
is not about the finer subtleties of how to make a valid benchmark comparison.

regards,
-Gunther
