Hi,

I remember having seen performance degradation reports on the zfsonlinux issue tracker, for example:

https://github.com/openzfs/zfs/issues/8836

Before blaming ZFS for this, you should first make sure it is not related to driver issues or other kernel/disk related problems, so I would recommend comparing raw disk write performance first, to see whether there is already a difference at that level. Furthermore, I would recommend writing more data when comparing write performance at such a high throughput rate, since 1 GB completes in well under a second on these machines.
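For example, something along these lines (a rough sketch, not a definitive benchmark: /dev/sdX is only a placeholder for one of the pool's member disks, and writing to the raw device destroys whatever is on it, so only run that part against a disk you can re-add to the pool afterwards; count=10000 writes 10 GB instead of 1 GB):

# raw sequential write speed of a single member SSD, bypassing the page cache (destroys data on sdX!)
dd if=/dev/zero of=/dev/sdX bs=1M count=10000 oflag=direct

# same comparison on the pool, with 10x the data and a sync before dd reports the rate
dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=10000 conv=fdatasync

If the raw-device numbers already differ between the PVE 5.3 and PVE 6.2 installs, the regression is below ZFS (kernel, driver or firmware) rather than in ZFS itself.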
Regards
roland

On 06.10.20 at 13:12, Maxime AUGER wrote:

Hello,

We notice a significant difference in ZFS performance between Proxmox 5.3-5 and Proxmox 6.2.

Last year we tested Proxmox 5.3 on HPE DL360 Gen9 hardware. This hardware is fitted with 8 SSD disks (2x 200 GB dedicated to the OS as mdadm RAID0, and 6x 1 TB as a ZFS pool for VM storage). On the ZFS pool we measured a peak value of 2.8 GB/s (write). Now, on Proxmox 6, we measure a peak value of 1.5 GB/s (write).

One server, ITXPVE03, was initially running Proxmox 5.3-5: peak performance 2.8 GB/s. It has recently been reinstalled with Proxmox 6.2 (from the ISO): peak performance 1.5 GB/s.

To confirm this observation we extended the checks to the 4 servers (identical hardware and low-level software, BIOS and firmware versions). The measurements confirm the statement.

All tests are done with the servers idle, zero active workload, and all VMs/containers shut down. The ZFS configurations are identical: no compression, no deduplication.

root@CLIPVE03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.851028 s, 1.2 GB/s

root@ITXPV03(PVE6.2):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.722055 s, 1.5 GB/s

root@CLIB05PVE02(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.397212 s, 2.6 GB/s

root@CLIB05PVE01(PVE5.3-5):~# dd if=/dev/zero of=/zfsraid10/iotest bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.39394 s, 2.7 GB/s

At the ZFS level we can notice a difference in the version of zfsutils-linux: 0.7.x on PVE 5.3-5 (0.7.12), 0.8.x on PVE 6.2 (same measurement on 0.8.3 and 0.8.4).

Has anyone experienced this problem?

Maxime AUGER
Network Team Leader
AURANEXT
_______________________________________________ pve-user mailing list [email protected] https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
