Hi Alex. Are you comparing against an earlier Lustre version? And when you say "local zfs", do you mean the same pool imported but with the OST unmounted, or a different pool with multihost=off?
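If these pools have multihost=on, the idle write rate would be consistent with MMP: with multihost enabled, OpenZFS issues roughly one small uberblock write per leaf vdev per zfs_multihost_interval (default 1000 ms), which on a 40-disk pool works out to ~40 writes/s even with no Lustre activity. A quick way to check (pool names taken from your iostat output; these are plain OpenZFS commands, nothing Lustre-specific):

# zpool get multihost ost00 ost01
# cat /sys/module/zfs/parameters/zfs_multihost_interval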
________________________________________
From: lustre-discuss <[email protected]> on behalf of Alex Vodeyko via lustre-discuss <[email protected]>
Sent: Tuesday, 7 October 2025 02:45
To: [email protected]
Subject: [lustre-discuss] Constant small writes on ZFS backend even while idle

Hi,

I'm in the process of testing lustre-2.15.7_next on rocky-9.6, kernel 5.14.0-570.17.1.el9_6.x86_64, zfs-2.3.4.

84-disk shelf, multipath, 2x OSTs per OSS.
Each OST is on a zpool with a 4x(8+2) raidz2 = 40 hdds configuration (btw - also tested on draid, same problem).
atime=off (also tested with relatime=on)
recordsize=1M, compression=off

During benchmarks I've found that even on a completely idle system, zpool iostat shows 40-160 4k (ashift=12) writes (1-4 per hdd) every second:

# zpool iostat 1
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
..
ost00        482G   145T      0    158      0   634K
ost01        401G   145T      0     40      0   161K
----------  -----  -----  -----  -----  -----  -----
ost00        482G   145T      0     40      0   161K
ost01        401G   145T      0    157      0   629K
----------  -----  -----  -----  -----  -----  -----
ost00        482G   145T      0     40      0   161K
ost01        401G   145T      0     40      0   161K
----------  -----  -----  -----  -----  -----  -----
ost00        482G   145T      0     38      0   153K
ost01        401G   145T      0     39      0   157K

Could you please advise whether I can turn something off (probably in Lustre, because local zfs does not show this behaviour) to avoid these writes? They hurt read performance and cause very high CPU load average and iowait numbers, especially during multiple concurrent reads from a single OST.

Many thanks,
Alex
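P.S. Two read-only observations that might help narrow down what the idle writes are (pool name ost00 taken from the output above; the kstat path is the standard OpenZFS-on-Linux location):

# zpool iostat -r ost00 1
(per-leaf-vdev request-size histogram - shows whether the small writes are sync or async and whether they get aggregated)

# cat /proc/spl/kstat/zfs/ost00/txgs
(txg history - if transaction groups keep syncing with nonzero ndirty while the OST is idle, something above ZFS is still dirtying data; if ndirty stays at 0 while the writes continue, they are likely coming from ZFS itself, e.g. MMP)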
