On Mon, 19 Jan 2026 21:15:17 +0100
Matthias Petermann <[email protected]> wrote:
> Unfortunately, once LVM enters the picture, I have repeatedly run into
> situations where triggering an FSS snapshot causes severe stalls or
> complete lockups. ZFS zvols, while very attractive feature-wise, showed
> significantly lower performance in my setup compared to raw partitions,
> and also compared poorly to simpler approaches such as CCD or even
> vnd-backed storage.
Maybe it's a NetBSD-specific ZFS issue, or the overhead of ZFS software
RAID?
As a point of reference, I'm using hardware RAID-5 with FreeBSD-15, and
multithreaded zvol read performance is very close to that of the raw
disk. I do need multiple concurrent threads to saturate the 4 SSDs.
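If you want to try something comparable without sysperf_disk, a roughly
equivalent multithreaded sequential read with fio might look like this
(untested sketch; adjust the device path to your setup):

# fio --name=zvolread --filename=/dev/zvol/zroot/zvol_test --rw=read \
      --bs=64k --numjobs=12 --direct=1 --size=10g --group_reporting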
# mfiutil show config
/dev/mfi0 Configuration: 1 arrays, 1 volumes, 0 spares
    array 0 of 4 drives:
        drive 16 ( 466G) ONLINE <Samsung SSD 870 2B6Q serial=XXX> SATA
        drive 14 ( 466G) ONLINE <Samsung SSD 870 2B6Q serial=XXX> SATA
        drive 17 ( 466G) ONLINE <Samsung SSD 870 2B6Q serial=XXX> SATA
        drive 15 ( 466G) ONLINE <Samsung SSD 870 2B6Q serial=XXX> SATA
    volume mfid0 (1396G) RAID-5 32K OPTIMAL spans:
        array 0
# geom disk list
Geom name: mfid0
Providers:
1. Name: mfid0
   Mediasize: 1498675150848 (1.4T)
   Sectorsize: 512
   Mode: r1w1e2
   descr: (null)
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255
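Note: rotationrate shows as unknown because the controller hides the
member SSDs from the OS. For GEOM's view of the volume geometry,
including stripe size and offset, diskinfo should do:

# diskinfo -v mfid0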
# zpool status
  pool: zroot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME       STATE     READ WRITE CKSUM
        zroot      ONLINE       0     0     0
          mfid0p2  ONLINE       0     0     0
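In case alignment matters for your comparison: the pool's ashift can be
checked against the 32K stripe size. The pool property may read 0
(auto), so the per-vdev value from zdb is more telling:

# zpool get ashift zroot
# zdb -C zroot | grep ashift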
As you can see below, sequential read performance is very close: about
1.01 GiB/s from the zvol vs. 1.15 GiB/s from the raw RAID-5 volume,
i.e. the zvol is only about 12% slower.
ZVol test:
# zfs create -V 10G -o volblocksize=32K -o compression=off zroot/zvol_test
# sysperf_disk mode=rd size=10GiB threads=12 : /dev/zvol/zroot/zvol_test
...
Aggregate metrics:
10.00 GiB, 163840.00 Block(s) @ 64.00 KiB/Block
9920.44 msec, 1.01 GiB/sec, 16515.40 Blocks/sec, 728.17 usec/Block
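For contrast, a single-threaded sequential read (plain dd; numbers
omitted here) does not saturate the array, which is why the tests use
12 threads:

# dd if=/dev/zvol/zroot/zvol_test of=/dev/null bs=64k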
RAID-5 virtual drive test:
# sysperf_disk mode=rd size=10GiB direct threads=12 : /dev/mfid0
...
Aggregate metrics:
10.00 GiB, 163840.00 Block(s) @ 64.00 KiB/Block
8711.16 msec, 1.15 GiB/sec, 18808.05 Blocks/sec, 637.41 usec/Block
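If you want to see where throughput levels off, the same invocation can
be swept over thread counts (sketch, reusing the sysperf_disk syntax
from above):

# for n in 1 2 4 8 12; do
>   sysperf_disk mode=rd size=10GiB direct threads=$n : /dev/mfid0
> done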