Hi,

On 1/19/26 22:57, Niels Dettenbach wrote:
Hi Matthias,

On 19.01.26 at 21:15, Matthias Petermann <[email protected]> wrote:

- Reliable, consistent online backups from running guests
- Good read/write performance, ideally close to raw disk access
- Flexibility in terms of free space allocation (on demand)

After a lot of testing and experimenting some months ago, we switched from LVM on the dom0 (a Linux dom0) to a dom0 with a ZFS pool (a single-device zpool on a hardware array in our case) providing zvols as block devices for the domUs.

Even with lz4 compression it is much faster than LVM for us, while we save more than 50% of disk space. We run e.g. internet servers and databases on it.
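
A minimal sketch of such a zvol-backed domU disk, assuming purely illustrative pool, volume, and device names (the actual layout and device paths may differ):

  # Create a compressed 20 GB zvol for a guest
  zfs create -o compression=lz4 -V 20G tank/xen/guest1

  # Reference it from the domU configuration as a phy: backend; the exact
  # device path depends on the dom0 OS (e.g. /dev/zvol/rdsk/... on NetBSD)
  # disk = [ 'phy:/dev/zvol/rdsk/tank/xen/guest1,xvda,w' ]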


Just out of curiosity, what are the hardware specifications of this machine? I’ve realized that I forgot to include two additional pieces of context in my original post. First (as mentioned in my reply to Hauke), I’m using WAPBL in the guest filesystems. Second, this setup runs on a low-spec Intel NUC7CJYH (8 GB RAM, dual-core Celeron J4005).

I’m wondering whether the relatively weak CPU might be a bottleneck. I seem to recall that during my load tests - especially with ZVOLs - I observed a significant number of TLB shootdowns, whereas this did not occur with raw/CCD devices. Could these observations be related?

Snapshots are done by a script (I call it xen snapper) which I can provide to you as open source, with e.g. multiple daily and weekly snapshots (the number of days/weeks can be configured). Backups are done simply by ZFS replication via "syncoid" (from sanoid) to another internal zpool plus an external one over WAN. Only changed blocks are replicated daily, saving a lot of time and traffic even compared to earlier incremental backup solutions (see e.g. xen-backup, which I formerly wrote for LVM/tar). Once a month I do a third syncoid backup to a MacBook with an external Thunderbolt SSD, which is a ZFS pool as well (Crucial X10 6TB).
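
A minimal sketch of such a snapshot-and-replicate cycle, with placeholder dataset, pool, and host names (not taken from the actual xen snapper script):

  # Take a dated snapshot of the guest volume
  zfs snapshot tank/xen/guest1@daily-2026-01-19

  # Replicate to an internal backup pool and to a remote host;
  # syncoid transfers only the blocks changed since the last common snapshot
  syncoid tank/xen/guest1 backup/xen/guest1
  syncoid tank/xen/guest1 root@offsite.example.net:backup/xen/guest1

  # Prune snapshots that fall outside the configured retention
  zfs destroy tank/xen/guest1@daily-2026-01-12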

That sounds interesting and reminds me of an experimental setup I once built with QEMU/nvmm. I mirrored ZVOL snapshots to a NAS and exposed them as iSCSI block devices for recovery purposes.

ZFS would be my preferred setup as well, and it might be worth trying it on a secondary system to identify potential issues and possibly help with fixes. I don’t mind that NetBSD’s ZFS isn’t at the latest upstream level (still based on illumos, if I recall correctly), but I have run into some issues occasionally - possibly related to the combination with relatively weak hardware.

I have not come across a more elegant setup so far.

If you are interested in my snapper script and other small maintenance tools, I'm happy to provide them as open source.

Sure, I’d be happy to take a look. I always enjoy reviewing such tools for inspiration and to see how others solve problems similar to the ones I’m facing. That was also my motivation for showing my ccdtool as a kind of "poor man’s LVM".


Best regards
Matthias





--
For everyone who wants to understand and shape digital systems:
new articles every week on architecture, sovereignty, and system design.
👉 https://www.petermann-digital.de/blog
