Hi!

For a small-to-medium high-availability cluster based on simple standard
server hardware (SATA drives), we have built a DRBD 8 cluster with two
nodes.

2 x hardware nodes (each with an md (mdadm) RAID 5)
2 x DRBD, /dev/drbd0, running nearly fine
File system on the DRBD resource: currently OCFS2
Kernel on the hosts: 4.20.6, 8 CPUs
I/O scheduler on the hosts: mq-deadline
Network for DRBD: dedicated, point-to-point, 10 GbE
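
For illustration, a minimal DRBD 8 resource definition matching this setup
would look roughly like the following (host names, IP addresses and the
backing md device are placeholders, not our real values; the dual-primary
setting is what we assume is needed for OCFS2 mounted on both nodes):

  resource r0 {
      device      /dev/drbd0;
      disk        /dev/md0;            # backing md RAID 5 (placeholder name)
      meta-disk   internal;
      net {
          protocol C;                  # synchronous replication
          allow-two-primaries yes;     # assumed dual-primary for OCFS2 on both nodes
      }
      on nodeA {
          address 192.168.10.1:7788;   # dedicated point-to-point 10 GbE link (placeholder IPs)
      }
      on nodeB {
          address 192.168.10.2:7788;
      }
  }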

The virtualization hosts are managed with libvirt/KVM/QEMU.
Guests: Debian Jessie, Stretch, Buster (qcow2 images, guest I/O scheduler
noop/none, virtio-net, virtio storage, Btrfs file systems mounted with
noatime,nodiratime).
On demand: Windows 7, not permanently online.
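
To make the storage path concrete, a guest definition in libvirt for this
kind of setup would look roughly like the sketch below; the image path,
bridge name, iothread assignment and the cache/io attributes are
illustrative assumptions, not copied from our domains:

  <domain type='kvm'>
    ...
    <iothreads>4</iothreads>                    <!-- see point 2 below -->
    <devices>
      <disk type='file' device='disk'>
        <!-- qcow2 image on the DRBD/OCFS2 mount; cache/io values are assumed, not verified settings -->
        <driver name='qemu' type='qcow2' cache='none' io='native' iothread='1'/>
        <source file='/mnt/ocfs2/guest1.qcow2'/>
        <target dev='vda' bus='virtio'/>
      </disk>
      <interface type='bridge'>
        <source bridge='br0'/>                  <!-- placeholder bridge name -->
        <model type='virtio'/>                  <!-- virtio-net -->
      </interface>
    </devices>
  </domain>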

This setup works nearly fine, but:

1.
The available storage bandwidth in the guests is at most 30 MB/s, which
looks a little too slow.
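
If it helps to compare numbers: a guest-side sequential test along the
following lines would be a reasonable reference point (the fio parameters
are purely illustrative, not necessarily what we ran):

  # sequential 1 MiB writes with O_DIRECT, single job (illustrative parameters)
  fio --name=seqwrite --filename=/var/tmp/fio.test --rw=write --bs=1M \
      --size=2G --ioengine=libaio --direct=1 --numjobs=1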

2.
During heavy I/O such as Btrfs operations in one guest (scrub, balance,
defragment), the host develops a huge load problem: the host load climbs
towards 40, 50, 60, 80 ..., even though iothreads is set to 4 (as in the
libvirt sketch above).
Guest latency rises sharply, and NFS clients of that guest spam
"nfs server not responding" ...
After a few minutes the load impact resolves by itself...

3.
sometimes the guest looses network connectivity, even during high load
situations on DRBD, like read of drbd8/ocfs2 and write backup to
external disks.

4.
On OCFS2, read throughput breaks down to 10-20 MB/s during rsync backups
of the DRBD 8/OCFS2 resource.
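
The backup itself is a plain rsync from the OCFS2 mount to an external
disk, roughly of this shape (paths and options are illustrative):

  # read from the DRBD/OCFS2 mount, write to the external backup disk (placeholder paths)
  rsync -aHAX --numeric-ids /mnt/ocfs2/ /mnt/backup/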

About 99% of the production time everything is really fine; we are worried
about the remaining 1% while backups are running, and about the side
effects of the load.

Any ideas?
Has anybody seen those effects in practice?


Kind regards
Jürgen Sauer
-- 
Jürgen Sauer - automatiX GmbH,
+49-4209-4699, juergen.sa...@automatix.de
Managing Director: Jürgen Sauer,
Place of jurisdiction: Amtsgericht Walsrode • HRB 120986
VAT ID: DE191468481 • Tax no.: 36/211/08000
GPG public key for signature verification:
http://www.automatix.de/juergen_sauer_publickey.gpg

