Hi all,

I'm trying out a DRBD + Pacemaker HA cluster on Proxmox 5.2.

I have 2 identical servers connected with 2 x 1 Gbps links in bond_mode balance-rr.
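
For reference, the bond is set up along these lines in /etc/network/interfaces (the NIC names and the IP are placeholders):

auto bond0
iface bond0 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        bond-slaves eno1 eno2
        bond-mode balance-rr
        bond-miimon 100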

The bond is working fine; I get a transfer rate of 150 MB/s with scp.
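
scp goes through ssh encryption, so as a raw check of the bond something like iperf3 can be used (the peer IP is a placeholder):

iperf3 -s                  # on the peer node
iperf3 -c 10.0.0.2 -P 2    # on this node, 2 parallel streams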

I followed this guide: https://www.theurbanpenguin.com/drbd-pacemaker-ha-cluster-ubuntu-16-04/ and everything went smoothly up until:

drbdadm -- --overwrite-data-of-peer primary r0/0

cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 17A0C3A0AF9492ED4B9A418
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:10944 nr:0 dw:0 dr:10992 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:3898301536
    [>....................] sync'ed:  0.1% (3806932/3806944)M
    finish: 483:25:13 speed: 2,188 (2,188) K/sec
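
For reference, the r0 resource is defined essentially as in the guide; the hostnames, device and IPs below are placeholders:

resource r0 {
        protocol C;
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        on pve1 {
                address 10.0.0.1:7788;
        }
        on pve2 {
                address 10.0.0.2:7788;
        }
}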

The sync rate is horribly slow; at this pace it's going to take about 20 days for the two 4 TB volumes to sync!
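
As a quick sanity check of that estimate from the /proc/drbd numbers (oos is in KiB, speed in KiB/s):

echo $(( 3898301536 / 2188 / 86400 ))    # -> ~20 days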

That's almost 15 times slower than in the guide video (at 8:30): https://www.youtube.com/watch?v=WQGi8Nf0kVc

The volumes have been zeroed and contain no live data yet.

My sdb disks are hardware-RAID logical drives set up as RAID 50 with the controller defaults:

Strip size: 128 KB
Access policy: RW
Read policy: Normal
Write policy: Write Back with BBU
IO policy: Direct
Drive Cache: Disable
Disable BGI: No

Read performance looks good when tested with hdparm:

hdparm -tT /dev/sdb1

/dev/sdb1:
 Timing cached reads:   15056 MB in  1.99 seconds = 7550.46 MB/sec
 Timing buffered disk reads: 2100 MB in  3.00 seconds = 699.81 MB/sec
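
hdparm only measures reads, though; a rough sequential write test with dd would look like this (destructive, so only acceptable while the volume holds no data):

dd if=/dev/zero of=/dev/sdb1 bs=1M count=4096 oflag=direct conv=fsync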


This looks like a problem with the default DRBD settings.

Can anybody recommend optimal tweaks specific to my environment?
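
For what it's worth, these are the kinds of DRBD 8.4 resync/net options I have seen mentioned for tuning; the values below are only guesses, not something I have tested:

# excerpt for /etc/drbd.d/r0.res -- values are guesses
disk {
        c-plan-ahead  20;      # dynamic resync-rate controller
        c-fill-target 1M;
        c-min-rate    10M;
        c-max-rate    200M;    # roughly the 2 x 1 Gbps bond ceiling
}
net {
        max-buffers     8000;
        max-epoch-size  8000;
        sndbuf-size     0;     # let the kernel autotune the send buffer
}

If some of these are the wrong knobs for this setup, corrections are very welcome.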

Regards,
Adam

