I am running drbd 9 (drbdmanage 0.99.11, drbd-9.0.9) on CentOS 7 and have
set up a ZFS-backed drbd device. Having restarted the servers, one node
is now operating as diskless and primary. I noted in the logs that for
some reason the ZFS device was seen as busy when drbdmanage attempted to
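A rough way to confirm the diskless state and retry the attach by hand
would be something like the following; vol0 is a placeholder resource
name, not one from the report, and on a drbdmanage-managed cluster the
resource files are under drbdmanage's control, so treat this as a sketch:

drbdadm status vol0    # the affected node shows Diskless here
zfs list               # check that the backing zvol exists and is not held busy
drbdadm attach vol0    # retry attaching the backing disk manually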
Thanks for the tip, Rob.
On 05/10/17 07:49, Roberto Resoli wrote:
On 04/10/2017 12:37, Martyn Spencer wrote:
Hi Jay,
Thank you for your very detailed notes - they are very helpful. Out of
interest, is using cat /proc/drbd still useful with drbd 9? Would
watching drbdsetup status be the better approach now?
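For reference, under drbd 9 cat /proc/drbd only reports the module
version and no longer shows per-resource state; the status commands
replace it:

cat /proc/drbd    # drbd 9: version information only
drbdadm status    # per-resource state for all resources
drbdmon           # continuously updating view, much like the old watch cat /proc/drbd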
vgremove drbdpool   # if you get an error here, please reboot the server,
                    # or check pvscan for additional volumes mapped incorrectly by lvmonitor
vgcreate drbdpool /dev/sdb
On the working node:
drbdmanage rn nodename.domain.name --force   # remove the failed node from the cluster
drbdmanage an nodename.domain.name 10.x.x.x  # re-add it with its cluster IP address
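If it is useful, the re-add can be verified with drbdmanage's own
listing commands (names as in drbdmanage 0.99):

drbdmanage list-nodes          # the re-added node should reappear, eventually without pending flags
drbdmanage list-assignments    # watch the deploy actions drain as resources resync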
Jay
On 2 October 2017 at 1
Thank you for your assistance. Very helpful.
Regards,
Martyn
On 03/10/17 13:59, Robert Altnoeder wrote:
On 10/02/2017 12:37 PM, Martyn Spencer wrote:
I managed to put node1 into a state where it had pending actions that
I could not remove, so I decided to remove the node and then re-add it.
[...] Is there a way to force them to connect to node1 to resynchronise
before I continue?
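In case it helps others reading the archive, the usual low-level way to
force a resync in drbd 9 is roughly the following; vol0 is again a
placeholder resource name:

drbdadm invalidate vol0           # run on the node whose data should be discarded; it resyncs from the peer
drbdadm invalidate-remote vol0    # or run this on the node holding the good data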
Many thanks,
Martyn Spencer