Have you set the following option in your lvm.conf to restrict which VGs
can be activated?
Make sure you also add any local VG required for your system to boot; otherwise your machine will fail
to restart, and you will only notice after installing a new kernel, which generates a new initrd.
volume_list=[]
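As a sketch, the entry in the activation section of /etc/lvm/lvm.conf might look like the following (the VG names here are placeholders, not from the original setup):

```
activation {
    # Only these VGs are auto-activated at boot; DRBD-backed VGs
    # are deliberately excluded so the cluster manager activates them.
    volume_list = [ "vg_root", "vg_swap" ]
}
```

After changing this, the initrd usually needs to be rebuilt so the boot-time copy of lvm.conf matches.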
Eric Robinson wrote on 25/02/2016 14:49:
Yes indeed, I am using Pacemaker.
--
Eric Robinson
Chief Information Officer
Physician Select Management, LLC
775.885.2211 x 111
*From:* [email protected]
[mailto:[email protected]] *On Behalf Of *Ricardo Branco
*Sent:* Thursday, February 25, 2016 1:06 AM
*To:* [email protected]
*Subject:* Re: [DRBD-user] Having Trouble with LVM on DRBD
Are you using pacemaker?
*From: *Eric Robinson
*Sent: *Thursday, 25 February 2016 08:47
*To: *[email protected] <mailto:[email protected]>
*Subject: *[DRBD-user] Having Trouble with LVM on DRBD
I have a 2-node cluster, where each node is primary for one drbd volume and secondary for the other node’s drbd
volume. Replication is A->B for drbd0 and A<-B for drbd1. I have a logical volume and filesystem on each drbd device.
When I try to failover resources, the filesystem fails to mount because lvdisplay shows the logical volume is listed
as “not available” on the target node. Is there some trick to getting LVM on DRBD to fail over properly?
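As a quick check, the LV can be activated by hand on the target node after DRBD promotion with `vgchange -ay <vgname>`. For Pacemaker, the usual approach is to let the cluster activate the VG via the LVM resource agent, ordered after DRBD promotion and before the filesystem mount. A hedged sketch, with all resource and VG names (ms_drbd0, drbd0_vg, vg_drbd0, fs0) being placeholders, not taken from this cluster:

```
# Activate the DRBD-backed VG only where DRBD is Primary:
pcs resource create drbd0_vg ocf:heartbeat:LVM volgrpname=vg_drbd0 exclusive=true
pcs resource create fs0 ocf:heartbeat:Filesystem device=/dev/vg_drbd0/lv_data \
    directory=/data fstype=ext4
# Ordering: promote DRBD, then activate the VG, then mount:
pcs constraint order promote ms_drbd0 then start drbd0_vg
pcs constraint order start drbd0_vg then start fs0
# Keep the VG (and thus the mount) on the DRBD Primary:
pcs constraint colocation add drbd0_vg with master ms_drbd0 INFINITY
```

Combined with a volume_list that excludes the DRBD-backed VG, this leaves activation entirely to the cluster, so the LV is no longer "not available" on failover.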
--
Eric Robinson
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user