You described your problem, but didn't show any logs or your cluster config.

2016-08-10 23:47 GMT+02:00 Darren Kinley <dkin...@mdacorporation.com>:
> The default lvm.conf filter includes /dev/drbdX:
>
> # By default we accept every block device except udev names, floppy and
> # cdrom drives:
> filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "r|/dev/fd.*|", "r|/dev/cdrom|", "a/.*/" ]
>
> -----Original Message-----
> From: emmanuel segura [mailto:emi2f...@gmail.com]
> Sent: Wednesday, August 10, 2016 2:33 PM
> To: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
> Subject: Re: [ClusterLabs] ocf:heartbeat:LVM or /etc/lvm/lvm.conf settings question
>
> Does your LVM filter include the DRBD devices (/dev/drbdX)?
>
> 2016-08-10 21:38 GMT+02:00 Darren Kinley <dkin...@mdacorporation.com>:
>> Hi,
>>
>> I have an LVM logical volume and used DRBD to replicate it to another
>> server.
>> The /dev/drbd0 device holds PVs/VGs/LVs that are mostly working.
>> I have colocation and order constraints that bring up a VIP, promote
>> DRBD, and start LVM plus the file systems.
>>
>> The problem arises when I take the active node offline.
>> At that point the VIP and the DRBD master move, but the PVs/VGs are not
>> scanned/activated, the file systems are not mounted, and "crm status"
>> reports an error for the ocf:heartbeat:LVM resource:
>>
>> "Volume group [replicated] does not exist or contains an error!
>> Using volume group(s) on command line."
>>
>> At this point the /dev/drbd0 physical volume is not known to the
>> server, and the fix requires:
>>
>> root# pvscan --cache /dev/drbd0
>> root# crm resource cleanup grp-ars-lvm-fs
>>
>> Is there an ocf:heartbeat:LVM setting or an /etc/lvm/lvm.conf setting
>> that forces the PVs/VGs to come online?
>> It is not clear whether the RA's "exclusive" or "tag" settings are
>> needed, or whether there is a corresponding lvm.conf setting.
>>
>> Is the lvm.conf setting "write_cache_state = 0", recommended by the
>> DRBD User's Guide, correct?
>>
>> Thanks,
>> Darren
>>
>> _______________________________________________
>> Users mailing list: Users@clusterlabs.org
>> http://clusterlabs.org/mailman/listinfo/users
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
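[Editor's note] On the filter question raised in the thread: a common approach with DRBD-backed LVM, sketched here as an assumption rather than a confirmed fix for this cluster, is to accept the DRBD device explicitly and reject its backing device, so LVM never scans the raw disk underneath /dev/drbd0. The backing-device name /dev/sdb1 below is hypothetical.

```
# /etc/lvm/lvm.conf (sketch; /dev/sdb1 stands in for the real DRBD backing device)
devices {
    # Accept DRBD devices, reject the backing device, accept everything else.
    filter = [ "a|/dev/drbd.*|", "r|/dev/sdb1|", "a|.*|" ]

    # Recommended by the DRBD User's Guide: don't persist the device cache,
    # so devices that appear after boot (e.g. /dev/drbd0 on failover) are
    # rescanned instead of being masked by a stale .cache file.
    write_cache_state = 0
}
```

With the backing device rejected and the cache disabled, a `vgchange -ay` on failover should find the PV on /dev/drbd0 without a manual `pvscan --cache`.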
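[Editor's note] On the "exclusive" question, a typical crm configuration for this kind of stack looks like the following sketch. The group name grp-ars-lvm-fs and volume group "replicated" come from the thread; the DRBD master/slave resource name, filesystem device, and mount point are assumptions.

```
# crm configure sketch; ms-drbd, device, and directory are hypothetical names
primitive res-lvm ocf:heartbeat:LVM \
    params volgrpname="replicated" exclusive="true" \
    op monitor interval="30s" timeout="30s"
primitive res-fs ocf:heartbeat:Filesystem \
    params device="/dev/replicated/data" directory="/mnt/data" fstype="ext4"
group grp-ars-lvm-fs res-lvm res-fs
colocation col-lvm-on-drbd inf: grp-ars-lvm-fs ms-drbd:Master
order ord-drbd-before-lvm inf: ms-drbd:promote grp-ars-lvm-fs:start
```

exclusive="true" asks the agent to activate the VG exclusively so it cannot be active on both nodes at once; it does not by itself fix a device-scanning problem, which is why the filter and write_cache_state settings still matter.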