To Strahil and Klaus – I created the vdo devices using default parameters, so 'auto' mode was selected by default. "vdo status" shows that the current write policy is async. The underlying drbd devices are running protocol C, so I assume vdo should be changed to sync mode?
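For reference, the write policy can be checked and changed without recreating the volume, using the vdo manager CLI (the volume name vdo0 here is a placeholder for your actual volume):

```
# Show the configured/current write policy for one volume
vdo status --name=vdo0 | grep -i 'write policy'

# Switch the policy to sync on the named volume
vdo changeWritePolicy --name=vdo0 --writePolicy=sync
```

Note that sync mode should only be used if the storage stack below VDO actually honors flush/FUA; otherwise it gives a false sense of safety.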
The VDO service is disabled and is solely under the control of Pacemaker, but I have been unable to get a resource agent to work reliably. I have two nodes. Under normal operation, Node A is primary for disk drbd0, and device vdo0 rides on top of that; Node B is primary for disk drbd1, and device vdo1 rides on top of that. In the event of a node failure, the vdo device and the underlying drbd disk should migrate to the other node, and that node will then be primary for both drbd disks and both vdo devices.

The default systemd vdo service does not work because it uses the --all flag and starts/stops all vdo devices. I noticed that there is also a vdo-start-by-dev.service, but there is no documentation on how to use it. I wrote my own vdo-by-dev systemd service, but that did not work reliably either. Then I noticed that there is already an OCF resource agent named vdo-vol, but that did not work either. I finally tried writing my own OCF-compliant RA, and then an LSB-compliant script, but none of those worked very well.

My big problem is that I don't understand how Pacemaker uses the monitor action. Pacemaker would often fail vdo resources because the monitor action received an error when it ran on the standby node. For example, when Node A is primary for disk drbd1 and device vdo1, Pacemaker would fail device vdo1 because when it ran the monitor action on Node B, the RA reported an error. But OF COURSE it would report an error, because disk drbd1 is secondary on that node and is therefore inaccessible to the vdo driver. I DON'T UNDERSTAND. -Eric

From: Strahil Nikolov <hunter86...@yahoo.com>
Sent: Monday, May 17, 2021 5:09 AM
To: kwenn...@redhat.com; Klaus Wenninger <kwenn...@redhat.com>; Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>; Eric Robinson <eric.robin...@psmnv.com>
Subject: Re: [ClusterLabs] DRBD + VDO HowTo?

Have you tried to set VDO in async mode ?
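On the monitor question above: Pacemaker runs a one-time probe (a monitor with interval=0) on every node, including the standby, to discover where the resource is currently running. On a node where the resource is stopped, the monitor must exit with OCF_NOT_RUNNING (7), not a generic error (1); an error return is what makes Pacemaker record a failure there. A minimal sketch of the monitor logic only (hypothetical RA fragment, device path assumed to be the dm node VDO creates):

```shell
#!/bin/sh
# OCF exit codes relevant to monitor
OCF_SUCCESS=0
OCF_ERR_GENERIC=1
OCF_NOT_RUNNING=7

vdo_monitor() {
    vol="$1"
    # If the mapped device exists, the VDO volume is started on this node.
    if [ -b "/dev/mapper/$vol" ]; then
        return $OCF_SUCCESS
    fi
    # The backing DRBD device being Secondary (or absent) on this node
    # just means the volume is not running HERE. Report that cleanly so
    # the probe on the standby node succeeds instead of logging an error.
    return $OCF_NOT_RUNNING
}
```

With this distinction, the probe for vdo1 on Node B returns 7, and Pacemaker simply records the resource as stopped on that node rather than failing it.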
Best Regards, Strahil Nikolov

On Mon, May 17, 2021 at 8:57, Klaus Wenninger <kwenn...@redhat.com> wrote:
Did you try VDO in sync mode, in case the flush/FUA stuff isn't working through the layers? Did you check that the VDO service is disabled and solely under pacemaker control, and that the dependencies are set correctly? Klaus

On 5/17/21 6:17 AM, Eric Robinson wrote:
Yes, DRBD is working fine.

From: Strahil Nikolov <hunter86...@yahoo.com>
Sent: Sunday, May 16, 2021 6:06 PM
To: Eric Robinson <eric.robin...@psmnv.com>; Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
Subject: RE: [ClusterLabs] DRBD + VDO HowTo?

Are you sure that the DRBD is working properly ?

Best Regards, Strahil Nikolov

On Mon, May 17, 2021 at 0:32, Eric Robinson <eric.robin...@psmnv.com> wrote:
Okay, it turns out I was wrong. I thought I had it working, but I keep running into problems. Sometimes when I demote a DRBD resource on Node A, promote it on Node B, and try to mount the filesystem, the system complains that it cannot read the superblock. But when I move the DRBD primary back to Node A, the filesystem is mountable again. I also have problems with filesystems not mounting because the vdo devices are not present. All kinds of issues.

From: Users <users-boun...@clusterlabs.org> On Behalf Of Eric Robinson
Sent: Friday, May 14, 2021 3:55 PM
To: Strahil Nikolov <hunter86...@yahoo.com>; Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
Subject: Re: [ClusterLabs] DRBD + VDO HowTo?

Okay, I have it working now. The default systemd service definitions did not work, so I created my own.
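For what it's worth, a per-device unit along these lines avoids the --all behaviour of the stock vdo.service (unit name and paths are hypothetical; adjust to your install, and leave it disabled so only Pacemaker starts it):

```ini
# /etc/systemd/system/vdo-vdo0.service -- starts/stops only vdo0
[Unit]
Description=VDO volume vdo0
# Must come up after the backing DRBD device is available
After=drbd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/vdo start --name=vdo0
ExecStop=/usr/bin/vdo stop --name=vdo0
```

There is deliberately no [Install] section: the unit should never be enabled at boot, or it will race with (and fight) the cluster manager.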
From: Strahil Nikolov <hunter86...@yahoo.com>
Sent: Friday, May 14, 2021 3:41 AM
To: Eric Robinson <eric.robin...@psmnv.com>; Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
Subject: RE: [ClusterLabs] DRBD + VDO HowTo?

There is no VDO RA as far as I know, but you can use a systemd service as a resource. Yet the VDO service that comes with the OS is a generic one and controls all VDOs, so you need to create your own vdo service.

Best Regards, Strahil Nikolov

On Fri, May 14, 2021 at 6:55, Eric Robinson <eric.robin...@psmnv.com> wrote:
I created the VDO volumes fine on the drbd devices, formatted them as xfs filesystems, created cluster filesystem resources, and the cluster is using them. But the cluster won't fail over. Is there a VDO cluster RA out there somewhere already?

From: Strahil Nikolov <hunter86...@yahoo.com>
Sent: Thursday, May 13, 2021 10:07 PM
To: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>; Eric Robinson <eric.robin...@psmnv.com>
Subject: Re: [ClusterLabs] DRBD + VDO HowTo?

For DRBD there is enough info, so let's focus on VDO. There is a systemd service that starts all VDOs on the system. You can create the VDO once drbd is open for writes, and then you can create your own systemd '.service' file which can be used as a cluster resource.

Best Regards, Strahil Nikolov

On Fri, May 14, 2021 at 2:33, Eric Robinson <eric.robin...@psmnv.com> wrote:
Can anyone point to a document on how to use VDO de-duplication with DRBD? Linbit has a blog page about it, but it was last updated 6 years ago and the embedded links are dead.
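On the failover side, whichever resource ends up managing the vdo device also needs ordering and colocation constraints against the DRBD master, and the filesystem needs the same against the vdo resource. A pcs sketch (resource names vdo0, drbd0-clone, and fs0 are hypothetical):

```
# vdo0 may only run where drbd0 is Primary, and only after promotion
pcs constraint colocation add vdo0 with master drbd0-clone INFINITY
pcs constraint order promote drbd0-clone then start vdo0

# the filesystem rides on the vdo device
pcs constraint colocation add fs0 with vdo0 INFINITY
pcs constraint order start vdo0 then start fs0
```

Without the "promote ... then start" ordering, the vdo start can race DRBD promotion, which would explain vdo devices intermittently being absent at mount time.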
https://linbit.com/blog/albireo-virtual-data-optimizer-vdo-on-drbd/

-Eric

Disclaimer : This email and any files transmitted with it are confidential and intended solely for intended recipients. If you are not the named addressee you should not disseminate, distribute, copy or alter this email. Any views or opinions presented in this email are solely those of the author and might not represent those of Physician Select Management. Warning: Although Physician Select Management has taken reasonable precautions to ensure no viruses are present in this email, the company cannot accept responsibility for any loss or damage arising from the use of this email or attachments.

_______________________________________________
Manage your subscription: https://lists.clusterlabs.org/mailman/listinfo/users
ClusterLabs home: https://www.clusterlabs.org/