Re: [ClusterLabs] design question to DRBD
On 06/23/2016 04:57 AM, Lentes, Bernd wrote:
> What i mean with "less complicated" is that i prefer to have everything
> managed by pacemaker and not some stuff by pacemaker and some stuff by init.
> This is easier to oversee.

I'd agree with that, except I am regularly locking up pacemaker-controlled
active/passive DRBD with:

> Jun 25 15:49:36 lionfish drbd(drbd_storage)[28984]: WARNING: raid still Primary, demoting.
> Jun 25 15:49:36 lionfish kernel: block drbd0: State change failed: Device is held open by someone
> Jun 25 15:49:36 lionfish kernel: block drbd0: state = { cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate r- }
> Jun 25 15:49:36 lionfish kernel: block drbd0: wanted = { cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate r- }
> Jun 25 15:49:36 lionfish drbd(drbd_storage)[28984]: ERROR: raid: Called drbdadm -c /etc/drbd.conf secondary raid
> Jun 25 15:49:36 lionfish drbd(drbd_storage)[28984]: ERROR: raid: Exit code 11
> Jun 25 15:49:36 lionfish drbd(drbd_storage)[28984]: ERROR: raid: Command output:
> Jun 25 15:49:36 lionfish drbd(drbd_storage)[28984]: WARNING: raid still Primary, demoting.

-- and repeat until I hit the power button. All it takes is adding a resource
that depends on the DRBD FS and fails to start.

So at this point I'm having doubts about active/passive DRBD + pacemaker being
more maintainable than active/active DRBD + GFS2. (That's of course because I
haven't looked into the GFS lock manager: I'm sure it sucks just as hard, only
differently.)

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
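When a demote fails with "Device is held open by someone" as in the log above, something on the node still has the DRBD device or its mounted filesystem open. A quick way to see what that is could look like the sketch below (the device path and mount point are assumptions; adjust to your setup):

```shell
# Sketch: find out what still holds the DRBD device open before demotion.
# /dev/drbd0 and /srv/www are assumed names - substitute your own.

# Processes with open files on the mounted filesystem or the device:
fuser -vm /dev/drbd0

# Same information with full per-process details:
lsof /dev/drbd0

# DRBD's own view of connection/role/disk state:
cat /proc/drbd

# If nothing shows up, check for lingering mounts of the filesystem:
grep drbd0 /proc/mounts
```

Killing or stopping whatever `fuser`/`lsof` report (often a service that was started outside pacemaker's control) normally lets `drbdadm secondary` succeed on the next monitor cycle.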
Re: [ClusterLabs] design question to DRBD
- On Jun 22, 2016, at 11:48 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:

> On 06/22/2016 04:29 PM, Klaus Wenninger wrote:
>> On 06/22/2016 11:17 PM, Lentes, Bernd wrote:
>>> I'm thinking about active/active. But i think active/passive with a
>>> non-cluster fs is less complicated.
>> But you will need something to control DRBD - especially in the
>> active/passive case.
>> And the services/IPs would probably have to be pulled to the active side.
>
> It looks like with modern Linux kernels you don't have to
> re-bind()/listen() anymore when an IP address is added. So you can start
> services bound to '*' from init and have pacemaker only manage the
> shared IP address.
>
> But yes, with active/passive DRBD you need something to control DRBD and
> mount the DRBD FS, and then start the services that depend on the DRBD FS.
>
> Active/active should let you have your filesystem mounted on both nodes
> at once and have things running from init. I never tried it myself so I
> don't know which of them would be "less complicated".

What I mean with "less complicated" is that I prefer to have everything
managed by pacemaker and not some stuff by pacemaker and some stuff by init.
This is easier to oversee.

Bernd

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Dr. Alfons Enhsen, Renate Schlusen (komm.)
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671
Re: [ClusterLabs] design question to DRBD
On 06/22/2016 04:29 PM, Klaus Wenninger wrote:
> On 06/22/2016 11:17 PM, Lentes, Bernd wrote:
>> I'm thinking about active/active. But i think active/passive with a
>> non-cluster fs is less complicated.
> But you will need something to control DRBD - especially in the
> active/passive case.
> And the services/IPs would probably have to be pulled to the active side.

It looks like with modern Linux kernels you don't have to
re-bind()/listen() anymore when an IP address is added. So you can start
services bound to '*' from init and have pacemaker only manage the
shared IP address.

But yes, with active/passive DRBD you need something to control DRBD and
mount the DRBD FS, and then start the services that depend on the DRBD FS.

Active/active should let you have your filesystem mounted on both nodes
at once and have things running from init. I never tried it myself so I
don't know which of them would be "less complicated".

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
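The minimal variant described above — services from init bound to '*', with pacemaker managing only the floating address — could be configured roughly like this (a sketch in crm shell syntax; the address, netmask and NIC are made-up placeholders):

```shell
# Sketch: pacemaker manages only the floating service IP; everything
# else runs from init. 192.168.1.100 and eth0 are placeholders.
crm configure primitive shared_ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 nic=eth0 \
    op monitor interval=30s
```

Services started by init and listening on 0.0.0.0 pick up the address as soon as pacemaker brings it up on the node, so only the one resource needs cluster management.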
Re: [ClusterLabs] design question to DRBD
On 06/22/2016 11:17 PM, Lentes, Bernd wrote:
> Original message
> From: Dimitri Maziuk <dmaz...@bmrb.wisc.edu>
> Date: 22.06.2016 21:23 (GMT+01:00)
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] design question to DRBD
>
> On 06/22/2016 02:13 PM, Lentes, Bernd wrote:
>> I'm thinking about starting drbd and the ocfs fs by init. I don't
>> see the strong need having it controlled by pacemaker.
>> But of course the mix is more difficult to maintain.
>
> Are you going to use active-active drbd with a cluster filesystem?
> Otherwise it'll be mounted on one node only and you can't run your
> webapp on the other, as DocumentRoot etc. are not available.
>
> I'm thinking about active/active. But I think active/passive with a
> non-cluster fs is less complicated.

But you will need something to control DRBD - especially in the
active/passive case.
And the services/IPs would probably have to be pulled to the active side.
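The active/passive arrangement Klaus describes — DRBD controlled as a master/slave resource, the filesystem mounted only on the Primary, and services pulled along — is usually expressed with ordering and colocation constraints. A sketch in crm shell syntax (resource names, device and mount point are assumptions):

```shell
# Sketch: classic active/passive DRBD under pacemaker.
# "r0", /dev/drbd0, ext4 and /srv/www are assumed names - adjust them.
crm configure primitive drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
crm configure ms ms_drbd_r0 drbd_r0 \
    meta master-max=1 clone-max=2 notify=true
crm configure primitive fs_www ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/srv/www fstype=ext4

# Mount only where DRBD is Primary, and only after promotion:
crm configure colocation fs_on_master inf: fs_www ms_drbd_r0:Master
crm configure order fs_after_drbd inf: ms_drbd_r0:promote fs_www:start
```

Services and the floating IP would then be ordered after `fs_www` in the same way, so everything moves as one group to whichever node holds the Primary.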
Re: [ClusterLabs] design question to DRBD
Original message
From: Dimitri Maziuk <dmaz...@bmrb.wisc.edu>
Date: 22.06.2016 21:23 (GMT+01:00)
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] design question to DRBD

On 06/22/2016 02:13 PM, Lentes, Bernd wrote:
> I'm thinking about starting drbd and the ocfs fs by init. I don't see the
> strong need having it controlled by pacemaker.
> But of course the mix is more difficult to maintain.

> Are you going to use active-active drbd with a cluster filesystem?
> Otherwise it'll be mounted on one node only and you can't run your
> webapp on the other, as DocumentRoot etc. are not available.

I'm thinking about active/active. But I think active/passive with a
non-cluster fs is less complicated.

Bernd
Re: [ClusterLabs] design question to DRBD
On 06/22/2016 02:13 PM, Lentes, Bernd wrote:
> I'm thinking about starting drbd and the ocfs fs by init. I don't see the
> strong need having it controlled by pacemaker.
> But of course the mix is more difficult to maintain.

Are you going to use active-active drbd with a cluster filesystem?
Otherwise it'll be mounted on one node only and you can't run your
webapp on the other, as DocumentRoot etc. are unavailable there.

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
Re: [ClusterLabs] design question to DRBD
- On Jun 22, 2016, at 8:34 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:

> On 06/22/2016 01:00 PM, Lentes, Bernd wrote:
>> - On Jun 22, 2016, at 7:17 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
>>> Does your webapp ever write to /srv/www?
>> it does.
>
> Yeah, OK, in that case you want DRBD so the writes go to both nodes at
> once.
>
> If you have to use DRBD anyway, then you might as well put DocumentRoot,
> cgi-bin, etc. on DRBD and use an ocf:heartbeat:symlink resource to point
> apache at the right directories. Then you have the cluster start apache
> after the symlink is updated (after the DRBD FS is mounted).
>
> Similarly, you might as well share the database storage the same way and
> let the cluster control the DB server.
>
> You could do a mix-and-match setup with some bits completely independent
> and running out of init, while some other parts are tied to DRBD and
> controlled by the cluster, but that's going to be a pain maintenance-wise.

My idea is to use MySQL replication. The resource agent manages which node
is master and which is slave. The database is on a local hard disk on each
node. The IP pointing to the webapp is also a resource managed by the
cluster. If I have /srv/www on DRBD, directories like DocumentRoot and
cgi-bin (which are situated under /srv/www) are automatically on the DRBD
device.

I'm thinking about starting drbd and the ocfs fs by init. I don't see the
strong need having it controlled by pacemaker.
But of course the mix is more difficult to maintain.

Bernd
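The replicated-MySQL idea above maps onto the ocf:heartbeat:mysql resource agent run as a master/slave resource, with each node keeping its database on local disk. A rough sketch in crm shell syntax (the replication credentials, paths and resource names are placeholders, not a tested configuration):

```shell
# Sketch: MySQL with built-in replication under pacemaker control.
# Credentials, paths and the assumed "webapp_ip" IPaddr2 resource
# are placeholders - adjust to your environment.
crm configure primitive mysqld ocf:heartbeat:mysql \
    params binary=/usr/sbin/mysqld \
           config=/etc/my.cnf \
           datadir=/var/lib/mysql \
           replication_user=repl replication_passwd=secret \
    op monitor interval=20s role=Master \
    op monitor interval=30s role=Slave
crm configure ms ms_mysql mysqld \
    meta master-max=1 clone-max=2 notify=true

# Keep the webapp's floating IP on the node where MySQL is master:
crm configure colocation ip_with_mysql inf: webapp_ip ms_mysql:Master
```

The agent handles promotion/demotion and repoints the slave at the new master, which is what "the resource agent manages which node is master and which is slave" comes down to in practice.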
Re: [ClusterLabs] design question to DRBD
On 06/22/2016 01:00 PM, Lentes, Bernd wrote:
> - On Jun 22, 2016, at 7:17 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
>> Does your webapp ever write to /srv/www?
> it does.

Yeah, OK, in that case you want DRBD so the writes go to both nodes at
once.

If you have to use DRBD anyway, then you might as well put DocumentRoot,
cgi-bin, etc. on DRBD and use an ocf:heartbeat:symlink resource to point
apache at the right directories. Then you have the cluster start apache
after the symlink is updated (after the DRBD FS is mounted).

Similarly, you might as well share the database storage the same way and
let the cluster control the DB server.

You could do a mix-and-match setup with some bits completely independent
and running out of init, while some other parts are tied to DRBD and
controlled by the cluster, but that's going to be a pain maintenance-wise.

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
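The symlink-plus-ordering arrangement described above could be sketched like this (crm shell syntax; all paths and resource names, including the assumed Filesystem resource "fs_www", are illustrative):

```shell
# Sketch: point apache at DRBD-backed directories via ocf:heartbeat:symlink,
# ordered after the DRBD filesystem mount. Paths are illustrative.
crm configure primitive www_link ocf:heartbeat:symlink \
    params link=/srv/www target=/mnt/drbd/www
crm configure primitive apache ocf:heartbeat:apache \
    params configfile=/etc/apache2/httpd.conf \
    op monitor interval=30s

# fs_www is assumed to be the Filesystem resource mounting the DRBD device:
crm configure order link_after_fs inf: fs_www www_link
crm configure order apache_after_link inf: www_link apache
crm configure colocation apache_with_fs inf: apache fs_www
```

On the passive node the symlink resource is stopped, so nothing there accidentally serves stale content from a local directory.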
Re: [ClusterLabs] design question to DRBD
- On Jun 22, 2016, at 7:17 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:

> On 06/22/2016 11:28 AM, Lentes, Bernd wrote:
>> yes, that's a good hint. I will not synchronize /usr/lib/perl with DRBD.
>> But for /srv/www it should be fine?
>
> Does your webapp ever write to /srv/www? If not I would consider running
> two copies of everything with ZFS as a backing store and transactional
> replication on the database side, and have only the floating IP address
> controlled by the cluster.

Hi Dimitri,

it does.

Bernd
Re: [ClusterLabs] design question to DRBD
On 06/22/2016 11:28 AM, Lentes, Bernd wrote:
> yes, that's a good hint. I will not synchronize /usr/lib/perl with DRBD.
> But for /srv/www it should be fine?

Does your webapp ever write to /srv/www? If not, I would consider running
two copies of everything with ZFS as a backing store and transactional
replication on the database side, and have only the floating IP address
controlled by the cluster.

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
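The ZFS-backed alternative mentioned above usually comes down to periodic snapshot shipping from the active copy to the standby. One round of that could be sketched as follows (the pool/dataset name, snapshot names and the "standby" hostname are made-up placeholders):

```shell
# Sketch: one round of ZFS snapshot replication to a standby node.
# "tank/www", the snapshot names and "standby" are placeholders.
SNAP="tank/www@$(date +%Y%m%d%H%M)"
zfs snapshot "$SNAP"

# Incremental send relative to the previous shipped snapshot,
# applied on the standby (-F rolls back local changes there):
zfs send -i tank/www@previous "$SNAP" | ssh standby zfs receive -F tank/www
```

Run from cron, this keeps the standby's read-only copy close to current without any shared block device, which is why only the floating IP then needs cluster control.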
Re: [ClusterLabs] design question to DRBD
- On Jun 22, 2016, at 3:59 PM, Klaus Wenninger kwenn...@redhat.com wrote:

> On 06/22/2016 02:30 PM, Lentes, Bernd wrote:
>> Hi,
>>
>> we have a two-node cluster. It is running a web application. The web
>> application needs a MySQL database and has static and dynamic (Perl
>> scripts) webpages.
>> I will make the DB HA with MySQL replication.
>> From time to time it's likely that something in the webapp is changed, so
>> we have to edit some scripts or install a Perl module.
>> I would like to have the changes automatically synchronized to the other
>> side, without any manual intervention. And also without knowing which node
>> is the active one.
>> I'm thinking about putting /srv/www and /usr/lib/perl5 on a DRBD device in
>> an active/active setup. For that I need a cluster FS and DLM, right?
>> This should synchronize automatically in both directions?
>
> How are you applying your updates?
> Especially under /usr/lib/perl5 the package management should be used.
> That said, you would confuse your package management when the files change
> without the local database being updated.

Hi Klaus,

yes, that's a good hint. I will not synchronize /usr/lib/perl with DRBD.
But for /srv/www it should be fine?

>> Do the services have to be a resource for this setup of DRBD, or is it
>> possible to have them as "normal" services, started by init?
>> Or is it better to have them as resources because other services will also
>> run in this HA-system (likely some virtual machines)?
>
> If you are having the mentioned subtrees on DRBD, mounting a filesystem
> after the drbd-device is up will be involved, and doing that is probably
> not a good idea while the subtree is actively used.
> So I would opt for having everything that is using the subtree under HA
> control.

And that's a more consistent setup.

Bernd
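For the active/active variant discussed above, DRBD itself has to allow two Primaries before a cluster filesystem such as OCFS2 (with DLM) can be mounted on both nodes. The relevant resource definition looks roughly like the fragment below (DRBD 8.x syntax; resource name, devices, disks, hostnames and addresses are all assumptions):

```
# Sketch of a dual-primary DRBD resource definition (e.g. /etc/drbd.d/r0.res).
# Resource, device, disk, hostnames and addresses are assumptions.
resource r0 {
    net {
        allow-two-primaries;
        # Split-brain recovery policies matter much more in dual-primary:
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    startup {
        become-primary-on both;
    }
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

Even with this in place, fencing/STONITH is effectively mandatory: with both nodes writable, a split brain corrupts data rather than merely diverging it.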
Re: [ClusterLabs] design question to DRBD
On 22/06/16 08:30 AM, Lentes, Bernd wrote:
> Hi,
>
> we have a two node cluster. It is running a Web-Application.
> Web-Application needs a MySQL Database, has static and dynamic
> (perlscripts) webpages.
> I will make the DB HA with MySQL replication.
> From time to time it's likely that something in the webapp is changed, so
> we have to edit some scripts or install a perl module.
> I would like have the changes automatically synchronized to the other
> side, without any manual intervention. And also without knowing which node
> is the active one.
> I'm thinking about putting /srv/www and /usr/lib/perl5 on a DRBD device in
> an active/active webapp. For that i need a cluster FS and DLM, right ?
> This should synchronize automatically in both directions ?
> Do the services have to be a ressource for this setup of DRBD or is it
> possible to have them as "normal" services, started by init.
> Or is it better to have them as ressources because other services will
> also run in this HA-system (likely some virtual machines) ?
>
> Thanks.
>
> Bernd

Our approach to this problem is a 2-node cluster using DRBD to back virtual
machines, and then we make the VMs the HA service. This way, to you and
whomever works on the system, you can ignore the HA stuff and treat it like
a regular server. If anything happens to the current host, the server
reboots on the good node. Being a VM, reboots are generally quite quick.

We've detailed how we build our system here:

https://alteeve.ca/w/AN!Cluster_Tutorial_2

However, you may want to use pacemaker instead of cman+rgmanager. In that
case, the tutorial above is still useful as it explains the approach. You
can just mentally switch "cman+rgmanager" for "pacemaker", adjust the actual
commands, and the rest of the guide works fine.

--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
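Under pacemaker, the VM-as-HA-service approach described above is usually expressed with the ocf:heartbeat:VirtualDomain agent; a minimal sketch (the libvirt XML path and resource name are placeholders, and the definition file must live on storage visible to both nodes):

```shell
# Sketch: a DRBD-backed VM as the single HA service (crm shell syntax).
# "vm_web" and the XML path are placeholders.
crm configure primitive vm_web ocf:heartbeat:VirtualDomain \
    params config=/shared/definitions/webserver-vm.xml \
           hypervisor=qemu:///system \
    meta allow-migrate=true \
    op monitor interval=30s timeout=30s
```

With `allow-migrate=true` the cluster can live-migrate the guest for planned moves, falling back to a stop/start (the quick reboot mentioned above) on node failure.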