[ClusterLabs] The cluster stack in Debian
Hello,

I thought it might be nice (with a good recommendation from the
ClusterLabs IRC community) to send an email to the users ML describing
the current state of the cluster stack as it exists in Debian.

As some may already know, the Debian-HA team missed the freeze date for
the Jessie release of Debian. As a result, any packages maintained by
the Debian-HA-Maintainers team which were not ready for migration to
testing were not included in the release of Debian Jessie, and Debian
policy then permanently excludes those packages from the Jessie (main)
repository. Our only option at this point is to prepare the cluster
stack and provide an alternative way for Debian users to obtain these
packages until the next Debian release.

With that said, the team has become active again and is full of new
members and sponsors. We have been working quite diligently since
October of 2014 to prepare the latest cluster stack for Debian. So far
we have uploaded (into unstable/sid):

- Libqb
- Pacemaker
- Corosync
- DLM
- DRBD/Utils
- CRMSH
- Fence-Agents
- Ruby-Rpam-Ruby19
- Ruby-Monkey-Lib
- PCS

Some of the above packages have already migrated to testing; others are
still in the NEW queue. We have also prepared, and are reviewing prior
to upload:

- Cluster-Glue
- Resource-Agents

For anyone interested in getting involved and helping out, we have much
left to do - even once all the packages are in Debian sid/stretch. For
more information, you can visit our wiki [1], browse our public
repository [2], or check our homepage [3] from time to time for updates
(once we start regularly posting there).

I have also been personally hosting a PPA [4] for Debian users until we
are able to provide a more official solution. Please be advised that the
packages on my PPA are not 100% up-to-date at this time, but they are
still usable (I'll be updating them soon!).
I'll do my best to post updates regularly, if for no other reason than
to keep interested parties informed of our progress!

References:

[1] https://wiki.debian.org/Debian-HA/
[2] https://anonscm.debian.org/cgit/debian-ha/
[3] https://debian-ha.alioth.debian.org/
[4] https://ppa.mmogp.com/

Best regards,

Richard B Winters (Rik/devrikx)

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
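A sketch of how the PPA [4] might be wired into apt until the packages
land in the official archive (the suite name "jessie" and component
"main" are guesses, not confirmed in this message; check the PPA page
for the real values and signing key):

```shell
# Hypothetical apt setup for the Debian-HA PPA.
# The suite ("jessie") and component ("main") are assumptions --
# consult https://ppa.mmogp.com/ for the actual repository layout.
echo "deb https://ppa.mmogp.com/ jessie main" | \
    sudo tee /etc/apt/sources.list.d/debian-ha-ppa.list

sudo apt-get update
# Package names as uploaded to unstable/sid:
sudo apt-get install pacemaker corosync crmsh
```

The same package names should apply once the stack migrates to
sid/stretch proper, at which point the extra sources.list entry can be
dropped.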
Re: [ClusterLabs] The cluster stack in Debian
On 29/01/16 12:01 PM, Richard B Winters wrote:
> Hello,
>
> I thought it might be nice (with a good recommendation from the
> ClusterLabs' IRC community), to send an email to the users ML which
> describes the current state of the cluster stack as it exists in
> Debian.
>
> [...]
>
> I'll do my best to post updates regularly, if for no other reason;
> to keep interested parties informed of our progress!

There is also the clusterlabs developers list
(http://clusterlabs.org/mailman/listinfo/developers), if you want to CC
development progress reports there.

As I said in IRC, I am looking forward to being able to add Debian to
the list of well supported distros. Please keep up the good work!

> References:
>
> [1] https://wiki.debian.org/Debian-HA/
> [2] https://anonscm.debian.org/cgit/debian-ha/
> [3] https://debian-ha.alioth.debian.org/
> [4] https://ppa.mmogp.com/
>
> Best regards,
>
> Richard B Winters (Rik/devrikx)

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
Re: [ClusterLabs] Cluster resources migration from CMAN to Pacemaker
On 27/01/16 19:41 +0100, Jan Pokorný wrote:
> On 27/01/16 11:04 -0600, Ken Gaillot wrote:
>> On 01/27/2016 02:34 AM, jaspal singla wrote:
>>> 1) In CMAN, there was meta attribute - autostart=0 (This parameter
>>> disables the start of all services when RGManager starts). Is there
>>> any way for such behavior in Pacemaker?
>
> Please be more careful about the descriptions; autostart=0 specified
> at the given resource group ("service" or "vm" tag) means just not to
> start anything contained in this very one automatically (also upon
> new resources being defined, IIUIC), definitely not "all services".
>
> [...]
>
>> I don't think there's any exact replacement for autostart in
>> pacemaker. Probably the closest is to set target-role=Stopped before
>> stopping the cluster, and set target-role=Started when services are
>> desired to be started.

Beside is-managed=false (as currently used in clufter), I also looked
at downright disabling the "start" action, but this turned out to be a
naive approach caused by unclear documentation. Pushing for a bit more
clarity (hopefully):
https://github.com/ClusterLabs/pacemaker/pull/905

>>> 2) Please put some alternatives to exclusive=0 and
>>> __independent_subtree? what we have in Pacemaker instead of these?

(exclusive property discussed in the other subthread; as a recap, no
extra effort is needed to achieve exclusive=0, while exclusive=1 is
currently a show stopper in clufter, as neither approach is versatile
enough)

> For __independent_subtree, each component must be a separate pacemaker
> resource, and the constraints between them would depend on exactly
> what you were trying to accomplish. The key concepts here are ordering
> constraints, colocation constraints, kind=Mandatory/Optional (for
> ordering constraints), and ordered sets.

Current approach in clufter as of the next branch:

- __independent_subtree=1 -> do nothing special (hardly can be improved?)
- __independent_subtree=2 -> for that very resource, set operations as
  follows:
    monitor (interval=60s) on-fail=ignore
    stop interval=0 on-fail=stop

Groups carrying such resources are not unrolled into primitives plus
constraints, as the above might suggest (also the default kind=Mandatory
for the underlying order constraints should fit well). Please holler if
this is not sound.

So when put together with some other changes/fixes, the current
suggested/informative sequence of pcs commands goes like this:

pcs cluster auth ha1-105.test.com
pcs cluster setup --start --name HA1-105_CLUSTER ha1-105.test.com \
  --consensus 12000 --token 1 --join 60
sleep 60
pcs cluster cib tmp-cib.xml --config
pcs -f tmp-cib.xml property set stonith-enabled=false
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-FSCheck \
  lsb:../../..//data/Product/HA/bin/FsCheckAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-NTW_IF \
  lsb:../../..//data/Product/HA/bin/NtwIFAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_RSYNC \
  lsb:../../..//data/Product/HA/bin/RsyncAgent.py \
  op monitor interval=30s on-fail=ignore stop interval=0 on-fail=stop
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-REPL_IF \
  lsb:../../..//data/Product/HA/bin/ODG_IFAgent.py \
  op monitor interval=30s on-fail=ignore stop interval=0 on-fail=stop
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-ORACLE_REPLICATOR \
  lsb:../../..//data/Product/HA/bin/ODG_ReplicatorAgent.py \
  op monitor interval=30s on-fail=ignore stop interval=0 on-fail=stop
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_SID \
  lsb:../../..//data/Product/HA/bin/OracleAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_SRV \
  lsb:../../..//data/Product/HA/bin/CtmAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_APACHE \
  lsb:../../..//data/Product/HA/bin/ApacheAgent.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-CTM_HEARTBEAT \
  lsb:../../..//data/Product/HA/bin/HeartBeat.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource create RESOURCE-script-FLASHBACK \
  lsb:../../..//data/Product/HA/bin/FlashBackMonitor.py \
  op monitor interval=30s
pcs -f tmp-cib.xml \
  resource group add SERVICE-ctm_service-GROUP RESOURCE-script-FSCheck \
  RESOURCE-script-NTW_IF RESOURCE-script-CTM_RSYNC \
  RESOURCE-script-REPL_IF RESOURCE-script-ORACLE_REPLICATOR \
  RESOURCE-script-CTM_SID RESOURCE-script-CTM_SRV \
  RESOURCE-script-CTM_APACHE
pcs -f tmp-cib.xml resource \
  meta SERVICE-ctm_service-GROUP is-managed=false
pcs -f tmp-cib.xml \
  resource group add SERVICE-ctm_heartbeat-GROUP \
  RESOURCE-script-CTM_HEARTBEAT
pcs -f tmp-cib.xml resource \
  meta SERVICE-ctm_heartbeat-GROUP migration-threshold=3 \
  failure-timeout=900
pcs -f tmp-cib.xml \
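
As a sketch of the target-role alternative to autostart=0 that Ken
mentioned above (using the SERVICE-ctm_service-GROUP group from the
sequence as an illustrative target; this is not part of the generated
clufter output):

```shell
# Rough analogue of RGManager's autostart=0: keep the group from
# starting when the cluster comes up, by setting the target-role
# meta attribute while still building the offline CIB.
pcs -f tmp-cib.xml resource meta SERVICE-ctm_service-GROUP target-role=Stopped

# Later, on the live cluster, when the services should actually run:
pcs resource meta SERVICE-ctm_service-GROUP target-role=Started
```

Unlike is-managed=false, target-role=Stopped also actively stops the
resources if they were running, so the two are not interchangeable in
every situation.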