Re: [Pacemaker] colocate three resources
I think you can use a single colocation with a set of resources. crmsh allows you to create such a colocation with:

    crm colocation vm_with_disks inf: vm_srv ( ms_disk_R:Master ms_disk_S:Master )

This forces the cluster to place the master resources on the same host, starting them without any specific ordering among themselves, and then to start the VM along with them.

On 9 Nov 2014 11:31, Matthias Teege matthias-gm...@mteege.de wrote:

> Hello,
>
> on a cluster I have to place three resources on the same node:
>
>     ms ms_disk_R p_disk_R
>     ms ms_disk_S p_disk_S
>     primitive vm_srv ocf:heartbeat:VirtualDomain
>
> The colocation constraints look like this:
>
>     colocation vm_with_disk_R inf: vm_srv ms_disk_R:Master
>     colocation vm_with_disk_S inf: vm_srv ms_disk_S:Master
>
> Do I have to add another colocation constraint to define a colocation
> between disk_R and disk_S? I'm not sure, because the documentation says:
>
>     with-rsc: The colocation target. The cluster will decide where to
>     put this resource first and then decide where to put the resource
>     in the rsc field.
>
> In my case the colocation targets are ms_disk_R and ms_disk_S. If
> pacemaker decides to put disk_R on node A and disk_S on node B,
> vm_srv would not start.
>
> I use order constraints to start the disks before the VM resource:
>
>     order disk_R_before_vm inf: ms_disk_R:promote vm_srv:start
>     order disk_S_before_vm inf: ms_disk_S:promote vm_srv:start
>
> Thanks
> Matthias

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
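Combined with the order constraints from the original post, the set-based approach would give a constraint section along these lines (a sketch, reusing the resource names from the thread):

```
# One set-based colocation: both masters are placed together, with no
# internal ordering, and vm_srv follows them.
colocation vm_with_disks inf: vm_srv ( ms_disk_R:Master ms_disk_S:Master )

# The two order constraints from the original post still apply, so both
# disks are promoted before the VM starts.
order disk_R_before_vm inf: ms_disk_R:promote vm_srv:start
order disk_S_before_vm inf: ms_disk_S:promote vm_srv:start
```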
Re: [Pacemaker] batch-limit with many resources
On 8 Nov 2014, at 10:22 pm, Matthias Teege matthias-gm...@mteege.de wrote:

> Hello,
>
> I have a cluster with 300 resources. A lot of them use the same
> monitoring intervals. Is it necessary to increase the batch-limit to
> allow pacemaker to run all monitoring scripts in parallel?

Short version... it shouldn't be.

Which pacemaker version are you on, btw? I would recommend 1.1.12 for something that size.
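For anyone who does want to check or adjust the limit anyway, batch-limit is an ordinary cluster property and can be viewed and set through crmsh; this is a sketch, and the value 100 is purely illustrative, not a recommendation from this thread:

```
# Dump the full configuration; cluster-wide properties such as
# batch-limit appear in the "property" section (if set).
crm configure show

# Raise the maximum number of actions the cluster runs in parallel
# (illustrative value only).
crm configure property batch-limit=100
```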
Re: [Pacemaker] colocate three resources
On 9 Nov 2014, at 9:28 pm, Matthias Teege matthias-gm...@mteege.de wrote:

> Hello,
>
> on a cluster I have to place three resources on the same node:
>
>     ms ms_disk_R p_disk_R
>     ms ms_disk_S p_disk_S
>     primitive vm_srv ocf:heartbeat:VirtualDomain
>
> The colocation constraints look like this:
>
>     colocation vm_with_disk_R inf: vm_srv ms_disk_R:Master
>     colocation vm_with_disk_S inf: vm_srv ms_disk_S:Master
>
> Do I have to add another colocation constraint to define a colocation
> between disk_R and disk_S? I'm not sure, because the documentation says:
>
>     with-rsc: The colocation target. The cluster will decide where to
>     put this resource first and then decide where to put the resource
>     in the rsc field.
>
> In my case the colocation targets are ms_disk_R and ms_disk_S. If
> pacemaker decides to put disk_R on node A and disk_S on node B,
> vm_srv would not start.

Correct, and this is why you need the third constraint: as smart as pacemaker is, it's nowhere near as good as a human brain. While it is obvious to us that ms_disk_R and ms_disk_S need to go on the same node, pacemaker needs the extra hint.

Suggestion, do this:

    colocation disk_S_with_disk_R inf: ms_disk_S:Master ms_disk_R:Master
    colocation vm_with_disk_S inf: vm_srv ms_disk_S:Master

For ordering you want as much parallelism as possible; for colocation, chains work best.

> I use order constraints to start the disks before the VM resource:
>
>     order disk_R_before_vm inf: ms_disk_R:promote vm_srv:start
>     order disk_S_before_vm inf: ms_disk_S:promote vm_srv:start
>
> Thanks
> Matthias
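Putting the suggested chain together with the original order constraints, the complete constraint section would look something like this (a sketch using the names from the thread):

```
# Colocation chain: disk_S's master follows disk_R's master, and the VM
# follows disk_S's master, so all three end up on the same node.
colocation disk_S_with_disk_R inf: ms_disk_S:Master ms_disk_R:Master
colocation vm_with_disk_S inf: vm_srv ms_disk_S:Master

# Ordering stays parallel: each disk only needs to be promoted before
# the VM starts.
order disk_R_before_vm inf: ms_disk_R:promote vm_srv:start
order disk_S_before_vm inf: ms_disk_S:promote vm_srv:start
```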
Re: [Pacemaker] How to avoid CRM sending stop when ha.cf gets 2nd node configured
On 8 Nov 2014, at 11:58 am, aridh bose ari...@yahoo.com wrote:

> Hi,
>
> While using heartbeat and pacemaker, is it possible to bring up the
> first node, which can come up as Master, followed by the second node,
> which should come up as Slave, without causing any issues to the first
> node? Currently, I see a couple of problems in achieving this:
>
> 1. Assuming I am not using mcast communication, heartbeat is mandating
> that I configure the second node's info either in ha.cf or in the
> /etc/hosts file with an associated IP address. Why can't it come up by
> itself as Master to start with?

Because it's not using mcast and therefore doesn't know how to talk to other nodes in the future.

> 2. If I update ha.cf with the 2nd node info and use 'heartbeat -r',
> CRM first sends stop on the Master before sending start.
>
> Appreciate any help or pointers.
>
> Thanks,
> Aridbh.
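For context, this is roughly what the relevant part of ha.cf looks like with unicast communication; the interface name, address, and node names here are hypothetical, not taken from the thread:

```
# With ucast (unicast) communication, each peer's address must be
# listed explicitly, which is why heartbeat insists on knowing the
# second node up front.
ucast eth0 192.168.1.2

# Both cluster members must be declared by name (resolvable via
# /etc/hosts or DNS).
node node1
node node2
```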