Re: [ClusterLabs] Failover IP with Monitoring but not controlling the colocated services.

2016-09-05 Thread Stefan Schörghofer
On Mon, 5 Sep 2016 15:04:34 +0200
Klaus Wenninger  wrote:

> On 09/05/2016 01:38 PM, Stefan Schörghofer wrote:
> > Hi List,
> >
> > I am currently trying to set up the following situation in my lab:
> >
> > |------------- Cluster IP --------------|
> > | HAProxy instances | HAProxy instances |
> > | Node 1            | Node 2            |
> >
> >
> >
> > Now I've successfully added the Cluster IP resource to Pacemaker and
> > tested the failover, which worked perfectly.
> >
> > After that I wanted to ensure that all HAProxy instances are
> > running on the node I want to fail over to, so I wrote an
> > OCF-compatible script that supports all necessary options and added
> > it as a resource. The script works and monitors the HAProxy
> > instances just fine, and it restarts them if some go down, but it
> > also stops the HAProxy instances on the standby node.
> >
> > What I want is multiple nodes where all HAProxy instances
> > are permanently running, but if something happens on the primary
> > node the Cluster IP should fail over to the other node (only if the
> > HAProxy instances are running fine there).
> >
> > Is this possible with Pacemaker/Corosync? I've read a lot in the
> > documentation and found something about resource clones and also a
> > resource type that uses Nagios plugins. Maybe there is a way
> > using these?
> >
> > I tried to set up the clone resource, and the HAProxy instances
> > were running on all nodes.
> > I created a colocation rule for the clone resource and the Cluster
> > IP, and after a manual migration all services were stopped.  
> 
> Sounds as if the colocation could have the wrong order.
> Basically the IP primitive colocated with the HAProxy clones
> should be fine.
> Maybe you could paste the config...

That was the push in the right direction. Thanks for that.
I've checked the order, and indeed it was wrong during my tests.

The config now looks like:

node 167916820: sbg-vm-lb-tcp01
node 167916821: sbg-vm-lb-tcp02
primitive haproxy-rsc ocf:custom:haproxy \
meta migration-threshold=3 \
op monitor interval=60 timeout=30 on-fail=restart
primitive sbg-vm-lb-tcpC1 IPaddr2 \
params ip=10.2.53.22 nic=eth0 cidr_netmask=24 \
meta migration-threshold=2 target-role=Started \
op monitor interval=20 timeout=60 on-fail=restart
clone cl_haproxy-rsc haproxy-rsc \
params globally-unique=false
location cli-ban-sbg-vm-lb-tcpC1-on-sbg-vm-lb-tcp02 sbg-vm-lb-tcpC1 \
	role=Started -inf: sbg-vm-lb-tcp02
colocation loc-sbg-vm-lb-tcpC1 inf: sbg-vm-lb-tcpC1 cl_haproxy-rsc
order ord-sbg-vm-lb-tcpC1 inf: cl_haproxy-rsc sbg-vm-lb-tcpC1
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.14-70404b0 \
cluster-infrastructure=corosync \
cluster-name=debian \
stonith-enabled=no \
no-quorum-policy=ignore \
default-resource-stickiness=100 \
last-lrm-refresh=1473069316
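[Editor's note: for readers following along, the two constraints above correspond to these crm shell commands (a sketch using the same resource names as the config; the exact invocation may differ by crmsh version). In a colocation, the dependent resource is listed first, so the IP follows the clone rather than dragging clone instances around:]

```shell
# Colocation: the dependent resource (the IP) comes first, the clone second,
# so the IP is placed only where a running haproxy clone instance exists.
crm configure colocation loc-sbg-vm-lb-tcpC1 inf: sbg-vm-lb-tcpC1 cl_haproxy-rsc

# Order: start (and monitor) the haproxy clone before starting the IP.
crm configure order ord-sbg-vm-lb-tcpC1 inf: cl_haproxy-rsc sbg-vm-lb-tcpC1
```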


And everything works as expected.
Thank you!

regards,
Stefan


> 
> Regards,
> Klaus
> 
> >
> > Can I maybe work with multiple HAProxy resources (one per node) and
> > set up the colocation rule as "ClusterIP colocated with
> > (haproxy_node1 or haproxy_node2)"? Does something like this work?
> >
> >
> > Thanks for your input.
> > Best regards,
> > Stefan Schörghofer
> >
> > _______________________________________________
> > Users mailing list: Users@clusterlabs.org
> > http://clusterlabs.org/mailman/listinfo/users
> >
> > Project Home: http://www.clusterlabs.org
> > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
> 
> 




Re: [ClusterLabs] Failover IP with Monitoring but not controlling the colocated services.

2016-09-05 Thread Klaus Wenninger
On 09/05/2016 01:38 PM, Stefan Schörghofer wrote:
> Hi List,
>
> I am currently trying to set up the following situation in my lab:
>
> |------------- Cluster IP --------------|
> | HAProxy instances | HAProxy instances |
> | Node 1            | Node 2            |
>
>
>
> Now I've successfully added the Cluster IP resource to Pacemaker and
> tested the failover, which worked perfectly.
>
> After that I wanted to ensure that all HAProxy instances are running on
> the node I want to fail over to, so I wrote an OCF-compatible script
> that supports all necessary options and added it as a resource.
> The script works and monitors the HAProxy instances just fine, and it
> restarts them if some go down, but it also stops the HAProxy instances
> on the standby node.
>
> What I want is multiple nodes where all HAProxy instances are
> permanently running, but if something happens on the primary node the
> Cluster IP should fail over to the other node (only if the HAProxy
> instances are running fine there).
>
> Is this possible with Pacemaker/Corosync? I've read a lot in the
> documentation and found something about resource clones and also a
> resource type that uses Nagios plugins. Maybe there is a way using
> these?
>
> I tried to set up the clone resource, and the HAProxy instances were
> running on all nodes.
> I created a colocation rule for the clone resource and the Cluster IP,
> and after a manual migration all services were stopped.

Sounds as if the colocation could have the wrong order.
Basically the IP primitive colocated with the HAProxy clones
should be fine.
Maybe you could paste the config...
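[Editor's note: to illustrate the pitfall Klaus describes, with hypothetical resource names not taken from the thread. The order of arguments determines which resource depends on which:]

```shell
# Wrong way round: the clone becomes the dependent resource, so clone
# instances on nodes where the IP is not running get stopped.
crm configure colocation haproxy-with-ip inf: cl_haproxy ClusterIP

# Correct: the IP is the dependent resource and follows the clone;
# clone instances keep running on every node.
crm configure colocation ip-with-haproxy inf: ClusterIP cl_haproxy
```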

Regards,
Klaus

>
> Can I maybe work with multiple HAProxy resources (one per node) and
> set up the colocation rule as "ClusterIP colocated with (haproxy_node1
> or haproxy_node2)"? Does something like this work?
>
>
> Thanks for your input.
> Best regards,
> Stefan Schörghofer
>




[ClusterLabs] Failover IP with Monitoring but not controlling the colocated services.

2016-09-05 Thread Stefan Schörghofer
Hi List,

I am currently trying to set up the following situation in my lab:

|------------- Cluster IP --------------|
| HAProxy instances | HAProxy instances |
| Node 1            | Node 2            |



Now I've successfully added the Cluster IP resource to Pacemaker and
tested the failover, which worked perfectly.

After that I wanted to ensure that all HAProxy instances are running on
the node I want to fail over to, so I wrote an OCF-compatible script
that supports all necessary options and added it as a resource.
The script works and monitors the HAProxy instances just fine, and it
restarts them if some go down, but it also stops the HAProxy instances
on the standby node.
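[Editor's note: the poster's actual script is not shown; the monitor action of such an agent might look roughly like this sketch. The pid-file path and function name are assumptions:]

```shell
#!/bin/sh
# Sketch of the monitor action for an OCF-style haproxy agent.
# PIDFILE is an assumed location, not necessarily the poster's setup.
PIDFILE="${PIDFILE:-/var/run/haproxy.pid}"

haproxy_monitor() {
    # OCF convention: return 0 (OCF_SUCCESS) if the process is alive,
    # 7 (OCF_NOT_RUNNING) if it is cleanly stopped.
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        return 0
    fi
    return 7
}
```

Pacemaker invokes the agent with an action argument (start, stop, monitor, meta-data); the monitor exit code is what tells the cluster whether to restart the service or fail the IP over.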

What I want is multiple nodes where all HAProxy instances are
permanently running, but if something happens on the primary node the
Cluster IP should fail over to the other node (only if the HAProxy
instances are running fine there).

Is this possible with Pacemaker/Corosync? I've read a lot in the
documentation and found something about resource clones and also a
resource type that uses Nagios plugins. Maybe there is a way using
these?

I tried to set up the clone resource, and the HAProxy instances were
running on all nodes.
I created a colocation rule for the clone resource and the Cluster IP,
and after a manual migration all services were stopped.

Can I maybe work with multiple HAProxy resources (one per node) and
set up the colocation rule as "ClusterIP colocated with (haproxy_node1
or haproxy_node2)"? Does something like this work?


Thanks for your input.
Best regards,
Stefan Schörghofer
