Re: [ClusterLabs] Failover question

2017-03-15 Thread Ken Gaillot
Sure, just add a colocation constraint for virtual_ip with proxy.
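
Something like this (a sketch, untested here; "proxy-clone" assumes the clone
id pcs generated from your create command, so adjust if yours differs):

pcs constraint colocation add virtual_ip with proxy-clone INFINITY
pcs constraint order proxy-clone then virtual_ip

The colocation makes the cluster place virtual_ip only on a node where the
proxy clone is running, so an unrecoverable Apache failure moves the IP; the
optional order constraint just starts Apache before the IP is brought up.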

On 03/15/2017 05:06 AM, Frank Fiene wrote:
> Hi,
> 
> Another beginner question:
> 
> I have configured a virtual IP resource on two hosts and an Apache resource
> cloned on both machines like this:
> 
> pcs resource create virtual_ip ocf:heartbeat:IPaddr2 params ip= 
> op monitor interval=10s
> pcs resource create proxy lsb:apache2 
> statusurl="http://127.0.0.1/server-status" op monitor interval=15s clone
> 
> 
> Will the IP fail over if the Apache server on the master has a problem?
> Apache is just acting as a proxy, so I thought it would be faster to have
> it already running on both machines.
> 
> 
> Kind Regards! Frank

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Master/Slave DRBD not active on asymmetric cluster

2017-03-15 Thread Bruyninckx Kristof
Hello Klaus,

Yes, indeed, the colocation was the culprit.

I've removed the constraint and replaced it with a colocation with the master
role:

# pcs constraint colocation add master drbd-demo-resource-clone with ClusterIP INFINITY

And now it works like a charm; master and slave get started on the nodes that
have permission.

Master/Slave Set: drbd-demo-resource-clone [drbd-demo-resource]
 Masters: [ monnod02 ]
 Slaves: [ monnod01 ]

# pcs constraint

Colocation Constraints:
  db-data with drbd-demo-resource-clone (score:INFINITY) (with-rsc-role:Master)
  pgsql_service with db-data (score:INFINITY)
  drbd-demo-resource-clone with ClusterIP (score:INFINITY) (rsc-role:Master) 
(with-rsc-role:Started)


Thanks for your answer !

Cheers,

Kristof Bruyninckx
System Engineer

-----Original Message-----
From: Klaus Wenninger [mailto:kwenn...@redhat.com] 
Sent: Wednesday, 15 March 2017 9:42
To: Cluster Labs - All topics related to open-source clustering welcomed 

Subject: Re: [ClusterLabs] Master/Slave DRBD not active on asymmetric cluster

Hi!

I guess the colocation with ClusterIP is the culprit.
It leads to the clone not being started where ClusterIP is not running.
I guess what you'd rather want is a colocation with just the master role of
the clone.

Regards,
Klaus

On 03/14/2017 03:44 PM, Bruyninckx Kristof wrote:
>
> Hello,
>
>  
>
> Currently I've tried to set up a 3-node asymmetric cluster, with the
> third node only being used as a tie-breaker.
>
>  
>
> monnod01 & monnod02:
>
> * CentOS 7.3
> * pacemaker-1.1.15-11.el7_3.2.x86_64
> * corosync-2.4.0-4.el7.x86_64
> * drbd84-utils-8.9.8-1.el7.elrepo.x86_64
> * PostgreSQL 9.4
>
> monquor:
>
> * CentOS 7.3
> * pacemaker-1.1.15-11.el7_3.2.x86_64
> * corosync-2.4.0-4.el7.x86_64
> * no DRBD installed
>
>  
>
> Now I've noticed that the master/slave DRBD resource only activates
> the master side and not the slave side, which would let the DRBD
> volumes actually sync with each other. I've set up a 2-node cluster,
> and there it works without any issue.
>
> But when I try to do the same with a third node, and
>
>  
>
> pcs property set symmetric-cluster=false
>
>  
>
> For some reason it keeps adding the third node as a stopped resource in
> the master/slave set, and it doesn't mention a slave resource.
>
>  
>
> pcs status
>
> Online: [ monnod01 monnod02 monquor ]
>
> Full list of resources:
>
> ClusterIP  (ocf::heartbeat:IPaddr2):   Started monnod01
> Master/Slave Set: drbd-demo-resource-clone [drbd-demo-resource]
>  Masters: [ monnod01 ]
>  Stopped: [ monquor ]
>
>  
>
> The resource was created with the following:
>
>  
>
> pcs -f drbd_cfg resource create drbd-demo-resource ocf:linbit:drbd
> drbd_resource=drbd-demo op monitor interval=10s
>
> pcs -f drbd_cfg resource master drbd-demo-resource-clone
> drbd-demo-resource master-max=1 master-node-max=1 clone-max=2
> clone-node-max=1 notify=true
>
>  
>
> Even though I've used location constraints on the master/slave
> resource allowing it to run only on the two nodes.
>
>  
>
> [root@monnod01 ~]# pcs constraint
>   Resource: drbd-demo-resource-clone
>     Enabled on: monnod01 (score:INFINITY)
>     Enabled on: monnod02 (score:INFINITY)
>
>  
>
> The actual failover itself works: it activates the DRBD disk,
> mounts it, and starts up the db service, which accesses the files on
> this DRBD disk.
>
> But since the slave DRBD is never started, it never actually
> performs the DRBD sync between the disks.
>
> What am I missing to actually make the master/slave resource ignore
> the third node and start up both the master and the slave instance?
>
> Does DRBD need to be installed on the third node as well?
>
>  
>
> I've attached the complete output of the commands to this mail.
>
>  
>
> Met vriendelijke groeten / Meilleures salutations / Best regards
>
> *Kristof Bruyninckx*
> *System Engineer*
>
>  
>
>  
>
>
>

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Failover question

2017-03-15 Thread Frank Fiene
Hi,

Another beginner question:

I have configured a virtual IP resource on two hosts and an Apache resource
cloned on both machines like this:

pcs resource create virtual_ip ocf:heartbeat:IPaddr2 params ip= op 
monitor interval=10s
pcs resource create proxy lsb:apache2 
statusurl="http://127.0.0.1/server-status" op monitor interval=15s clone


Will the IP fail over if the Apache server on the master has a problem?
Apache is just acting as a proxy, so I thought it would be faster to have
it already running on both machines.


Kind Regards! Frank
— 
Frank Fiene
IT-Security Manager VEKA Group

Fon: +49 2526 29-6200
Fax: +49 2526 29-16-6200
mailto: ffi...@veka.com
http://www.veka.com

PGP-ID: 62112A51
PGP-Fingerprint: 7E12 D61B 40F0 212D 5A55 765D 2A3B B29B 6211 2A51
Threema: VZK5NDWW

VEKA AG
Dieselstr. 8
48324 Sendenhorst
Deutschland/Germany

Vorstand/Executive Board: Andreas Hartleif (Vorsitzender/CEO),
Dr. Andreas W. Hillebrand, Bonifatius Eichwald, Elke Hartleif, Dr. Werner 
Schuler,
Vorsitzender des Aufsichtsrates/Chairman of Supervisory Board: Ulrich Weimer
HRB 8282 AG Münster/District Court of Münster


___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Sync Apache config files

2017-03-15 Thread Frank Fiene
Thank you all.

Done with csync2.
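
In case it helps other beginners: the setup boils down to a small
/etc/csync2/csync2.cfg along these lines (host names and include paths below
are placeholders, not our actual config):

group apache_conf {
        host node1 node2;
        key /etc/csync2/csync2.key;
        include /etc/httpd/conf;
        include /etc/httpd/conf.d;
}

Generate the key once with "csync2 -k /etc/csync2/csync2.key", copy the config
and key to both nodes, and then "csync2 -xv" (run from cron, or triggered by
lsyncd) pushes changes to the peer.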


Frank

> On 14.03.2017 at 01:33, Victor José Acosta Domínguez wrote:
> 
> You have many options to do that:
> - lsyncd+csync2/rsync
> - cron+rsync
> - glusterfs replicated fs
>
> lsyncd + csync2 is a good option if you need near-real-time replication.
>
> cron + rsync is a good option for scheduled replication.
>
> GlusterFS is good for real-time replication for 2+ nodes.
> 
> Regards 
> 
> 
> 
> On Mar 13, 2017 3:30 PM, "Dimitri Maziuk"  wrote:
> On 03/13/2017 12:38 PM, Frank Fiene wrote:
> > Hmm, Puppet is also a good idea. Thanks.
> >
> > I haven't tried it because we don't have that many Linux servers. Just a handful.
> 
> It depends on how much data there is and what the update model is. For
> larger datasets zfs incremental snapshots work best IME, although on our
> two-node active/passive pairs I haven't had any problems with DRBD
> either -- as long as it's not exported via nfs on centos
> 7/corosync/pacemaker.
> 
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
> 
> 
> ___
> Users mailing list: Users@clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org

Best regards!
pp. Frank Fiene
-- 
Frank Fiene
IT-Security Manager VEKA Group

Fon: +49 2526 29-6200
Fax: +49 2526 29-16-6200
mailto: ffi...@veka.com
http://www.veka.com

PGP-ID: 62112A51
PGP-Fingerprint: 7E12 D61B 40F0 212D 5A55 765D 2A3B B29B 6211 2A51
Threema: VZK5NDWW

VEKA AG
Dieselstr. 8
48324 Sendenhorst
Deutschland/Germany

Vorstand/Executive Board: Andreas Hartleif (Vorsitzender/CEO),
Dr. Andreas W. Hillebrand, Bonifatius Eichwald, Elke Hartleif, Dr. Werner 
Schuler,
Vorsitzender des Aufsichtsrates/Chairman of Supervisory Board: Ulrich Weimer
HRB 8282 AG Münster/District Court of Münster


___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Master/Slave DRBD not active on asymmetric cluster

2017-03-15 Thread Klaus Wenninger
Hi!

I guess the colocation with ClusterIP is the culprit.
It leads to the clone not being started where ClusterIP
is not running.
I guess what you'd rather want is a colocation with
just the master role of the clone.
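
Something like this should do it (an untested sketch; remove the existing
clone-wide constraint first):

pcs constraint colocation add master drbd-demo-resource-clone with ClusterIP INFINITY

That ties only the Master role to ClusterIP, leaving the Slave free to start
on the other permitted node.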

Regards,
Klaus

On 03/14/2017 03:44 PM, Bruyninckx Kristof wrote:
>
> Hello,
>
>  
>
> Currently I've tried to set up a 3-node asymmetric cluster, with the
> third node only being used as a tie-breaker.
>
>  
>
> monnod01 & monnod02:
>
> * CentOS 7.3
> * pacemaker-1.1.15-11.el7_3.2.x86_64
> * corosync-2.4.0-4.el7.x86_64
> * drbd84-utils-8.9.8-1.el7.elrepo.x86_64
> * PostgreSQL 9.4
>
> monquor:
>
> * CentOS 7.3
> * pacemaker-1.1.15-11.el7_3.2.x86_64
> * corosync-2.4.0-4.el7.x86_64
> * no DRBD installed
>
>  
>
> Now I've noticed that the master/slave DRBD resource only activates
> the master side and not the slave side, which would let the DRBD
> volumes actually sync with each other. I've set up a 2-node cluster,
> and there it works without any issue.
>
> But when I try to do the same with a third node, and
>
>  
>
> pcs property set symmetric-cluster=false
>
>  
>
> For some reason it keeps adding the third node as a stopped resource in
> the master/slave set, and it doesn't mention a slave resource.
>
>  
>
> pcs status
>
> Online: [ monnod01 monnod02 monquor ]
>
> Full list of resources:
>
> ClusterIP  (ocf::heartbeat:IPaddr2):   Started monnod01
> Master/Slave Set: drbd-demo-resource-clone [drbd-demo-resource]
>  Masters: [ monnod01 ]
>  Stopped: [ monquor ]
>
>  
>
> The resource was created with the following:
>
>  
>
> pcs -f drbd_cfg resource create drbd-demo-resource ocf:linbit:drbd
> drbd_resource=drbd-demo op monitor interval=10s
>
> pcs -f drbd_cfg resource master drbd-demo-resource-clone
> drbd-demo-resource master-max=1 master-node-max=1 clone-max=2
> clone-node-max=1 notify=true
>
>  
>
> Even though I've used location constraints on the master/slave
> resource allowing it to run only on the two nodes.
>
>  
>
> [root@monnod01 ~]# pcs constraint
>   Resource: drbd-demo-resource-clone
>     Enabled on: monnod01 (score:INFINITY)
>     Enabled on: monnod02 (score:INFINITY)
>
>  
>
> The actual failover itself works: it activates the DRBD disk,
> mounts it, and starts up the db service, which accesses the files on
> this DRBD disk.
>
> But since the slave DRBD is never started, it never actually
> performs the DRBD sync between the disks.
>
> What am I missing to actually make the master/slave resource ignore
> the third node and start up both the master and the slave instance?
>
> Does DRBD need to be installed on the third node as well?
>
>  
>
> I've attached the complete output of the commands to this mail.
>
>  
>
> Met vriendelijke groeten / Meilleures salutations / Best regards
>
> *Kristof Bruyninckx*
> *System Engineer*
>
>  
>
>  
>
>
>



___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] wireshark cannot recognize corosync packets

2017-03-15 Thread Jan Friesse

> Yesterday I found corosync took almost one hour to form a cluster (a
> failed node came back online).


This for sure shouldn't happen (at least with default timeout settings).



> So I captured some corosync packets, and opened the pcap file in Wireshark.
>
> But Wireshark only displayed raw UDP, no totem.
>
> Wireshark version is 2.2.5. I'm sure it supports corosync totem.
>
> corosync is 2.4.0.


Wireshark has a corosync dissector, but only for version 1.x; 2.x is not
supported yet.




> And if corosync takes too long to form a cluster, how to diagnose it?
>
> I read the logs, but could not figure it out.


The logs, especially when debug is enabled, usually have enough info. Can you
paste your config + logs?
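
If debug is not enabled yet, something along these lines in corosync.conf
should give enough detail (a sketch; the logfile path is just an example):

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    debug: on
    timestamp: on
}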


Regards,
  Honza



> Thanks.




___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Master/Slave DRBD not active on asymmetric cluster

2017-03-15 Thread Bruyninckx Kristof
Hello,

Currently I've tried to set up a 3-node asymmetric cluster, with the third
node only being used as a tie-breaker.

monnod01 & monnod02:

* CentOS 7.3
* pacemaker-1.1.15-11.el7_3.2.x86_64
* corosync-2.4.0-4.el7.x86_64
* drbd84-utils-8.9.8-1.el7.elrepo.x86_64
* PostgreSQL 9.4

monquor:

* CentOS 7.3
* pacemaker-1.1.15-11.el7_3.2.x86_64
* corosync-2.4.0-4.el7.x86_64
* no DRBD installed

Now I've noticed that the master/slave DRBD resource only activates the master
side and not the slave side, which would let the DRBD volumes actually sync
with each other. I've set up a 2-node cluster, and there it works without any
issue. But when I try to do the same with a third node, and

pcs property set symmetric-cluster=false

For some reason it keeps adding the third node as a stopped resource in the
master/slave set, and it doesn't mention a slave resource.

pcs status
Online: [ monnod01 monnod02 monquor ]

Full list of resources:

ClusterIP  (ocf::heartbeat:IPaddr2):   Started monnod01
Master/Slave Set: drbd-demo-resource-clone [drbd-demo-resource]
 Masters: [ monnod01 ]
 Stopped: [ monquor ]

The resource was created with the following:

pcs -f drbd_cfg resource create drbd-demo-resource ocf:linbit:drbd 
drbd_resource=drbd-demo op monitor interval=10s
pcs -f drbd_cfg resource master drbd-demo-resource-clone drbd-demo-resource 
master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

Even though I've used location constraints on the master/slave resource
allowing it to run only on the two nodes.

[root@monnod01 ~]# pcs constraint
  Resource: drbd-demo-resource-clone
Enabled on: monnod01 (score:INFINITY)
Enabled on: monnod02 (score:INFINITY)
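
(For reference, opt-in location constraints like the ones above can be created
with pcs commands along these lines; a sketch, not necessarily the exact
commands used here:)

pcs constraint location drbd-demo-resource-clone prefers monnod01=INFINITY
pcs constraint location drbd-demo-resource-clone prefers monnod02=INFINITY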

The actual failover itself works: it activates the DRBD disk, mounts it, and
starts up the db service, which accesses the files on this DRBD disk.
But since the slave DRBD is never started, it never actually performs the
DRBD sync between the disks.
What am I missing to actually make the master/slave resource ignore the third
node and start up both the master and the slave instance?
Does DRBD need to be installed on the third node as well?

I've attached the complete output of the commands to this mail.

Met vriendelijke groeten / Meilleures salutations / Best regards

Kristof Bruyninckx
System Engineer



[root@monnod01 cluster]# pcs status
Cluster name: ha-zenoss-cluster
Stack: corosync
Current DC: monquor (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Mar 14 15:40:32 2017
Last change: Tue Mar 14 15:03:46 2017 by root via crm_attribute on monnod02

3 nodes and 9 resources configured

Online: [ monnod01 monnod02 monquor ]

Full list of resources:

 ClusterIP  (ocf::heartbeat:IPaddr2):   Started monnod01
 Master/Slave Set: drbd-demo-resource-clone [drbd-demo-resource]
 Masters: [ monnod01 ]
 Stopped: [ monquor ]
 db-data(ocf::heartbeat:Filesystem):Started monnod01
 vcenter-fence  (stonith:fence_vmware_soap):Started monquor
 pgsql_service  (ocf::heartbeat:pgsql): Started monnod01
 vcenter-fence-monquor  (stonith:fence_vmware_soap):Started monnod02
 vcenter-fence-monnod1  (stonith:fence_vmware_soap):Started monnod02
 vcenter-fence-monnod2  (stonith:fence_vmware_soap):Started monnod01

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[root@monnod01 cluster]#



[root@monnod01 cluster]# pcs resource --full
 Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=212.113.69.90 cidr_netmask=26
  Operations: start interval=0s timeout=20s (ClusterIP-start-interval-0s)
  stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
  monitor interval=5s (ClusterIP-monitor-interval-5s)
 Master: drbd-demo-resource-clone
  Meta Attrs: master-node-max=1 clone-max=2 notify=true master-max=1 clone-node-max=1
  Resource: drbd-demo-resource (class=ocf provider=linbit type=drbd)
   Attributes: drbd_resource=drbd-demo
   Operations: start interval=0s timeout=240 (drbd-demo-resource-start-interval-0s)
               promote interval=0s timeout=90 (drbd-demo-resource-promote-interval-0s)
               demote interval=0s timeout=90 (drbd-demo-resource-demote-interval-0s)
               stop interval=0s timeout=100 (drbd-demo-resource-stop-interval-0s)
               monitor interval=10s (drbd-demo-resource-monitor-interval-10s)
 Resource: db-data (class=ocf provider=heartbeat type=Filesystem)
  Attributes: device=/dev/drbd0 directory=/var/lib/pgsql fstype=ext4 
options=noatime
  Operations: start interval=0s timeout=60 (db-data-start-interval-0s)
  stop interval=0s timeout=60 (db-data-stop-interval-0s)
  monitor interval=20 timeout=40 (db-data-monitor-interval-20)
 Resource: pgsql_service (class=ocf provider=heartbeat type=pgsql)
  Attributes: pgctl=/usr/pgsql-9.4/bin/pg_ctl psql=/usr/bin/ps