[Pacemaker] [SOLVED] Re: Multicast corosync packets and default route

2014-11-07 Thread Daniel Dehennin
Daniel Dehennin daniel.dehen...@baby-gnu.org writes:

 Daniel Dehennin daniel.dehen...@baby-gnu.org writes:

 Hello,


 [...]

 I only manage to get my VM to join as a corosync member like the others when
 the default route is on the same interface as my multicast traffic.

[...]


Using tcpdump I found the difference between single card VM and multicard VM.

When using multiple cards, I need to force the IGMP version since my
physical switches do not support IGMPv3.

It looks like the kernel uses IGMPv3 to register its local IP addresses
with the multicast group.
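
A capture along these lines shows the membership reports; the interface
name eth0 is only a placeholder for whichever interface carries the
cluster traffic:

# watch IGMP membership reports in verbose mode (eth0 is a placeholder)
tcpdump -i eth0 -n -v igmp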


Single card VM:

No.  Time  Source   Destination  Protocol Info
  2  0.000985  192.168.231.110  226.94.1.1   IGMPv2   Membership Report group 226.94.1.1

Frame 2: 46 bytes on wire (368 bits), 46 bytes captured (368 bits)
Ethernet II, Src: RealtekU_03:6d:2d (52:54:00:03:6d:2d), Dst: IPv4mcast_5e:01:01 (01:00:5e:5e:01:01)
Internet Protocol Version 4, Src: 192.168.231.110 (192.168.231.110), Dst: 226.94.1.1 (226.94.1.1)
Internet Group Management Protocol
[IGMP Version: 2]
Type: Membership Report (0x16)
Max Resp Time: 0.0 sec (0x00)
Header checksum: 0x06a0 [correct]
Multicast Address: 226.94.1.1 (226.94.1.1)

Multicard VM:

No.  Time  Source   Destination  Protocol Info
  2  0.004419  192.168.231.111  224.0.0.22   IGMPv3   Membership Report / Join group 226.94.1.1 for any sources

Frame 2: 54 bytes on wire (432 bits), 54 bytes captured (432 bits)
Ethernet II, Src: RealtekU_dc:b6:92 (52:54:00:dc:b6:92), Dst: IPv4mcast_16 (01:00:5e:00:00:16)
Internet Protocol Version 4, Src: 192.168.231.111 (192.168.231.111), Dst: 224.0.0.22 (224.0.0.22)
Internet Group Management Protocol
[IGMP Version: 3]
Type: Membership Report (0x22)
Header checksum: 0xf69e [correct]
Num Group Records: 1
Group Record : 226.94.1.1  Change To Exclude Mode
Record Type: Change To Exclude Mode (4)
Aux Data Len: 0
Num Src: 0
Multicast Address: 226.94.1.1 (226.94.1.1)

So I force the IGMP version for all interfaces with the following:

sysctl -w net.ipv4.conf.all.force_igmp_version=2
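
To make the setting persist across reboots, something like the following
should work (the file name under /etc/sysctl.d is an arbitrary choice):

# record the setting so it is applied again at boot
echo "net.ipv4.conf.all.force_igmp_version = 2" > /etc/sysctl.d/99-force-igmpv2.conf
sysctl -p /etc/sysctl.d/99-force-igmpv2.conf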

Now my dual card VM is part of the ring:

root@nebula3:~# corosync-quorumtool
Quorum information
--
Date: Fri Nov  7 16:32:34 2014
Quorum provider:  corosync_votequorum
Nodes:5
Node ID:  1084811080
Ring ID:  20624
Quorate:  Yes

Votequorum information
--
Expected votes:   5
Highest expected: 5
Total votes:  5
Quorum:   3
Flags:Quorate WaitForAll LastManStanding

Membership information
--
Nodeid  Votes Name
1084811078  1 nebula1.eole.lan
1084811079  1 nebula2.eole.lan
1084811080  1 nebula3.eole.lan (local)
108488  1 quorum.eole.lan
108489  1 one-frontend.eole.lan

-- 
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] Fencing dependency between bare metal host and its VM guests

2014-11-07 Thread Daniel Dehennin
Hello,

I finally managed to integrate my VM into corosync, and my dlm/clvm/GFS2
stack is now running on it.

Now I have one issue: when the bare metal host on which the VM is running
dies, the VM is lost and cannot be fenced.

Is there a way to make Pacemaker acknowledge the fencing of the VM running
on a host when the host itself is fenced?

Regards.

-- 
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF




[Pacemaker] How to avoid CRM sending stop when ha.cf gets 2nd node configured

2014-11-07 Thread aridh bose
Hi,

While using heartbeat and pacemaker, is it possible to bring up the first
node so that it comes up as Master, and then bring up the second node as
Slave, without causing any issues to the first node? Currently I see a
couple of problems in achieving this:

1. Assuming I am not using mcast communication, heartbeat requires me to
   configure the second node's information either in ha.cf or in the
   /etc/hosts file with the associated IP address. Why can't the first
   node come up by itself as Master to start with?
2. If I update ha.cf with the 2nd node's information and run
   'heartbeat -r', CRM first sends a stop to the Master before sending a
   start.

Appreciate any help or pointers.

Thanks,
Aridbh.
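
For reference, a minimal two-node ha.cf of the kind described above could
look roughly like this; the node names, interface and peer address are
placeholders, and autojoin is the directive that controls whether nodes
not listed in the file may join:

cat > /etc/ha.d/ha.cf <<'EOF'
# placeholders: adapt interface, peer address and node names
autojoin none
ucast eth0 192.168.122.12
node  node1
node  node2
crm   respawn
EOF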


Re: [Pacemaker] Fencing dependency between bare metal host and its VM guests

2014-11-07 Thread Andrei Borzenkov
On Fri, 07 Nov 2014 17:46:40 +0100
Daniel Dehennin daniel.dehen...@baby-gnu.org writes:

 Hello,
 
 I finally managed to integrate my VM into corosync, and my dlm/clvm/GFS2
 stack is now running on it.

 Now I have one issue: when the bare metal host on which the VM is running
 dies, the VM is lost and cannot be fenced.

 Is there a way to make Pacemaker acknowledge the fencing of the VM running
 on a host when the host itself is fenced?
 

Yes, you can define multiple stonith agents and set a priority between them.

http://clusterlabs.org/wiki/Fencing_topology
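
For illustration, one way to express such a topology with crmsh might be
the following; st-vm, st-host and vm-guest are hypothetical names (the
guest's own stonith device, the device that fences its bare metal host,
and the guest node). Level 1 is tried first, the next level only if it
fails:

# try the guest's own stonith device first, then fall back to fencing the host
crm configure fencing_topology \
    vm-guest: st-vm st-host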

 Regards.
 


