Re: [Pacemaker] Fencing dependency between bare metal host and its VM guests

2014-11-10 Thread Daniel Dehennin
Andrei Borzenkov arvidj...@gmail.com writes:


[...]

 Now I have one issue: when the bare metal host on which the VM is
 running dies, the VM is lost and cannot be fenced.
 
 Is there a way to make Pacemaker acknowledge the fencing of a VM
 running on a host when the host itself is fenced?
 

 Yes, you can define multiple stonith agents and the priority between them.

 http://clusterlabs.org/wiki/Fencing_topology

Hello,

If I understand correctly, fencing topology is a way to define several
fencing devices for a node and try them consecutively until one succeeds.
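
For reference, such levels can be written in the crm shell roughly as follows
(a minimal sketch only, reusing the stonith resource names from the
configuration below; Stonith-nebula1-PDU is a hypothetical fallback device
that is not part of this setup):

fencing_topology \
    nebula1: Stonith-nebula1-IPMILAN Stonith-nebula1-PDU \
    one-frontend: Stonith-ONE-Frontend

For each node the listed devices form successive fencing levels: the next
level is only tried if the previous one fails.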

In my configuration, I group the VM stonith agents with the
corresponding VM resource, to make them move together[1].

Here is my use case:

1. Resource ONE-Frontend-Group runs on nebula1
2. nebula1 is fenced
3. node one-frontend cannot be fenced

Is there a way to say that the life of node one-frontend is tied to
the state of resource ONE-Frontend?

In that case, when node nebula1 is fenced, Pacemaker would know that
resource ONE-Frontend is no longer running, so node one-frontend would be
marked OFFLINE rather than UNCLEAN.

Regards.

Footnotes: 
[1]  http://oss.clusterlabs.org/pipermail/pacemaker/2014-October/022671.html

-- 
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF

node $id=1084811078 nebula1
node $id=1084811079 nebula2
node $id=1084811080 nebula3
node $id=108488 quorum \
attributes standby=on
node $id=108489 one-frontend
primitive ONE-Datastores ocf:heartbeat:Filesystem \
params device=/dev/one-fs/datastores directory=/var/lib/one/datastores fstype=gfs2 \
op start interval=0 timeout=90 \
op stop interval=0 timeout=100 \
op monitor interval=20 timeout=40
primitive ONE-Frontend ocf:heartbeat:VirtualDomain \
params config=/var/lib/one/datastores/one/one.xml \
op start interval=0 timeout=90 \
op stop interval=0 timeout=100 \
utilization cpu=1 hv_memory=1024
primitive ONE-vg ocf:heartbeat:LVM \
params volgrpname=one-fs \
op start interval=0 timeout=30 \
op stop interval=0 timeout=30 \
op monitor interval=60 timeout=30
primitive Quorum-Node ocf:heartbeat:VirtualDomain \
params config=/var/lib/libvirt/qemu/pcmk/quorum.xml \
op start interval=0 timeout=90 \
op stop interval=0 timeout=100 \
utilization cpu=1 hv_memory=1024
primitive Stonith-ONE-Frontend stonith:external/libvirt \
params hostlist=one-frontend hypervisor_uri=qemu:///system pcmk_host_list=one-frontend pcmk_host_check=static-list \
op monitor interval=30m
primitive Stonith-Quorum-Node stonith:external/libvirt \
params hostlist=quorum hypervisor_uri=qemu:///system pcmk_host_list=quorum pcmk_host_check=static-list \
op monitor interval=30m
primitive Stonith-nebula1-IPMILAN stonith:external/ipmi \
params hostname=nebula1-ipmi ipaddr=XXX.XXX.XXX.XXX interface=lanplus userid=USER passwd=PASSWORD1 passwd_method=env priv=operator pcmk_host_list=nebula1 pcmk_host_check=static-list \
op monitor interval=30m \
meta target-role=Started
primitive Stonith-nebula2-IPMILAN stonith:external/ipmi \
params hostname=nebula2-ipmi ipaddr=YYY.YYY.YYY.YYY interface=lanplus userid=USER passwd=PASSWORD2 passwd_method=env priv=operator pcmk_host_list=nebula2 pcmk_host_check=static-list \
op monitor interval=30m \
meta target-role=Started
primitive Stonith-nebula3-IPMILAN stonith:external/ipmi \
params hostname=nebula3-ipmi ipaddr=ZZZ.ZZZ.ZZZ.ZZZ interface=lanplus userid=USER passwd=PASSWORD3 passwd_method=env priv=operator pcmk_host_list=nebula3 pcmk_host_check=static-list \
op monitor interval=30m \
meta target-role=Started
primitive clvm ocf:lvm2:clvmd \
op start interval=0 timeout=90 \
op stop interval=0 timeout=100 \
op monitor interval=60 timeout=90
primitive dlm ocf:pacemaker:controld \
op start interval=0 timeout=90 \
op stop interval=0 timeout=100 \
op monitor interval=60 timeout=60
group ONE-Frontend-Group Stonith-ONE-Frontend ONE-Frontend \
meta target-role=Started
group ONE-Storage dlm clvm ONE-vg ONE-Datastores
group Quorum-Node-Group Stonith-Quorum-Node Quorum-Node \
meta target-role=Started
clone ONE-Storage-Clone ONE-Storage \
meta interleave=true target-role=Started
location Nebula1-does-not-fence-itslef Stonith-nebula1-IPMILAN \
rule $id=Nebula1-does-not-fence-itslef-rule 50: #uname eq nebula2 \
rule $id=Nebula1-does-not-fence-itslef-rule-0 40: #uname eq nebula3
location Nebula2-does-not-fence-itslef Stonith-nebula2-IPMILAN \
rule $id=Nebula2-does-not-fence-itslef-rule 50: #uname eq nebula3 \
rule $id=Nebula2-does-not-fence-itslef-rule-0 40: #uname eq nebula1
location Nebula3-does-not-fence-itslef Stonith-nebula3-IPMILAN \
rule 

Re: [Pacemaker] Fencing dependency between bare metal host and its VM guests

2014-11-10 Thread Tomasz Kontusz
I think the suggestion was to put shooting the host into the fencing path of a
VM. That way, if you can't get the host to fence the VM (because the host is
already dead), you just check whether the host was fenced.
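
In crm shell terms that could look roughly like this (a sketch only; it
assumes the VM one-frontend stays on nebula1, whereas in the configuration
above it can move between hosts, and the IPMI device would also need
one-frontend added to its pcmk_host_list before stonithd will use it for that
target):

fencing_topology \
    one-frontend: Stonith-ONE-Frontend Stonith-nebula1-IPMILAN

Level 1 asks the libvirt agent to kill the guest; if that is impossible
because the host is already dead, level 2 falls back to fencing the host
itself, and a successful host fence lets the cluster treat the VM node as
fenced too.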

Daniel Dehennin daniel.dehen...@baby-gnu.org wrote:
Andrei Borzenkov arvidj...@gmail.com writes:


[...]

 Now I have one issue: when the bare metal host on which the VM is
 running dies, the VM is lost and cannot be fenced.
 
 Is there a way to make Pacemaker acknowledge the fencing of a VM
 running on a host when the host itself is fenced?
 

 Yes, you can define multiple stonith agents and the priority between them.

 http://clusterlabs.org/wiki/Fencing_topology

Hello,

If I understand correctly, fencing topology is a way to define several
fencing devices for a node and try them consecutively until one succeeds.

In my configuration, I group the VM stonith agents with the
corresponding VM resource, to make them move together[1].

Here is my use case:

1. Resource ONE-Frontend-Group runs on nebula1
2. nebula1 is fenced
3. node one-frontend cannot be fenced

Is there a way to say that the life of node one-frontend is tied to
the state of resource ONE-Frontend?

In that case, when node nebula1 is fenced, Pacemaker would know that
resource ONE-Frontend is no longer running, so node one-frontend would be
marked OFFLINE rather than UNCLEAN.

Regards.

Footnotes: 
[1] 
http://oss.clusterlabs.org/pipermail/pacemaker/2014-October/022671.html

-- 
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF





-- 
Sent using K-9 Mail.


Re: [Pacemaker] Fencing dependency between bare metal host and its VM guests

2014-11-10 Thread Andrei Borzenkov
On Mon, 10 Nov 2014 10:07:18 +0100
Tomasz Kontusz tomasz.kont...@gmail.com wrote:

 I think the suggestion was to put shooting the host into the fencing path of a
 VM. That way, if you can't get the host to fence the VM (because the host is
 already dead), you just check whether the host was fenced.
 

Exactly. One thing I do not know is how it will behave in the case of multiple
VMs on the same host, i.e. whether Pacemaker will try to fence the host for
every VM or recognize that all VMs are dead after the first time the agent is
invoked.

 Daniel Dehennin daniel.dehen...@baby-gnu.org wrote:
 Andrei Borzenkov arvidj...@gmail.com writes:
 
 
 [...]
 
  Now I have one issue: when the bare metal host on which the VM is
  running dies, the VM is lost and cannot be fenced.
  
  Is there a way to make Pacemaker acknowledge the fencing of a VM
  running on a host when the host itself is fenced?
  
 
  Yes, you can define multiple stonith agents and the priority between them.
 
  http://clusterlabs.org/wiki/Fencing_topology
 
 Hello,
 
 If I understand correctly, fencing topology is a way to define several
 fencing devices for a node and try them consecutively until one succeeds.
 
 In my configuration, I group the VM stonith agents with the
 corresponding VM resource, to make them move together[1].
 
 Here is my use case:
 
 1. Resource ONE-Frontend-Group runs on nebula1
 2. nebula1 is fenced
 3. node one-frontend cannot be fenced
 
 Is there a way to say that the life of node one-frontend is tied to
 the state of resource ONE-Frontend?
 
 In that case, when node nebula1 is fenced, Pacemaker would know that
 resource ONE-Frontend is no longer running, so node one-frontend would be
 marked OFFLINE rather than UNCLEAN.
 
 Regards.
 
 Footnotes: 
 [1] 
 http://oss.clusterlabs.org/pipermail/pacemaker/2014-October/022671.html
 
 -- 
 Daniel Dehennin
 Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
 Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF
 
 
 
 
 




[Pacemaker] Fencing dependency between bare metal host and its VM guests

2014-11-07 Thread Daniel Dehennin
Hello,

I finally managed to integrate my VM into Corosync, and my dlm/clvm/GFS2
stack is running on it.

Now I have one issue: when the bare metal host on which the VM is
running dies, the VM is lost and cannot be fenced.

Is there a way to make Pacemaker acknowledge the fencing of a VM
running on a host when the host itself is fenced?

Regards.

-- 
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF




Re: [Pacemaker] Fencing dependency between bare metal host and its VM guests

2014-11-07 Thread Andrei Borzenkov
On Fri, 07 Nov 2014 17:46:40 +0100
Daniel Dehennin daniel.dehen...@baby-gnu.org wrote:

 Hello,
 
 I finally managed to integrate my VM into Corosync, and my dlm/clvm/GFS2
 stack is running on it.
 
 Now I have one issue: when the bare metal host on which the VM is
 running dies, the VM is lost and cannot be fenced.
 
 Is there a way to make Pacemaker acknowledge the fencing of a VM
 running on a host when the host itself is fenced?
 

Yes, you can define multiple stonith agents and the priority between them.

http://clusterlabs.org/wiki/Fencing_topology

 Regards.
 



___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org