[Pacemaker] order constraint based on any one of many

2010-08-23 Thread Patrick Irvine

Hi,

I am setting up a Pacemaker/Corosync/Glusterfs HA cluster set.

Pacemaker ver. 1.0.9.1

With Glusterfs I have 4 nodes serving replicated (RAID1) storage 
back-ends and up to 5 servers mounting the store.  Without getting into 
the specifics of how Gluster works, simply put, as long as any one of 
the 4 backend nodes is running, all of the 5 servers will be able to 
mount the store.


I have started setting up a testing cluster and have the following: (crm 
configure show output)


node test1
node test2
node test3
node test4
primitive glfs ocf:cybersites:glusterfs \
params volfile=repstore.vol mount_dir=/home \
op monitor interval=10s timeout=30
primitive glfsd-1 ocf:cybersites:glusterfsd \
params volfile=glfs.vol \
op monitor interval=10s timeout=30 \
meta target-role=Started
primitive glfsd-1-IP ocf:heartbeat:IPaddr2 \
params ip=192.168.5.221 nic=eth1 cidr_netmask=24 \
op monitor interval=5s
primitive glfsd-2 ocf:cybersites:glusterfsd \
params volfile=glfs.vol \
op monitor interval=10s timeout=30 \
meta target-role=Started
primitive glfsd-2-IP ocf:heartbeat:IPaddr2 \
params ip=192.168.5.222 nic=eth1 cidr_netmask=24 \
op monitor interval=5s \
meta target-role=Started
primitive glfsd-3 ocf:cybersites:glusterfsd \
params volfile=glfs.vol \
op monitor interval=10s timeout=30 \
meta target-role=Started
primitive glfsd-3-IP ocf:heartbeat:IPaddr2 \
params ip=192.168.5.223 nic=eth1 cidr_netmask=24 \
op monitor interval=5s
primitive glfsd-4 ocf:cybersites:glusterfsd \
params volfile=glfs.vol \
op monitor interval=10s timeout=30 \
meta target-role=Started
primitive glfsd-4-IP ocf:heartbeat:IPaddr2 \
params ip=192.168.5.224 nic=eth1 cidr_netmask=24 \
op monitor interval=5s
group glfsd-1-GROUP glfsd-1-IP glfsd-1
group glfsd-2-GROUP glfsd-2-IP glfsd-2
group glfsd-3-GROUP glfsd-3-IP glfsd-3
group glfsd-4-GROUP glfsd-4-IP glfsd-4
clone clone-glfs glfs \
meta clone-max=4 clone-node-max=1 target-role=Started
location block-glfsd-1-GROUP-test2 glfsd-1-GROUP -inf: test2
location block-glfsd-1-GROUP-test3 glfsd-1-GROUP -inf: test3
location block-glfsd-1-GROUP-test4 glfsd-1-GROUP -inf: test4
location block-glfsd-2-GROUP-test1 glfsd-2-GROUP -inf: test1
location block-glfsd-2-GROUP-test3 glfsd-2-GROUP -inf: test3
location block-glfsd-2-GROUP-test4 glfsd-2-GROUP -inf: test4
location block-glfsd-3-GROUP-test1 glfsd-3-GROUP -inf: test1
location block-glfsd-3-GROUP-test2 glfsd-3-GROUP -inf: test2
location block-glfsd-3-GROUP-test4 glfsd-3-GROUP -inf: test4
location block-glfsd-4-GROUP-test1 glfsd-4-GROUP -inf: test1
location block-glfsd-4-GROUP-test2 glfsd-4-GROUP -inf: test2
location block-glfsd-4-GROUP-test3 glfsd-4-GROUP -inf: test3


Now I need a way of saying that clone-glfs can start once any of 
glfsd-1, glfsd-2, glfsd-3 or glfsd-4 has started.
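
As a hedged aside: Pacemaker 1.0.x has no native "any one of a set" 
ordering, but later 1.1.x releases added resource sets with 
require-all=false on order constraints, which express exactly this.  A 
minimal CIB sketch, assuming a release new enough to support it (the 
constraint id and set ids below are made up for illustration):

```
<rsc_order id="glfs-after-any-glfsd" score="INFINITY">
  <!-- require-all=false: the second set may start once ANY member
       of this first set is active -->
  <resource_set id="any-glfsd" sequential="false" require-all="false">
    <resource_ref id="glfsd-1"/>
    <resource_ref id="glfsd-2"/>
    <resource_ref id="glfsd-3"/>
    <resource_ref id="glfsd-4"/>
  </resource_set>
  <resource_set id="then-glfs">
    <resource_ref id="clone-glfs"/>
  </resource_set>
</rsc_order>
```

On 1.0.9.1 this attribute is not available, so a workaround (such as the 
master/slave hack discussed later in the thread) would be needed.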


Any ideas?  I have read the crm CLI document, as well as many iterations 
of Clusters from Scratch, etc.


I just can't seem to find an answer.  Can it be done?

Pat.

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


Re: [Pacemaker] order constraint based on any one of many

2010-08-23 Thread Vishal
From what I have read and understood, a clone will run simultaneously 
on all the nodes specified in the config.  I do not see how a clone 
can be started only after a resource group is started.  You can 
alternatively add the clone to each group and ensure that when any of 
the groups starts, the clone runs with it.




[Pacemaker] Updated openSUSE packages in network:ha-clustering repo

2010-08-23 Thread Tim Serong
Hi All,

Just a quick note for openSUSE users - there's updated packages now
in the network:ha-clustering and network:ha-clustering:Factory repos,
build for SLE 11, SLE 11 SP1, openSUSE 11.2, openSUSE 11.3 and Factory:

  http://download.opensuse.org/repositories/network:/ha-clustering/
  http://download.opensuse.org/repositories/network:/ha-clustering:/Factory/

This includes:

  - cluster-glue 1.0.6
  - corosync 1.2.7
  - csync2 1.34
  - hawk 0.3.5
  - ldirectord 1.0.3
  - libdlm 3.00.01
  - ocfs2-tools 1.4.3
  - openais 1.1.3
  - pacemaker 1.1.2.1
  - pacemaker-mgmt 2.0.0
  - resource-agents 1.0.3

Happy clustering,

Tim


-- 
Tim Serong tser...@novell.com
Senior Clustering Engineer, OPS Engineering, Novell Inc.






[Pacemaker] Best way to find master node

2010-08-23 Thread Bob Schatz
I would like to find the master node for a resource.

On 1.0.9.1, when I do:

# crm resource status ms-SSWD-WCAW30019072
resource ms-SSWD-WCAW30019072 is  running on: box-0
resource ms-SSWD-WCAW30019072 is  running on: box-1

This does not tell me if it is master or slave.

I found this thread:

http://www.gossamer-threads.com/lists/linuxha/pacemaker/60434?search_string=crm_resource%20master%20;#60434


but I could not find a bug filed.

Can I file a bug for this?  Would it be on crm_resource?

Is there any workaround for this?
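
As a possible workaround, a one-shot status dump usually shows the role 
of each multi-state instance.  A sketch (the exact output layout below 
is illustrative, not a verbatim capture from this cluster):

```
# One-shot cluster status; master/slave sets are listed with their
# roles, roughly like:
#   Master/Slave Set: ms-SSWD-WCAW30019072
#       Masters: [ box-0 ]
#       Slaves:  [ box-1 ]
crm_mon -1
```

Grepping that output for "Masters:" is cruder than a proper 
crm_resource option, but it does identify the master node.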


Thanks,

Bob


  



Re: [Pacemaker] order constraint based on any one of many

2010-08-23 Thread Patrick Irvine

Hi Vishal & list,

Thanks for the info.  Unfortunately that won't do, since this clone 
(glfs) is the actual mounting of the users' home directories and needs 
to be mounted whether the local glfsd (server) is running or not.  I do 
think I have a solution; it's somewhat of a hack.


If I turn my glfsd-x (servers) into a single master with multiple slaves 
(cloned resources), then I could order the glfs (client) clone after the 
master starts.


i.e.:

order glfs-after-glfsd-ORDER inf: clone-glfsd:master clone-glfs

This would achieve what I want, I think.

I would of course have to ensure that even if only one of the glfsd-x 
servers is running, it would be master.

Any comments?  (And was I understandable?)

Pat.
