[Pacemaker] When one node is up cluster IP not working

2013-03-05 Thread erkin kabataş
Hi,

I have a configuration problem with Heartbeat using the crm on option. I only
use a cluster IP as a resource. I have 3 nodes, and when two or three nodes are
up it works great: I can reach the active node via the cluster IP. But when
only one node is up, I cannot reach that node via the cluster IP.

I am using
 heartbeat-2.1.2-2.i386.rpm, heartbeat-pils-2.1.2-2.i386.rpm,
heartbeat-stonith-2.1.2-2.i386.rpm
packages on RHEL 5.5.

I also attached configuration files.

I am stuck on this problem; any help would be appreciated.

Thanks!
auth 2
2 sha1 HI!

<cib generated="false" admin_epoch="0" epoch="0" num_updates="0" have_quorum="false" ignore_dtd="false" num_peers="3" cib-last-written="Tue Mar  5 01:21:02 2013">
  <configuration>
    <crm_config/>
    <nodes>
      <node id="08e63c8e-3da7-4a63-8fe9-e3f178d79f69" uname="egeo1.netas.com" type="normal"/>
      <node id="71b0477c-dd4b-47b7-bd40-627385f4878a" uname="egeo2.netas.com" type="normal"/>
      <node id="2ca9db9e-b8e1-4a70-b779-6c8f8a2bf33e" uname="egeo3.netas.com" type="normal"/>
    </nodes>
    <resources>
      <primitive id="failover-ip" class="ocf" type="IPaddr" provider="heartbeat">
        <operations>
          <op id="failover-ip-monitor" name="monitor" interval="10s"/>
        </operations>
        <instance_attributes id="failover-ip-attribs">
          <attributes>
            <nvpair id="failover-ip-addr" name="ip" value="47.168.90.158"/>
          </attributes>
        </instance_attributes>
      </primitive>
    </resources>
    <constraints/>
  </configuration>
</cib>

<cib generated="false" admin_epoch="0" epoch="0" num_updates="0" have_quorum="false" ignore_dtd="false" num_peers="3" cib-last-written="Tue Mar  5 02:44:41 2013">
  <configuration>
    <crm_config/>
    <nodes>
      <node id="08e63c8e-3da7-4a63-8fe9-e3f178d79f69" uname="egeo1.netas.com" type="normal"/>
      <node id="71b0477c-dd4b-47b7-bd40-627385f4878a" uname="egeo2.netas.com" type="normal"/>
      <node id="2ca9db9e-b8e1-4a70-b779-6c8f8a2bf33e" uname="egeo3.netas.com" type="normal"/>
    </nodes>
    <resources>
      <primitive id="failover-ip" class="ocf" type="IPaddr" provider="heartbeat">
        <operations>
          <op id="failover-ip-monitor" name="monitor" interval="10s"/>
        </operations>
        <instance_attributes id="failover-ip-attribs">
          <attributes>
            <nvpair id="failover-ip-addr" name="ip" value="47.168.90.158"/>
          </attributes>
        </instance_attributes>
      </primitive>
    </resources>
    <constraints/>
  </configuration>
</cib>

<cib generated="false" admin_epoch="0" epoch="0" num_updates="0" have_quorum="true" ignore_dtd="false" num_peers="3" cib-last-written="Tue Mar  5 02:42:16 2013">
  <configuration>
    <crm_config/>
    <nodes>
      <node id="08e63c8e-3da7-4a63-8fe9-e3f178d79f69" uname="egeo1.netas.com" type="normal"/>
      <node id="71b0477c-dd4b-47b7-bd40-627385f4878a" uname="egeo2.netas.com" type="normal"/>
      <node id="2ca9db9e-b8e1-4a70-b779-6c8f8a2bf33e" uname="egeo3.netas.com" type="normal"/>
    </nodes>
    <resources>
      <primitive id="failover-ip" class="ocf" type="IPaddr" provider="heartbeat">
        <operations>
          <op id="failover-ip-monitor" name="monitor" interval="10s"/>
        </operations>
        <instance_attributes id="failover-ip-attribs">
          <attributes>
            <nvpair id="failover-ip-addr" name="ip" value="47.168.90.158"/>
          </attributes>
        </instance_attributes>
      </primitive>
    </resources>
    <constraints/>
  </configuration>
</cib>
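
The have_quorum="false" in two of the CIB snapshots above points at a likely
cause: with only one node of three up, the partition has no quorum, and by
default the cluster will not run resources without quorum. As a hedged sketch
(not from the thread), the usual availability-over-safety workaround on a
Heartbeat 2.1.x stack would be a crm_config attribute along these lines; a
later thread in this digest shows the equivalent no-quorum-policy: ignore
property on a newer stack:

crm_attribute -t crm_config -n no_quorum_policy -v ignore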


ha.cf
Description: Binary data
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] Fw: Fw: Fw: Cluster resources failing to move

2013-03-05 Thread Tommy Cooper
Thank you all for helping me, the problem appears to be solved 


- Forwarded Message -
From: emmanuel segura emi2f...@gmail.com
To: Tommy Cooper tomcoope...@yahoo.com; The Pacemaker cluster resource 
manager pacemaker@oss.clusterlabs.org 
Sent: Tuesday, March 5, 2013 12:21 AM
Subject: Re: [Pacemaker] Fw: Fw: Cluster resources failing to move


try with this

primitive p_asterisk ocf:heartbeat:asterisk \
meta migration-threshold=1 \
 params user=root group=root maxfiles=65536 \
 op start interval=0 timeout=30s \
 op monitor interval=10s timeout=30s \
 op stop interval=0 timeout=30s
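
A hedged aside on the suggestion above: once migration-threshold=1 has been
hit, the accumulated failcount keeps the resource away from that node until it
is cleared. Assuming crmsh and a hypothetical node name:

# inspect, then clear, the failcount so the node becomes eligible again
crm resource failcount p_asterisk show node1
crm resource cleanup p_asterisk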


2013/3/5 Tommy Cooper tomcoope...@yahoo.com

This config did not work
 
primitive p_asterisk ocf:heartbeat:asterisk \
 params user=root group=root maxfiles=65536 \
 meta migration-threshold=1 \
 op start interval=0 timeout=30s \
 op monitor interval=10s timeout=30s \

crm(live)configure# verify
   error: text2role:  Unknown role: Start
   error: get_target_role:  voip: Unknown value for target-role: Start
   error: text2role:  Unknown role: Start
   error: get_target_role:  voip: Unknown value for target-role: Start
   error: text2role:  Unknown role: Start
   error: get_target_role:  p_asterisk: Unknown value for target-role: Start
Errors found during check: config not valid



- Forwarded Message -
From: emmanuel segura emi2f...@gmail.com
To: Tommy Cooper tomcoope...@yahoo.com; The Pacemaker cluster resource 
manager pacemaker@oss.clusterlabs.org 

Sent: Monday, March 4, 2013 11:50 PM
Subject: Re: [Pacemaker] Fw: Cluster resources failing to move



it should be

primitive p_asterisk ocf:heartbeat:asterisk \
 params user=root group=root maxfiles=65536 \
 meta migration-threshold=1 \
 op start interval=0 timeout=30s \
 op monitor interval=10s timeout=30s \
 


2013/3/4 Tommy Cooper tomcoope...@yahoo.com

Is this the correct way to do it?
 
primitive p_asterisk ocf:heartbeat:asterisk \
 params user=root group=root maxfiles=65536 \
 op start interval=0 timeout=30s \
 op monitor interval=10s timeout=30s \
 op stop interval=0 timeout=30s migration-threshold=1
 
I tried stopping the asterisk service using service asterisk stop. I repeated
that at least 4 times, but the service keeps restarting on the same node


- Forwarded Message -
From: emmanuel segura emi2f...@gmail.com
To: Tommy Cooper tomcoope...@yahoo.com; The Pacemaker cluster resource 
manager pacemaker@oss.clusterlabs.org 

Sent: Monday, March 4, 2013 11:05 PM
Subject: Re: [Pacemaker] Cluster resources failing to move


From Suse Docs


7.4.2. Cleaning Up Resources¶
A resource will be automatically restarted if it fails, but each failure 
raises the resource's failcount. If a migration-threshold has been set for 
that resource, the node will no longer be allowed to run the resource as soon 
as the number of failures has reached the migration threshold. 


2013/3/4 Tommy Cooper tomcoope...@yahoo.com

I have removed the order and colocation statements but I am still getting the 
same results. Asterisk keeps restarting on the same server, how can I switch 
to the other server when asterisk fails? I used those statements to make sure 
that both services are running on the same server and to make sure that the 
virtual IP is started before asterisk.



- Forwarded Message -
From: Jake Smith jsm...@argotec.com
To: Tommy Cooper tomcoope...@yahoo.com; The Pacemaker cluster resource 
manager pacemaker@oss.clusterlabs.org 

Sent: Monday, March 4, 2013 10:00 PM
Subject: Re: [Pacemaker] Fw: Cluster resources failing to move



- Original Message -
 From: Tommy Cooper tomcoope...@yahoo.com
 To: pacemaker@oss.clusterlabs.org

 Sent: Monday, March 4, 2013 3:51:03 PM

 Subject: [Pacemaker] Fw:  Cluster resources failing to move
 
 
 
 
 Thank you for your prompt reply. I actually wanted to create an
 active/passive cluster, so if either the network or Asterisk fails
 these services could be migrated to the other server. As I already
 stated earlier, the current config notifies me if asterisk is down
 but does not start asterisk on the other server.


Did asterisk restart on the same server? - this is what I would expect 
pacemaker to do.

Removing the colocation (and order) statements didn't have any effect?

 
 

 - Forwarded Message -
 From: Jake Smith jsm...@argotec.com
 To: Tommy Cooper tomcoope...@yahoo.com; The Pacemaker cluster
 resource manager pacemaker@oss.clusterlabs.org
 Sent: Monday, March 4, 2013 9:29 PM
 Subject: Re: [Pacemaker] Cluster resources failing to move
 
 
 - Original Message -
  From: Tommy Cooper  tomcoope...@yahoo.com 
  To: pacemaker@oss.clusterlabs.org
  Sent: Monday, March 4, 2013 2:19:22 PM
  Subject: [Pacemaker] Cluster resources failing to move
  
  
  
  
  Hi,
  
  
  I am trying to configure a 2-node cluster using pacemaker 1.1.7 and
  corosync 1.4.1. I want pacemaker to provide the virtual IP
  (192.168.1.115), monitor Asterisk (PBX) and fail over to the other
  server. If I switch 

[Pacemaker] Pacemaker delays (long posting)

2013-03-05 Thread Michael Powell
I have recently assumed the responsibility for maintaining code on one of my 
company's products that uses Pacemaker/Heartbeat.  I'm still coming up to speed 
on this code, and would like to solicit comments about some particular 
behavior.  For reference, the Pacemaker version is 1.0.9.1, and Heartbeat is 
version 3.0.3.

This product uses two host systems, each of which supports several disk 
enclosures, operating in an active/passive mode.  The two hosts are connected 
by redundant, dedicated 10Gb Ethernet links, which are used for messaging 
between them.  The disks in each enclosure are controlled by an instance of an 
application called SS.  If an active host's SS application fails for some 
reason, then the corresponding application on the passive host will assume 
control of the disks.  Each application is assigned a Pacemaker resource, and 
the resource agent communicates with the appropriate SS instance.  For 
reference, here's a sample crm_mon output:


Last updated: Tue Mar  5 06:10:22 2013
Stack: Heartbeat
Current DC: mgraid-12241530rn01433-0 (f4e5e15c-d06b-4e37-89b9-4621af05128f) - 
partition with quorum
Version: 1.0.9-89bd754939df5150de7cd76835f98fe90851b677
2 Nodes configured, unknown expected votes
9 Resources configured.


Online: [ mgraid-12241530rn01433-0 mgraid-12241530rn01433-1 ]

Clone Set: Fencing
 Started: [ mgraid-12241530rn01433-0 mgraid-12241530rn01433-1 ]
Clone Set: cloneIcms
 Started: [ mgraid-12241530rn01433-0 mgraid-12241530rn01433-1 ]
Clone Set: cloneOmserver
 Started: [ mgraid-12241530rn01433-0 mgraid-12241530rn01433-1 ]
Master/Slave Set: ms-SS11451532RN01389
 Masters: [ mgraid-12241530rn01433-1 ]
 Slaves: [ mgraid-12241530rn01433-0 ]
Master/Slave Set: ms-SS11481532RN01465
 Masters: [ mgraid-12241530rn01433-0 ]
 Slaves: [ mgraid-12241530rn01433-1 ]
Master/Slave Set: ms-SS12171532RN01613
 Masters: [ mgraid-12241530rn01433-0 ]
 Slaves: [ mgraid-12241530rn01433-1 ]
Master/Slave Set: ms-SS12241530RN01433
 Masters: [ mgraid-12241530rn01433-0 ]
 Slaves: [ mgraid-12241530rn01433-1 ]
Master/Slave Set: ms-SS12391532RN01768
 Masters: [ mgraid-12241530rn01433-0 ]
 Slaves: [ mgraid-12241530rn01433-1 ]
Master/Slave Set: ms-SS12391532RN01772
 Masters: [ mgraid-12241530rn01433-0 ]
 Slaves: [ mgraid-12241530rn01433-1 ]

I've been investigating the system's behavior when one or more master SS 
instances crashes, simulated by a kill command.  I've noticed two behaviors of 
interest.

First, in a simple case, where one master SS is killed, it takes about 10-12 
seconds for the slave to complete the failover.  From the log files, the DC 
issues the following notifications to the slave SS:

* Pre_notify_demote

* Post_notify_demote

* Pre_notify_stop

* Post_notify_stop

* Pre_notify_promote

* Promote

* Post_notify_promote

* Monitor_3000

* Pre_notify_start

* Post_notify_start

These notifications and their confirmations appear to take about 1-2 seconds 
each, begging the following questions:

* Is this sequence of notifications expected?

* Is the 10-12 second timeframe expected?

Second, in a more complex case, where the master SS for each instance is 
assigned to the same can, and each SS is in turn killed with an approximate 
10-second delay between kill commands, there appear to be very long delays in 
processing the notifications.  These delays appear to be associated with these 
factors:

* After an SS instance is killed, there's a 10-second monitor 
notification which causes a new SS instance to be launched to replace the 
missing SS instance.

* It takes about 30 seconds for an SS instance to complete the startup 
process.  The resource agent waits for that startup to complete before 
returning to crmd.

* Until the resource agent returns, crmd does not process notifications 
for any other SS/resource.

The net effect of these delays varies from one SS instance to another.  In some 
cases, the normal failover occurs, taking 10-12 seconds.  In other cases, 
there is no failover to the other host's SS instance, and there is no 
master/active SS instance for 1-2 minutes (until an SS instance is re-launched 
following the kill), depending upon the number of disk enclosures and thus the 
number of SS instances.

My first question in this case is simply whether the serialization of 
notifications among the various SS resources is expected?  In other words, 
transition notifications for one resource are delayed until earlier 
notifications are completed.  Is this the expected behavior?  Secondly, once 
the SS instance has been restarted, there's apparently no attempt to complete 
the failover; the new SS instance assumes the active/master role.

Finally, a couple of general questions:

* Is there any reason to believe that a later version of Pacemaker 
would behave differently?

*  

Re: [Pacemaker] Fw: Fw: Fw: Cluster resources failing to move

2013-03-05 Thread Jake Smith
I just wanted to reiterate one thing that I think got lost at the bottom of my 
first response - might be helpful... 

You don't need these colocation and order statements if you have the resources 
grouped. The group is a syntax shortcut for writing order and colocation 
statements so the group is enforcing an order of ip then asterisk and a 
colocation of asterisk with ip. The colocation you had originally looked like 
it was backwards too. 
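
A hedged illustration of that equivalence in crmsh syntax (p_vip is a
hypothetical name for the virtual IP resource):

group voip p_vip p_asterisk
# behaves like the pair:
order o_ip_before_asterisk inf: p_vip p_asterisk
colocation c_asterisk_with_ip inf: p_asterisk p_vip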




Jake 
[Pacemaker] Fw: Fw: Fw: Fw: Cluster resources failing to move

2013-03-05 Thread Tommy Cooper
Thank you for pointing that out; I found those changes very useful.

Re: [Pacemaker] Heartbeat Anything cmdline_options vs. Removing active resource from a group

2013-03-05 Thread Reid, Mike
  Attached is an excerpt from our two node (active/passive) Web cluster. We
  are currently launching uWSGI via the ocf:heartbeat:anything RA (see below).
 
  I would like to make some slight changes to the cmdline_options argument
  on our running cluster... (via crm configure edit)
 
  Do I need to stop the resWSGI resource first before making this type of
  change?
 
 No. The cluster will detect the change and restart the resource (and
 anything that depends on it) as necessary.
 In general, it is safe to make whatever changes are necessary - as
 long as you are confident the changes themselves are correct :)


Good to know. :) It makes sense to me; I was hoping this was the approach we
could take. Thank you so much, Andrew.
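
For the archive, a hedged sketch of the kind of live edit discussed above,
assuming crmsh; the option value is purely hypothetical:

# open the resource definition in $EDITOR, adjust cmdline_options, save
crm configure edit resWSGI
# or set a single parameter non-interactively
crm resource param resWSGI set cmdline_options "--processes 4"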


Re: [Pacemaker] Pacemaker delays (long posting)

2013-03-05 Thread Andrew Beekhof
On Wed, Mar 6, 2013 at 2:01 AM, Michael Powell 
michael.pow...@harmonicinc.com wrote:

 I have recently assumed the responsibility for maintaining code on one of
 my company’s products that uses Pacemaker/Heartbeat.  I’m still coming up
 to speed on this code, and would like to solicit comments about some
 particular behavior.  For reference, the Pacemaker version is 1.0.9.1, and
 Heartbeat is version 3.0.3.

 This product uses two host systems, each of which supports several disk
 enclosures, operating in an “active/passive” mode.  The two hosts are
 connected by redundant, dedicated 10Gb Ethernet links, which are used for
 messaging between them.  The disks in each enclosure are controlled by an
 instance of an application called SS.  If an “active” host’s SS application
 fails for some reason, then the corresponding application on the “passive”
 host will assume control of the disks.  Each application is assigned a
 Pacemaker resource, and the resource agent communicates with the
 appropriate SS instance.  For reference, here’s a sample crm_mon output:
 
 (snip)
 
 I’ve been investigating the system’s behavior when one or more master SS
 instances crashes, simulated by a kill command.  I’ve noticed two
 behaviors of interest.
 
 First, in a simple case, where one master SS is killed, it takes about
 10-12 seconds for the slave to complete the failover.  From the log files,
 the DC issues the following notifications to the slave SS:
 
 * Pre_notify_demote
 * Post_notify_demote
 * Pre_notify_stop
 * Post_notify_stop
 * Pre_notify_promote
 * Promote
 * Post_notify_promote
 * Monitor_3000
 * Pre_notify_start
 * Post_notify_start
 
 These notifications and their confirmations appear to take about 1-2
 seconds each, begging the following questions:
 
 * Is this sequence of notifications expected?


Yes, it looks correct (if sub-optimal) to me.
A more recent version might provide a better experience.


 

 * Is the 10-12 second timeframe expected?


It's really dependent on what the RA (resource agent) does with the
notification (and therefore how long it takes).
Do you need the notifications turned on?  Some agents like drbd do need it,
but without knowing which agents you're using it's hard to say.
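
If the SS agent turns out not to need them, a hedged example of switching
notifications off on one of the master/slave sets shown above (crmsh syntax):

crm resource meta ms-SS11451532RN01389 set notify false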


 

 Second, in a more complex case, where the master SS for each instance is
 assigned to the same can, and each SS is in turn killed with an
 approximate 10-second delay between kill commands, there appear to be
 very long delays in processing the notifications.  These delays appear to
 be associated with these factors:
 
 * After an SS instance is killed, there’s a 10-second monitor
 notification which causes a new SS instance to be launched to replace the
 missing SS instance.


Whoa... monitor restarts the service if it detects a failure?
That is rarely a good idea.

 

 * It takes about 30 seconds for an SS instance to complete
 the startup process.  The resource agent waits for that startup to complete
 before returning to crmd.


Right, agents shouldn't say done until they really are.
Returning too soon usually just leads to people needing to insert
delays/sleeps 

[Pacemaker] [Problem][crmsh]The designation of the 'ordered' attribute becomes the error.

2013-03-05 Thread renayama19661014
Hi Dejan,
Hi Andrew,

As for the crm shell, the check of the meta attribute was revised by the
following patch.

 * http://hg.savannah.gnu.org/hgweb/crmsh/rev/d1174f42f4b3

This patch was backported in Pacemaker1.0.13.

 * 
https://github.com/ClusterLabs/pacemaker-1.0/commit/fa1a99ab36e0ed015f1bcbbb28f7db962a9d1abc#shell/modules/cibconfig.py

However, the ordered/colocated attributes of a group resource are treated as an
error by a crm shell that includes this patch.

--
(snip)
### Group Configuration ###
group master-group \
vip-master \
vip-rep \
meta \
ordered=false
(snip)

[root@rh63-heartbeat1 ~]# crm configure load update test2339.crm 
INFO: building help index
crm_verify[20028]: 2013/03/06_17:57:18 WARN: unpack_nodes: Blind faith: not 
fencing unseen nodes
WARNING: vip-master: specified timeout 60s for start is smaller than the 
advised 90
WARNING: vip-master: specified timeout 60s for stop is smaller than the advised 
100
WARNING: vip-rep: specified timeout 60s for start is smaller than the advised 90
WARNING: vip-rep: specified timeout 60s for stop is smaller than the advised 100
ERROR: master-group: attribute ordered does not exist  <- WHY?
Do you still want to commit? y
--

If I answer `yes` at the confirmation prompt, the change is applied, but it is
a problem that an error message is displayed at all.
 * The error occurs in the same way when I specify the colocated attribute.
And I noticed that there is no explanation of ordered/colocated for group
resources in the Pacemaker online help.

I think that specifying the ordered/colocated attributes on a group resource
should not be treated as an error.
In addition, I think that ordered/colocated should be added to the online help.

Best Regards,
Hideo Yamauchi.




Re: [Pacemaker] Is there an 'OR' type of constraint?

2013-03-05 Thread Andrew Beekhof
For ordering: yes, use resource sets with require-all=false
For colocation: no, it's a much harder problem unfortunately
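
A hedged sketch of the ordering form in CIB XML (all resource names
hypothetical): with require-all="false" on the first set, the dependent
resource may start as soon as either member is active.

<rsc_order id="order-nfs-mount">
  <resource_set id="set-nfs-sources" sequential="false" require-all="false">
    <resource_ref id="fs-via-ip-a"/>
    <resource_ref id="fs-via-ip-b"/>
  </resource_set>
  <resource_set id="set-consumer">
    <resource_ref id="service-using-mount"/>
  </resource_set>
</rsc_order>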

On Tue, Mar 5, 2013 at 6:51 AM, Doug Clow doug.c...@dashbox.com wrote:
 Hello,

 I have a Filesystem resource that is an NFS mount.  This NFS resource is 
 available at two separate IPs on two separate servers.  Is it possible to 
 create two Filesystem resources with this mount but point them to different 
 IPs and then have only one active at a time, but have a single active mount 
 satisfy the later constraints?  Something like an OR link between them.  That 
 way the NFS mount point would have failover.

 Best Regards,
 Doug


Re: [Pacemaker] [Problem][crmsh]The designation of the 'ordered' attribute becomes the error.

2013-03-05 Thread Andrew Beekhof
On Wed, Mar 6, 2013 at 12:37 PM,  renayama19661...@ybb.ne.jp wrote:
 Hi Dejan,
 Hi Andrew,

 As for the crm shell, the check of the meta attribute was revised by the
 following patch.

  * http://hg.savannah.gnu.org/hgweb/crmsh/rev/d1174f42f4b3

 This patch was backported in Pacemaker1.0.13.

  * 
 https://github.com/ClusterLabs/pacemaker-1.0/commit/fa1a99ab36e0ed015f1bcbbb28f7db962a9d1abc#shell/modules/cibconfig.py

 However, the ordered/colocated attributes of a group resource are treated as
 an error by a crm shell that includes this patch.

 (snip)

 If I answer `yes` at the confirmation prompt, the change is applied, but it is
 a problem that an error message is displayed at all.
  * The error occurs in the same way when I specify the colocated attribute.
 And I noticed that there is no explanation of ordered/colocated for group
 resources in the Pacemaker online help.

Because we don't want anyone to use it and this is the first step
towards its removal.
We have resource sets that negate the need for these kinds of groups.
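
A hedged example of that replacement for the unordered two-member group above
(crmsh syntax): dropping the group and keeping only a colocation gives the
same placement with no implied start order.

colocation c_vip-rep_with_vip-master inf: vip-rep vip-master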


 I think that the designation of the ordered,colocated attribute should not 
 become the error in group resource.

I don't know what xml crmsh is generating, but I can't see anything in
Pacemaker that would prevent it from being set.
It's not covered by the schema and group_unpack() handles it just fine.

 In addition, I think that ordered/colocated should be added to the online help.

 Best Regards,
 Hideo Yamauchi.




Re: [Pacemaker] standby attribute and same resources running at the same time

2013-03-05 Thread Andrew Beekhof
On Tue, Mar 5, 2013 at 4:20 AM, Leon Fauster leonfaus...@googlemail.com wrote:
 Dear list,

 Please excuse the triviality: I have started to deploy an HA environment
 in a test lab and therefore do not have much experience yet.



 I started to set up a 2-node cluster

   corosync-1.4.1-15.el6.x86_64
   pacemaker-1.1.8-7.el6.x86_64
   cman-3.0.12.1-49.el6.x86_64

 with rhel6.3 and then switched to rhel6.4.

 This update brings some differences. The crm shell is gone and pcs is added.
 Anyway, I found some equivalent commands to set up/configure resources.

 So far all good. I am doing some stress tests now and noticed that when
 rebooting one node (n2), that node (n2) will be marked as standby in the CIB
 (shown on the other node (n1)).

 After the reboot, crm_mon on that node (n2) shows that the other node (n1) is
 offline, and it begins to start the resources. Meanwhile the other node (n1),
 which wasn't rebooted, still shows n2 as standby. At that point both nodes are
 running the same resources. After a couple of minutes that situation is
 noticed and both nodes renegotiate the current state. Then one node takes over
 responsibility for providing the resources. On both nodes the previously
 rebooted node is still listed as standby.


   cat /var/log/messages |grep error
   Mar  4 17:32:33 cn1 pengine[1378]:error: native_create_actions: 
 Resource resIP (ocf::IPaddr2) is active on 2 nodes attempting recovery
   Mar  4 17:32:33 cn1 pengine[1378]:error: native_create_actions: 
 Resource resApache (ocf::apache) is active on 2 nodes attempting recovery
   Mar  4 17:32:33 cn1 pengine[1378]:error: process_pe_message: Calculated 
 Transition 1: /var/lib/pacemaker/pengine/pe-error-6.bz2
   Mar  4 17:32:48 cn1 crmd[1379]:   notice: run_graph: Transition 1 
 (Complete=9, Pending=0, Fired=0, Skipped=0, Incomplete=0, 
 Source=/var/lib/pacemaker/pengine/pe-error-6.bz2): Complete


   crm_mon -1
   Last updated: Mon Mar  4 17:49:08 2013
   Last change: Mon Mar  4 10:22:53 2013 via crm_resource on cn1.localdomain
   Stack: cman
   Current DC: cn1.localdomain - partition with quorum
   Version: 1.1.8-7.el6-394e906
   2 Nodes configured, 2 expected votes
   2 Resources configured.

   Node cn2.localdomain: standby
   Online: [ cn1.localdomain ]

   resIP (ocf::heartbeat:IPaddr2):   Started cn1.localdomain
   resApache (ocf::heartbeat:apache):Started cn1.localdomain


 I checked the init scripts and found that the standby behavior comes
 from a function that is called on service pacemaker stop (added in rhel6.4).

 cman_pre_stop()
 {
 cname=`crm_node --name`
 crm_attribute -N $cname -n standby -v true -l reboot
 echo -n "Waiting for shutdown of managed resources"
 ...

That will only last until the node comes back (the cluster will remove
it automatically), the core problem is that it appears not to have.
Can you file a bug and attach a crm_report for the period covered by
the restart?


 I could not delete the standby attribute with

 crm_attribute -G --node=cn2.localdomain -n standby
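
A hedged aside: -G only queries an attribute. Since the init script above sets
standby with -l reboot, deleting it would use -D with the same node and
lifetime:

crm_attribute -N cn2.localdomain -n standby -l reboot -D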



 okay - recap:

 1st. I have this delay where the two nodes don't see each
 other (after rebooting), and the result is resources running on both
 nodes while they should only run on one node. This will be corrected
 by the cluster itself, but this situation should not happen.

 2nd. The standby attribute (and there must be a reason why Red Hat
 added this) will prevent resources from migrating to that node. How
 do I delete this attribute?

 I appreciate any comments.

 --
 Leon



 A. $ cat /etc/cluster/cluster.conf
 <?xml version="1.0"?>
 <cluster name="HA" config_version="5">
   <logging debug="off"/>
   <clusternodes>
     <clusternode name="cn1.localdomain" votes="1" nodeid="1">
       <fence>
         <method name="pcmk-redirect">
           <device name="pcmk" port="cn1.localdomain"/>
         </method>
       </fence>
     </clusternode>
     <clusternode name="cn2.localdomain" votes="1" nodeid="2">
       <fence>
         <method name="pcmk-redirect">
           <device name="pcmk" port="cn2.localdomain"/>
         </method>
       </fence>
     </clusternode>
   </clusternodes>
   <fencedevices>
     <fencedevice name="pcmk" agent="fence_pcmk"/>
   </fencedevices>
   <rm>
     <failoverdomains/>
     <resources/>
   </rm>
 </cluster>


 B. $ pcs config
 Corosync Nodes:

 Pacemaker Nodes:
  cn1.localdomain cn2.localdomain

 Resources:
  Resource: resIP (provider=heartbeat type=IPaddr2 class=ocf)
   Attributes: ip=192.168.201.220 nic=eth0 cidr_netmask=24
   Operations: monitor interval=30s
  Resource: resApache (provider=heartbeat type=apache class=ocf)
   Attributes: httpd=/usr/sbin/httpd configfile=/etc/httpd/conf/httpd.conf
   Operations: monitor interval=1min

 Location Constraints:
 Ordering Constraints:
   start resApache then start resIP
 Colocation Constraints:
   resIP with resApache

 Cluster Properties:
  dc-version: 1.1.8-7.el6-394e906
  cluster-infrastructure: cman
  expected-quorum-votes: 2
  stonith-enabled: false
  no-quorum-policy: ignore

Re: [Pacemaker] [RFC] Automatic nodelist synchronization between corosync and pacemaker

2013-03-05 Thread Andrew Beekhof
On Thu, Feb 28, 2013 at 5:13 PM, Vladislav Bogdanov
bub...@hoster-ok.com wrote:
 28.02.2013 07:21, Andrew Beekhof wrote:
 On Tue, Feb 26, 2013 at 7:36 PM, Vladislav Bogdanov
 bub...@hoster-ok.com wrote:
 26.02.2013 11:10, Andrew Beekhof wrote:
 On Mon, Feb 18, 2013 at 6:18 PM, Vladislav Bogdanov
 bub...@hoster-ok.com wrote:
 Hi Andrew, all,

 I had an idea last night, that it may be worth implementing
 fully-dynamic cluster resize support in pacemaker,

 We already support nodes being added on the fly.  As soon as they show
 up in the membership we add them to the cib.

 Membership (runtime.totem.pg.mrp.srp.members) or nodelist (nodelist.node)?

 To my knowledge, only one (first) gets updated at runtime.
 Even if nodelist.node could be updated dynamically, we'd have to poll
 or be prompted to find out.

 It can, please see at the end of cmap_keys(8).
 Please also see cmap_track_add(3) for CMAP_TRACK_PREFIX flag (and my
 original message ;) ).

ACK :)
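
For reference, a hedged sketch of the runtime nodelist manipulation being
discussed; the key names follow cmap_keys(8), while the node index and address
are hypothetical:

# add a third node to the in-memory nodelist (corosync.conf must be
# edited separately if the change should persist across restarts)
corosync-cmapctl -s nodelist.node.2.nodeid u32 3
corosync-cmapctl -s nodelist.node.2.ring0_addr str 192.168.1.13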



 I recall that when I migrated from corosync 1.4 to 2.0 (somewhere near
 pacemaker 1.1.8 release time) and replaced old-style UDPU member list
 with nodelist.node, I saw all nodes configured in that nodelist appeared
 in a CIB. For me that was a regression, because with old-style config
 (and corosync 1.4) CIB contained only nodes seen online (4 of 16).

 That was a loophole that only worked when the entire cluster had been
 down and the nodes section was empty.

 Aha, that is what I've been hit by.

 People filed bugs explicitly asking for that loophole to be closed
 because it was inconsistent with what the cluster did on every
 subsequent startup.

 That is what I'm interested too. And what I propose should fix that too.

Ah, I must have misparsed, I thought you were looking for the opposite
behaviour.

So basically, you want to be able to add/remove nodes from nodelist.*
in corosync.conf and have pacemaker automatically add/remove them from
itself?

If corosync.conf gets out of sync (admin error or maybe a node was
down when you updated last) they might well get added back - I assume
you're ok with that?
Because there's no real way to know the difference between added
back and not removed from last time.

Or are you planning to never update the on-disk corosync.conf and only
modify the in-memory nodelist?



 That
 would be OK if number of clone instances does not raise with that...

 Why?  If clone-node-max=1, then you'll never have more than the number
 of active nodes - even if clone-max is greater.

 Active (online) or known (existing in a nodes section)?
 I've seen that as soon as node appears in nodes even in offline state,
 new clone instance is allocated.

$num_known instances will exist, but only $num_active will be running.


 Also, on one cluster with post-1.1.7 with openais plugin I have 16 nodes
 configured in totem.interface.members, but only three nodes in nodes
 CIB section, And I'm able to allocate at least 8-9 instances of clones
 with clone-max.

Yes, but did you set clone-node-max?  One is the global maximum, the
other is the per-node maximum.

 I believe that pacemaker does not query
 totem.interface.members directly with openais plugin,

Correct.

 and
 runtime.totem.pg.mrp.srp.members has only three nodes.
 Did that behavior change recently?

No.





 For node removal we do require crm_node --remove.

 Is this not sufficient?

 I think it would be more straight-forward if there is only one origin of
 membership information for entire cluster stack, so proposal is to
 automatically remove node from CIB when it disappears from corosync
 nodelist (due to removal by admin). That nodelist is not dynamic (read
 from a config and then may be altered with cmapctl).

 Ok, but there still needs to be a trigger.
 Otherwise we waste cycles continuously polling corosync for something
 that is probably never going to happen.

 Please see above (cmap_track_add).


 Btw. crm_node doesn't just remove the node from the cib; its existence
 is preserved in a number of caches which need to be purged.

 That could be done in a cmap_track_add's callback function too I think.

 It could be possible to have crm_node also use the CMAP API to remove
 it from the running corosync, but something would still need to edit
 corosync.conf

 Yes, that is up to the admin.
 Btw I think more about scenario Fabio explains in votequorum(8) in
 'allow_downscale' section - that is the one I'm interested in.


 IIRC, pcs handles all three components (corosync.conf, CMAP, crm_node)
 as well as the add case.

 Good to know. But, I'm not ready yet to switch to it.


 Of course, it is possible to use crm_node to remove node from CIB too
 after it disappeared from corosync, but that is not as elegant as
 automatic one IMHO. And, that should not be very difficult to implement.


 utilizing
 possibilities CMAP and votequorum provide.

 Idea is to:
 * Do not add nodes from nodelist to CIB if their join-count in cmap is
 zero (but do not touch CIB nodes which exist in a nodelist and 

Re: [Pacemaker] Block stonith when drbd inconsistent

2013-03-05 Thread Andrew Beekhof
Nodes shouldn't be getting fenced this often.  Do you know what is causing
this to happen?
You can also set resource-stickiness to prevent the resources
migrating back to A when it first comes back.
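
A hedged one-liner for that suggestion, assuming crmsh (the score is
arbitrary):

crm configure rsc_defaults resource-stickiness=100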

On Sun, Feb 24, 2013 at 9:11 PM, Jan Škoda le...@multihost.cz wrote:
 Hello,

 I'm searching for a way to block stonith until drbd peers are synchronized.

 Otherwise when server A is stonithed, then comes up and resources
 migrate back to A, server B can be stonithed as well, which leaves the
 DRBD array inconsistent until B comes up as well and synchronization can
 finish.

 I would prefer stonith waiting for synchronization to finish and then
 (in my scenario) kill B and start resources on A. Is it possible somehow?

 Thanks!
 --
 Honza 'Lefty' Škoda http://www.jskoda.cz/




[Pacemaker] The correction request of the log of booth

2013-03-05 Thread yusuke iida
Hi, Jiaju

I have a request about booth's logging.

I would like the log level of the ticket-expiry message to be changed from
debug to info; I think this log is important, since it records what actually
occurred.

I would also like the following information added to the log:
 * Which ticket expired?
 * Who held the ticket?

For example, the message could take the following form:
info: lease expires ... owner [0] ticket [ticketA]

diff --git a/src/paxos_lease.c b/src/paxos_lease.c
index 74b41b1..8681ecd 100644
--- a/src/paxos_lease.c
+++ b/src/paxos_lease.c
@@ -153,7 +153,8 @@ static void lease_expires(unsigned long data)
 	pl_handle_t plh = (pl_handle_t)pl;
 	struct paxos_lease_result plr;
 
-	log_debug("lease expires ...");
+	log_info("lease expires ... owner [%d] ticket [%s]",
+		 pl->owner, pl->name);
 	pl->owner = -1;
 	strcpy(plr.name, pl->name);
 	plr.owner = -1;


Regards,
Yusuke

--

METRO SYSTEMS CO., LTD

Yusuke Iida
Mail: yusk.i...@gmail.com




Re: [Pacemaker] Pacemaker resource migration behaviour

2013-03-05 Thread Andrew Beekhof
On Tue, Feb 5, 2013 at 9:13 PM, James Guthrie j...@open.ch wrote:
 Hi Andrew,

 The resource in this case was master-squid.init. The resource agent serves 
 as a master/slave OCF wrapper to a non-LSB init script. I forced the failure 
 by manually stopping that init script on the host.

Ok.

Generally init scripts aren't suitable to be used as a master/slave
resource - even when wrapped in an OCF script.
What do you do for promote/demote?
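
For context, a hedged sketch of the minimum a stateful (master/slave) agent is
expected to do beyond start/stop: implement promote and demote actions, and
publish a promotion preference from its monitor/start paths (lifetime and
score here are arbitrary):

crm_master -l reboot -v 100   # this node is eligible for promotion
crm_master -l reboot -D       # clear the preference on demote/stop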

Beyond that, are you saying that resources other than
master-squid.init were stopped?  That sounds very bad.


 Regards,
 James
 On Feb 5, 2013, at 10:56 AM, Andrew Beekhof and...@beekhof.net wrote:

 On Thu, Jan 31, 2013 at 3:04 AM, James Guthrie j...@open.ch wrote:
 Hi all,

 I'm having a bit of difficulty with the way that my cluster is behaving on 
 failure of a resource.

 The objective of my clustering setup is to provide a virtual IP, to which a 
 number of other services are bound. The services are bound to the VIP with 
 constraints to force the service to be running on the same host as the VIP.

 I have been testing the way that the cluster behaves if it is unable to 
 start a resource. What I observe is the following: the cluster tries to 
 start the resource on node 1,

 Can you define the resource?  You have a few and it matters :)

 fails 10 times, reaches the migration threshold, moves the resource to the 
 other host, fails 10 times, reaches the migration threshold. Now it has 
 reached the migration threshold on all possible hosts. I was then expecting 
 that it would stop the resource on all nodes and run all of the other 
 resources as though nothing were wrong. What I see though is that the 
 cluster demotes all master/slave resources, despite the fact that only one 
 of them is failing.

 I wasn't able to find a parameter which would dictate what the behaviour 
 should be if the migration failed on all available hosts. I must therefore 
 believe that the constraints configuration I'm using isn't doing quite what 
 I hope it's doing.

 Below is the configuration xml I am using on the hosts (no crmsh config, 
 sorry).

 I am using Corosync 2.3.0 and Pacemaker 1.1.8, built from source.

 Regards,
 James

 <!-- Configuration file for pacemaker -->
 <resources>
   <!-- resource for conntrackd -->
   <master id="master-conntrackd">
     <meta_attributes id="master-conntrackd-meta_attributes">
       <nvpair id="master-conntrackd-meta_attributes-notify" name="notify" value="true"/>
       <nvpair id="master-conntrackd-meta_attributes-interleave" name="interleave" value="true"/>
       <nvpair id="master-conntrackd-meta_attributes-target-role" name="target-role" value="Master"/>
       <nvpair id="master-conndtrakd-meta_attributes-failure-timeout" name="failure-timeout" value="600"/>
       <nvpair id="master-conntrackd-meta_attributes-migration-threshold" name="migration-threshold" value="10"/>
     </meta_attributes>
     <primitive id="conntrackd" class="ocf" provider="OSAG" type="conntrackd">
       <operations>
         <op id="conntrackd-slave-check" name="monitor" interval="60" role="Slave"/>
         <op id="conntrackd-master-check" name="monitor" interval="61" role="Master"/>
       </operations>
     </primitive>
   </master>
   <master id="master-condition">
     <meta_attributes id="master-condition-meta_attributes">
       <nvpair id="master-condition-meta_attributes-notify" name="notify" value="false"/>
       <nvpair id="master-condition-meta_attributes-interleave" name="interleave" value="true"/>
       <nvpair id="master-condition-meta_attributes-target-role" name="target-role" value="Master"/>
       <nvpair id="master-condition-meta_attributes-failure-timeout" name="failure-timeout" value="600"/>
       <nvpair id="master-condition-meta_attributes-migration-threshold" name="migration-threshold" value="10"/>
     </meta_attributes>
     <primitive id="condition" class="ocf" provider="OSAG" type="condition">
       <instance_attributes id="condition-attrs">
       </instance_attributes>
       <operations>
         <op id="condition-slave-check" name="monitor" interval="10" role="Slave"/>
         <op id="condition-master-check" name="monitor" interval="11" role="Master"/>
       </operations>
     </primitive>
   </master>
   <master id="master-ospfd.init">
     <meta_attributes id="master-ospfd-meta_attributes">
       <nvpair id="master-ospfd-meta_attributes-notify" name="notify" value="false"/>
       <nvpair id="master-ospfd-meta_attributes-interleave" name="interleave" value="true"/>
       <nvpair id="master-ospfd-meta_attributes-target-role" name="target-role" value="Master"/>
       <nvpair id="master-ospfd-meta_attributes-failure-timeout" name="failure-timeout" value="600"/>
       <nvpair id="master-ospfd-meta_attributes-migration-threshold" name="migration-threshold" value="10"/>
     </meta_attributes>
     <primitive id="ospfd" class="ocf" provider="OSAG" type="osaginit">
       <instance_attributes id="ospfd-attrs">
         <nvpair id="ospfd-script" name="script" value="ospfd.init"/>
       </instance_attributes>
       <operations>
         <op id="ospfd-slave-check" name="monitor" interval="10" role="Slave"/>
         <op id="ospfd-master-check" name="monitor" interval="11" role="Master"/>

Re: [Pacemaker] Pacemaker resource migration behaviour

2013-03-05 Thread Andrew Beekhof
Unfortunately the config only tells half of the story; the really
important parts are in the status.
Do you still happen to have
/opt/OSAGpcmk/pcmk/var/lib/pacemaker/pengine/pe-input-156.bz2 around on mu?
That would have what we need.

On Wed, Feb 6, 2013 at 1:12 AM, James Guthrie j...@open.ch wrote:
 Hi all,

 as a follow-up to this, I realised that I needed to slightly change the way 
 the resource constraints are put together, but I'm still seeing the same 
 behaviour.

 Below are an excerpt from the logs on the host and the revised xml 
 configuration. In this case, I caused two failures on the host mu, which 
 forced the resources onto nu then I forced two failures on nu. What can be 
 seen in the logs are the two detected failures on nu (the warning: 
 update_failcount: lines). After the two failures on nu, the VIP is migrated 
 back to mu, but none of the support resources are promoted with it.

 Regards,
 James

 1cFeb  5 14:58:45 mu crmd[31482]:  warning: update_failcount: Updating 
 failcount for sub-squid on nu after failed monitor: rc=9 (update=value++, 
 time=1360072725)
 1dFeb  5 14:58:45 mu crmd[31482]:   notice: do_state_transition: State 
 transition S_IDLE - S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL 
 origin=abort_transition_graph ]
 1dFeb  5 14:58:45 mu pengine[31481]:   notice: unpack_config: On loss of 
 CCM Quorum: Ignore
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: unpack_rsc_op: Processing 
 failed op monitor for sub-squid:0 on mu: master (failed) (9)
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: unpack_rsc_op: Processing 
 failed op monitor for sub-squid:0 on nu: master (failed) (9)
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: common_apply_stickiness: 
 Forcing master-squid away from mu after 2 failures (max=2)
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: common_apply_stickiness: 
 Forcing master-squid away from mu after 2 failures (max=2)
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: common_apply_stickiness: 
 Forcing master-squid away from mu after 2 failures (max=2)
 1dFeb  5 14:58:45 mu pengine[31481]:   notice: LogActions: Recover 
 sub-squid:0(Master nu)
 1dFeb  5 14:58:45 mu pengine[31481]:   notice: process_pe_message: 
 Calculated Transition 64: 
 /opt/OSAGpcmk/pcmk/var/lib/pacemaker/pengine/pe-input-152.bz2
 1dFeb  5 14:58:45 mu pengine[31481]:   notice: unpack_config: On loss of 
 CCM Quorum: Ignore
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: unpack_rsc_op: Processing 
 failed op monitor for sub-squid:0 on mu: master (failed) (9)
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: unpack_rsc_op: Processing 
 failed op monitor for sub-squid:0 on nu: master (failed) (9)
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: common_apply_stickiness: 
 Forcing master-squid away from mu after 2 failures (max=2)
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: common_apply_stickiness: 
 Forcing master-squid away from mu after 2 failures (max=2)
 1cFeb  5 14:58:45 mu pengine[31481]:  warning: common_apply_stickiness: 
 Forcing master-squid away from mu after 2 failures (max=2)
 1dFeb  5 14:58:45 mu pengine[31481]:   notice: LogActions: Recover 
 sub-squid:0(Master nu)
 1dFeb  5 14:58:45 mu pengine[31481]:   notice: process_pe_message: 
 Calculated Transition 65: 
 /opt/OSAGpcmk/pcmk/var/lib/pacemaker/pengine/pe-input-153.bz2
 1dFeb  5 14:58:48 mu crmd[31482]:   notice: run_graph: Transition 65 
 (Complete=14, Pending=0, Fired=0, Skipped=0, Incomplete=0, 
 Source=/opt/OSAGpcmk/pcmk/var/lib/pacemaker/pengine/pe-input-153.bz2): 
 Complete
 1dFeb  5 14:58:48 mu crmd[31482]:   notice: do_state_transition: State 
 transition S_TRANSITION_ENGINE - S_IDLE [ input=I_TE_SUCCESS 
 cause=C_FSA_INTERNAL origin=notify_crmd ]
 1dFeb  5 14:58:58 mu conntrack-tools[1677]: flushing kernel conntrack table 
 (scheduled)
 1cFeb  5 14:59:10 mu crmd[31482]:  warning: update_failcount: Updating 
 failcount for sub-squid on nu after failed monitor: rc=9 (update=value++, 
 time=1360072750)
 1dFeb  5 14:59:10 mu crmd[31482]:   notice: do_state_transition: State 
 transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL 
 origin=abort_transition_graph ]
 1dFeb  5 14:59:10 mu pengine[31481]:   notice: unpack_config: On loss of 
 CCM Quorum: Ignore
 1cFeb  5 14:59:10 mu pengine[31481]:  warning: unpack_rsc_op: Processing 
 failed op monitor for sub-squid:0 on mu: master (failed) (9)
 1cFeb  5 14:59:10 mu pengine[31481]:  warning: unpack_rsc_op: Processing 
 failed op monitor for sub-squid:0 on nu: master (failed) (9)
 1cFeb  5 14:59:10 mu pengine[31481]:  warning: common_apply_stickiness: 
 Forcing master-squid away from mu after 2 failures (max=2)
 1cFeb  5 14:59:10 mu pengine[31481]:  warning: common_apply_stickiness: 
 Forcing master-squid away from mu after 2 failures (max=2)
 1cFeb  5 14:59:10 mu pengine[31481]:  warning: common_apply_stickiness: 
 Forcing master-squid away from mu after 2 failures (max=2)
 1cFeb  5 14:59:10 mu pengine[31481]:  
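
The max=2 in the common_apply_stickiness warnings above corresponds to a
migration-threshold of 2 on the resource, and rc=9 in the update_failcount
lines is OCF_FAILED_MASTER, i.e. the monitor found the master instance
failed. Once the failcount reaches the threshold, the policy engine bans
the resource from that node until the failure history is cleared. A minimal
sketch, reusing the resource and node names from the log (exact flags vary
between Pacemaker versions):

    # query the failcount recorded for sub-squid on node nu
    crm_failcount -G -r sub-squid -N nu

    # clear the failure history so the node becomes eligible again
    crm_resource --cleanup --resource sub-squid --node nu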

Re: [Pacemaker] Pacemaker resource migration behaviour

2013-03-05 Thread Andrew Beekhof
On Wed, Feb 6, 2013 at 11:41 PM, James Guthrie j...@open.ch wrote:
 Hi David,

 Unfortunately crm_report doesn't work correctly on my hosts as we have 
 compiled from source with custom paths and apparently the crm_report and 
 associated tools are not built to use the paths that can be customised with 
 autoconf.

It certainly tries to:

   https://github.com/beekhof/pacemaker/blob/master/tools/report.common#L99

What does it say on your system (or, what paths did you give to autoconf)?
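
For reference, a minimal sketch of an invocation (the time window and
archive name here are illustrative):

    crm_report -f "2013-02-06 09:30" -t "2013-02-06 10:00" /tmp/migration-report

which collects the logs, CIB and pe-input files for that window using the
paths detected at configure time.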


 Despite that, I have done some investigation and think I may have found an 
 inconsistency. I have attached the pacemaker-relevant syslog, including the 
 pe-input files.

Great, I'll take a look now.

 The logfile starts where pacemaker detects that sub-squid is not running on 
 mu. It then fails over to nu, where two further failures take place. In order 
 to recover from these failures, the pengine produces transitions 106, 107, 
 108 and 109, with the corresponding pe-input files 46, 47, 48 and 49.

 The way I understand it, pacemaker works through the transitions until 
 something happens from outside, at which point the transitions are 
 recalculated and pacemaker continues on.

 Using crm_simulate to observe the transitions that should happen tells me 
 that the transitions calculated from pe-input-49 ought to have resulted in 
 the resources conntrackd, condition, sub-ospfd, sub-ripd and sub-squid being 
 promoted to master. In fact, this never happens, but the crmd reports the 
 transition as complete. It appears that it is never acknowledged anywhere 
 that the current state is not the desired outcome calculated by the pengine. 
 Is it possible that this is a bug?

 Regards,
 James



 On Feb 5, 2013, at 7:41 PM, David Vossel dvos...@redhat.com wrote:



 - Original Message -
 From: James Guthrie j...@open.ch
 To: The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org
 Sent: Tuesday, February 5, 2013 8:12:57 AM
 Subject: Re: [Pacemaker] Pacemaker resource migration behaviour

 Hi all,

 as a follow-up to this, I realised that I needed to slightly change
 the way the resource constraints are put together, but I'm still
 seeing the same behaviour.


 Below is an excerpt from the logs on the host and the revised XML
 configuration. In this case, I caused two failures on the host mu,
 which forced the resources onto nu; I then forced two failures on nu.
 What can be seen in the logs are the two detected failures on nu
 (the warning: update_failcount: lines). After the two failures on
 nu, the VIP is migrated back to mu, but none of the support
 resources are promoted with it.

 I can't tell much from this output.

 Run the steps you use to reproduce this and create a crm_report of the issue 
 so we can see both the logs and the pengine transition files that precede this.

 -- Vossel


 Regards,
 James




___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] Pacemaker resource migration behaviour

2013-03-05 Thread Andrew Beekhof
Evidently this is something that has since been fixed.

In your logs pe-input-47 results in:

1dFeb  6 09:37:52 mu pengine[6257]:   notice: LogActions: Demote
conntrackd:1 (Master -> Slave nu)
1dFeb  6 09:37:52 mu pengine[6257]:   notice: LogActions: Demote
condition:1 (Master -> Slave nu)
1dFeb  6 09:37:52 mu pengine[6257]:   notice: LogActions: Demote
sub-ospfd:1 (Master -> Slave nu)
1dFeb  6 09:37:52 mu pengine[6257]:   notice: LogActions: Demote
sub-ripd:1  (Master -> Slave nu)
1dFeb  6 09:37:52 mu pengine[6257]:   notice: LogActions: Demote
sub-squid:0 (Master -> Stopped nu)
1dFeb  6 09:37:52 mu pengine[6257]:   notice: LogActions: Move
eth1-0-192.168.1.10 (Started nu -> mu)
1dFeb  6 09:37:52 mu pengine[6257]:   notice: process_pe_message:
Calculated Transition 107:
/opt/OSAGpcmk/pcmk/var/lib/pacemaker/pengine/pe-input-47.bz2
Testing with the latest code shows:

Transition Summary:
 * Promote conntrackd:0 (Slave -> Master mu)
 * Demote  conntrackd:1 (Master -> Slave nu)
 * Promote condition:0  (Slave -> Master mu)
 * Demote  condition:1  (Master -> Slave nu)
 * Promote sub-ospfd:0  (Slave -> Master mu)
 * Demote  sub-ospfd:1  (Master -> Slave nu)
 * Promote sub-ripd:0   (Slave -> Master mu)
 * Demote  sub-ripd:1   (Master -> Slave nu)
 * Demote  sub-squid:0  (Master -> Slave nu)
 * Start   sub-squid:1  (mu)
 * Promote sub-squid:1  (Stopped -> Master mu)
 * Move    eth1-0-192.168.1.10  (Started nu -> mu)

Which looks more like what you're after.

I'm still very confused about why you're using master/slave though.
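
For comparison, the usual shape for tying the support daemons to the VIP is
a master/slave resource colocated with the IP. A minimal sketch in crm shell
syntax, reusing names from this thread (scores and meta attributes are
illustrative only):

    ms ms-squid sub-squid \
        meta master-max=1 clone-max=2 notify=true
    colocation vip-with-squid-master inf: eth1-0-192.168.1.10 ms-squid:Master
    order squid-promote-before-vip inf: ms-squid:promote eth1-0-192.168.1.10:start

If the daemons have no real master role to promote, plain clones colocated
with the VIP would be simpler.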

On Wed, Feb 6, 2013 at 11:41 PM, James Guthrie j...@open.ch wrote:
 Hi David,

 Unfortunately crm_report doesn't work correctly on my hosts as we have 
 compiled from source with custom paths and apparently the crm_report and 
 associated tools are not built to use the paths that can be customised with 
 autoconf.

 Despite that, I have done some investigation and think I may have found an 
 inconsistency. I have attached the pacemaker-relevant syslog, including the 
 pe-input files. The logfile starts where pacemaker detects that sub-squid is 
 not running on mu. It then fails over to nu, where two further failures take 
 place. In order to recover from these failures, the pengine produces 
 transitions 106, 107, 108 and 109, with the corresponding pe-input files 46, 
 47, 48 and 49.

 The way I understand it, pacemaker works through the transitions until 
 something happens from outside, at which point the transitions are 
 recalculated and pacemaker continues on.

 Using crm_simulate to observe the transitions that should happen tells me 
 that the transitions calculated from pe-input-49 ought to have resulted in 
 the resources conntrackd, condition, sub-ospfd, sub-ripd and sub-squid being 
 promoted to master. In fact, this never happens, but the crmd reports the 
 transition as complete. It appears that it is never acknowledged anywhere 
 that the current state is not the desired outcome calculated by the pengine. 
 Is it possible that this is a bug?

Not really, it means something* happened that we didn't expect.
Pacemaker stops the current transition** and automatically asks the
pengine for another set of calculations.


* sub-squid failing by the looks of it
1cFeb  6 09:37:52 mu crmd[6258]:  warning: update_failcount:
Updating failcount for sub-squid on nu after failed monitor: rc=9
(update=value++, time=1360139872)

** That's what this line is; notice the Skipped=15:

1dFeb  6 09:37:52 mu crmd[6258]:   notice: run_graph: Transition 107
(Complete=21, Pending=0, Fired=0, Skipped=15, Incomplete=6,
Source=/opt/OSAGpcmk/pcmk/var/lib/pacemaker/pengine/pe-input-47.bz2):
Stopped
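
A stopped transition can be replayed offline against its pe-input file to
see what the pengine intended at that point. A sketch, assuming the file
path from the log above (option names per recent crm_simulate; older builds
may differ):

    crm_simulate -S -x /opt/OSAGpcmk/pcmk/var/lib/pacemaker/pengine/pe-input-47.bz2

Here -S executes the transition in simulation and prints the resulting
resource state.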


 Regards,
 James



 On Feb 5, 2013, at 7:41 PM, David Vossel dvos...@redhat.com wrote:



 - Original Message -
 From: James Guthrie j...@open.ch
 To: The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org
 Sent: Tuesday, February 5, 2013 8:12:57 AM
 Subject: Re: [Pacemaker] Pacemaker resource migration behaviour

 Hi all,

 as a follow-up to this, I realised that I needed to slightly change
 the way the resource constraints are put together, but I'm still
 seeing the same behaviour.


 Below is an excerpt from the logs on the host and the revised XML
 configuration. In this case, I caused two failures on the host mu,
 which forced the resources onto nu; I then forced two failures on nu.
 What can be seen in the logs are the two detected failures on nu
 (the warning: update_failcount: lines). After the two failures on
 nu, the VIP is migrated back to mu, but none of the support
 resources are promoted with it.

 I can't tell much from this output.

 Run the steps you use to reproduce this and create a crm_report of the issue 
 so we can see both the logs and the pengine transition files that precede this.

 -- Vossel


 Regards,
 James


 ___
 Pacemaker mailing list: Pacemaker@oss.clusterlabs.org