Re: [ClusterLabs] CIB: op-status=4 ?

2017-05-23 Thread Radoslaw Garbacz
Thanks, your explanation is very helpful considering that it happens rarely
and only on the first boot after VMs are created.

On Mon, May 22, 2017 at 9:34 PM, Ken Gaillot  wrote:

> On 05/19/2017 02:03 PM, Radoslaw Garbacz wrote:
> > Hi,
> >
> > I have some more information regarding this issue (pacemaker debug logs).
> >
> > Firstly, I have not mentioned some probably important facts:
> > 1) this happens rarely
> > 2) this happens only on the first boot
> > 3) turning on debug in corosync/pacemaker significantly reduced the
> > frequency of this happening, i.e. without debug roughly every 7th
> > cluster creation, with debug roughly every 66th cluster creation.
> >
> > This is a 3-node cluster on Azure Cloud, and it does not seem like the
> > resource agent is reporting an error, because all nodes log proper "not
> > running" results:
> >
> > The resource in question is named "dbx_head_head".
> >
> > node1)
> > May 19 13:15:41 [6872] olegdbx39-vm-0 stonith-ng: debug: xml_patch_version_check: Can apply patch 2.5.32 to 2.5.31
> > head.ocf.sh (dbx_head_head)[7717]: 2017/05/19_13:15:42 DEBUG: head_monitor: return 7
> > May 19 13:15:42 [6873] olegdbx39-vm-0 lrmd: debug: operation_finished: dbx_head_head_monitor_0:7717 - exited with rc=7
> > May 19 13:15:42 [6873] olegdbx39-vm-0 lrmd: debug: operation_finished: dbx_head_head_monitor_0:7717:stderr [ -- empty -- ]
> > May 19 13:15:42 [6873] olegdbx39-vm-0 lrmd: debug: operation_finished: dbx_head_head_monitor_0:7717:stdout [ -- empty -- ]
> > May 19 13:15:42 [6873] olegdbx39-vm-0 lrmd: debug: log_finished: finished - rsc:dbx_head_head action:monitor call_id:14 pid:7717 exit-code:7 exec-time:932ms queue-time:0ms
> >
> >
> > node2)
> > May 19 13:15:41 [6266] olegdbx39-vm02 stonith-ng: debug: xml_patch_version_check: Can apply patch 2.5.31 to 2.5.30
> > head.ocf.sh (dbx_head_head)[6485]: 2017/05/19_13:15:41 DEBUG: head_monitor: return 7
> > May 19 13:15:41 [6267] olegdbx39-vm02 lrmd: debug: operation_finished: dbx_head_head_monitor_0:6485 - exited with rc=7
> > May 19 13:15:41 [6267] olegdbx39-vm02 lrmd: debug: operation_finished: dbx_head_head_monitor_0:6485:stderr [ -- empty -- ]
> > May 19 13:15:41 [6267] olegdbx39-vm02 lrmd: debug: operation_finished: dbx_head_head_monitor_0:6485:stdout [ -- empty -- ]
> > May 19 13:15:41 [6267] olegdbx39-vm02 lrmd: debug: log_finished: finished - rsc:dbx_head_head action:monitor call_id:14 pid:6485 exit-code:7 exec-time:790ms queue-time:0ms
> > May 19 13:15:41 [6266] olegdbx39-vm02 stonith-ng: debug: xml_patch_version_check: Can apply patch 2.5.32 to 2.5.31
> > May 19 13:15:41 [6266] olegdbx39-vm02 stonith-ng: debug: xml_patch_version_check: Can apply patch 2.5.33 to 2.5.32
> >
> >
> > node3)
> > == the logs here are different - there is no probing, just a stop attempt
> > (with the proper exit code) ==
> >
> > == reporting a non-existent resource ==
> >
> > May 19 13:15:29 [6293] olegdbx39-vm03 lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from d2c8a871-410a-4006-be52-ee684c0a5f38: rc=0, reply=0, notify=0
> > May 19 13:15:29 [6293] olegdbx39-vm03 lrmd: debug: process_lrmd_message: Processed lrmd_rsc_exec operation from d2c8a871-410a-4006-be52-ee684c0a5f38: rc=10, reply=1, notify=0
> > May 19 13:15:29 [6293] olegdbx39-vm03 lrmd: debug: log_execute: executing - rsc:dbx_first_datas action:monitor call_id:10
> > May 19 13:15:29 [6293] olegdbx39-vm03 lrmd: info: process_lrmd_get_rsc_info: Resource 'dbx_head_head' not found (2 active resources)
>
> FYI, this is normal. It just means the lrmd hasn't been asked to do
> anything with this resource before, so it's not found in the lrmd's memory.
>
> > May 19 13:15:29 [6293] olegdbx39-vm03 lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from d2c8a871-410a-4006-be52-ee684c0a5f38: rc=0, reply=0, notify=0
> > May 19 13:15:29 [6293] olegdbx39-vm03 lrmd: info: process_lrmd_rsc_register: Added 'dbx_head_head' to the rsc list (3 active resources)
> > May 19 13:15:40 [6293] olegdbx39-vm03 lrmd: debug: process_lrmd_message: Processed lrmd_rsc_register operation from d2c8a871-410a-4006-be52-ee684c0a5f38: rc=0, reply=1, notify=1
> > May 19 13:15:29 [6292] olegdbx39-vm03 stonith-ng: debug: xml_patch_version_check: Can apply patch 2.5.9 to 2.5.8
> > May 19 13:15:40 [6292] olegdbx39-vm03 stonith-ng: debug: xml_patch_version_check: Can apply patch 2.5.10 to 2.5.9
> > May 19 13:15:40 [6292] olegdbx39-vm03 stonith-ng: debug: xml_patch_version_check: Can apply patch 2.5.11 to 2.5.10
> > May 19 13:15:40 [6292] 

Re: [ClusterLabs] failcount is not getting reset after failure_timeout if monitoring is disabled

2017-05-23 Thread Ken Gaillot
On 05/23/2017 08:00 AM, ashutosh tiwari wrote:
> Hi,
> 
> We are running a two-node cluster (active (X) / passive (Y)) with multiple
> resources of type IPaddr2.
> Running monitor operations for multiple IPaddr2 resources is actually
> hogging the CPU,
> as we have configured a very low value for the monitor interval (200 msec).

That is very low. Although times are generally specified in msec in the
pacemaker configuration, pacemaker generally has 1-second granularity in
the implementation, so this is probably treated the same as a 1s interval.
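
For illustration, a 1-second monitor in crm shell syntax would look something
like this (a sketch only; the resource name and addresses are made up):

  crm configure primitive vip1 ocf:heartbeat:IPaddr2 \
      params ip=192.168.122.10 cidr_netmask=24 \
      op monitor interval=1s timeout=20s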

> 
> To avoid this problem, we are trying to use netlink notifications for
> monitoring the floating IP and updating the fail count for the corresponding
> IPaddr2 resource using crm_failcount. Along with this, we have disabled
> the IPaddr2 monitoring.

There is a better approach.

Directly modifying fail counts is not a good idea. Fail counts are being
overhauled in Pacemaker 1.1.17 and later, and crm_failcount will only be
able to query or delete a fail count, not set or increment it. There
won't be a convenient way to modify a fail count, because we want to
discourage relying on an implementation detail that can change.
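
The remaining supported uses are expected to look something like this
(resource and node names are illustrative):

  crm_failcount --query -r my_ip -N node1    # read the current fail count
  crm_failcount --delete -r my_ip -N node1   # clear it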

> Things work fine up to here: the IPaddr2 resource migrates to the other
> node (Y) once the fail count equals the migration threshold (1), and Y
> becomes active due to resource colocation constraints.
> 
> We have configured the failure timeout to 3 sec and expected it to clear
> the fail count on the initially active node (X).
> The problem is that the fail count never gets reset on X, and thus the
> cluster fails to move back to X.

Technically, it's not the fail count that expires, but a particular
failed operation that expires. Even though manually increasing the fail
count will result in recovery actions, if there is no failed operation
in the resource history, then there's nothing to expire.

However, pacemaker does provide a way to do what you want: see the
crm_resource(8) man page for the -F/--fail option. It records a fake
operation failure in the resource history and processes it as if it
were a real failure.
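
A minimal sketch (resource and node names are illustrative):

  crm_resource --fail --resource my_ip --node node1

Because that puts a failed operation into the resource history, your
failure-timeout then has something to expire.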

> However, if we enable the monitoring, everything works fine and the fail
> count gets reset, allowing the fallback.
> 
> 
> Regards,
> Ashutosh T
FYI, there's an idea for a future feature that could also be helpful
here. We're thinking of creating a new ocf:pacemaker:IP resource agent
that would be based on systemd's networking support. This would allow
pacemaker to be notified by systemd of IP failures without having to
poll. I'm not sure how systemd itself detects the failures. No timeline
on when this might be available, though.



Re: [ClusterLabs] both nodes OFFLINE

2017-05-23 Thread 石井 俊直
Hi.

Thanks for the reply. And sorry for the noise: the problem has been solved.
As mentioned, the corosync versions were not the same; "syncing" the
versions solved the problem. This was just an installation problem:
although we used Ansible to update the rpm, the update failed and we
missed that it had happened.
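
For reference, a quick way to confirm that every node runs the same stack
versions (standard CLI queries, run on each node):

  pacemakerd --features   # pacemaker version and CRM feature set
  corosync -v             # corosync version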

> On 2017/05/23 7:12, Ken Gaillot wrote:
> 
> On 05/13/2017 01:36 AM, 石井 俊直 wrote:
>> Hi.
>> 
>> We sometimes have a problem in our two-node cluster on CentOS7. Let
>> node-2 and node-3 be the names of the nodes. When the problem happens,
>> both nodes are recognized as OFFLINE on node-3, and on node-2 only
>> node-3 is recognized as OFFLINE.
>> 
>> When that happens, the following log message is added repeatedly on
>> node-2, and the log file (/var/log/cluster/corosync.log) grows to
>> hundreds of megabytes in a short time. The log message content on
>> node-3 is different.
>> 
>> The erroneous state is temporarily resolved if the OS of node-2 is
>> restarted. On the other hand, restarting the OS of node-3 results in
>> the same state.
>> 
>> I've searched the ML archives and found a post (Mon Oct 1 01:27:39 CEST
>> 2012) about the "Discarding update with feature set" problem. According
>> to that message, our problem may be solved by removing
>> /var/lib/pacemaker/crm/cib.* on node-2.
>> 
>> What I want to know is whether removing the above files on just one of
>> the nodes is safe. If there is another method to solve the problem, I'd
>> like to hear it.
>> 
>> Thanks.
>> 
>> —— from corosync.log
>> cib: error: cib_perform_op: Discarding update with feature set '3.0.11' greater than our own '3.0.10'
> 
> This implies that the pacemaker versions are different on the two nodes.
> Usually, when the pacemaker version changes, the feature set version
> also changes, which means that it introduces new features that won't
> work with older pacemaker versions.
> 
> Running a cluster with mixed pacemaker versions in such a case is
> allowed, but only during a rolling upgrade. Once an older node leaves
> the cluster for any reason, it will not be allowed to rejoin until it is
> upgraded.
> 
> Removing the cib files won't help, since node-2 apparently does not
> support node-3's pacemaker version.
> 
> If that's not the situation you are in, please give more details, as
> this should not be possible otherwise.
> 
>> cib: error: cib_process_request: Completed cib_replace operation for section 'all': Protocol not supported (rc=-93, origin=node-3/crmd/12708, version=0.83.30)
>> crmd: error: finalize_sync_callback: Sync from node-3 failed: Protocol not supported
>> crmd: info: register_fsa_error_adv: Resetting the current action list
>> crmd: warning: do_log: Input I_ELECTION_DC received in state S_FINALIZE_JOIN from finalize_sync_callback
>> crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_INTEGRATION | input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=finalize_sync_callback
>> crmd: info: crm_update_peer_join: initialize_join: Node node-2[1] - join-6329 phase 2 -> 0
>> crmd: info: crm_update_peer_join: initialize_join: Node node-3[2] - join-6329 phase 2 -> 0
>> crmd: info: update_dc: Unset DC. Was node-2
>> crmd: info: join_make_offer: join-6329: Sending offer to node-2
>> crmd: info: crm_update_peer_join: join_make_offer: Node node-2[1] - join-6329 phase 0 -> 1
>> crmd: info: join_make_offer: join-6329: Sending offer to node-3
>> crmd: info: crm_update_peer_join: join_make_offer: Node node-3[2] - join-6329 phase 0 -> 1
>> crmd: info: do_dc_join_offer_all: join-6329: Waiting on 2 outstanding join acks
>> crmd: info: update_dc: Set DC to node-2 (3.0.10)
>> crmd: info: crm_update_peer_join: do_dc_join_filter_offer: Node node-2[1] - join-6329 phase 1 -> 2
>> crmd: info: crm_update_peer_join: do_dc_join_filter_offer: Node node-3[2] - join-6329 phase 1 -> 2
>> crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN | input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state
>> crmd: info: crmd_join_phase_log: join-6329: node-2=integrated
>> crmd: info: crmd_join_phase_log: join-6329: node-3=integrated
>> crmd: notice: do_dc_join_finalize: Syncing the Cluster Information Base from node-3 to rest of cluster | join-6329
>> crmd: notice: do_dc_join_finalize: Requested version <cib crm_feature_set="3.0.11" validate-with="pacemaker-2.5" epoch="84" num_updates="1" admin_epoch="0" cib-last-written="Thu May 11 08:05:45 2017" update-origin="node-2" update-client="crm_resource" update-user="root" have-quorum="1"/>
>> cib: info: cib_process_request: Forwarding cib_sync operation for section 'all' to node-3 (origin=local/crmd/12710)
>> cib: info: cib_process_replace: Digest matched on replace from node-3: 85a19c7927c54ccb15794f2720e07ce1
>> cib: info: 

[ClusterLabs] Pacemaker 1.1.17 Release Candidate 2

2017-05-23 Thread Ken Gaillot
The second release candidate for Pacemaker version 1.1.17 is now
available at:

https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-1.1.17-rc2

This release contains multiple fixes related to the new bundle feature,
plus:

* A regression introduced in Pacemaker 1.1.15 has been discovered and
fixed. When a Pacemaker Remote connection needed to be recovered, any
actions on that node were not ordered after the connection recovery,
potentially leading to unnecessary failures and recovery actions before
arriving at the correct state.

* The fencing daemon monitors the cluster configuration for constraints
related to fence devices, to know whether to enable or disable them on
the local node. Previously, after reading the initial configuration, it
could detect later changes or removals of constraints, but not
additions. Now, it can.
-- 
Ken Gaillot 



[ClusterLabs] failcount is not getting reset after failure_timeout if monitoring is disabled

2017-05-23 Thread ashutosh tiwari
Hi,

We are running a two-node cluster (active (X) / passive (Y)) with multiple
resources of type IPaddr2.
Running monitor operations for multiple IPaddr2 resources is actually hogging
the CPU,
as we have configured a very low value for the monitor interval (200 msec).

To avoid this problem, we are trying to use netlink notifications for
monitoring the floating IP and updating the fail count for the corresponding
IPaddr2 resource using crm_failcount. Along with this, we have disabled the
IPaddr2 monitoring.


Things work fine up to here: the IPaddr2 resource migrates to the other node (Y)
once the fail count equals the migration threshold (1), and Y becomes active due
to resource colocation constraints.

We have configured the failure timeout to 3 sec and expected it to clear the
fail count on the initially active node (X).
The problem is that the fail count never gets reset on X, and thus the cluster
fails to move back to X.


However, if we enable the monitoring, everything works fine and the fail count
gets reset, allowing the fallback.


Regards,
Ashutosh T


Re: [ClusterLabs] "Connecting" Pacemaker with another cluster manager

2017-05-23 Thread Timo
On 05/23/2017 09:44 AM, Kristoffer Grönlund wrote:
> Timo  writes:
> 
>> Hi,
>>
>> I have a proprietary cluster manager running on a bunch of (four) nodes.
>> It decides where to run the daemon for which HA is required based on its
>> own set of (undisclosed) requirements and decisions. This is,
>> unfortunately, unavoidable due to business requirements.
>>
>> However, I also have to put Pacemaker onto the nodes in order to provide
>> an additional daemon running in HA mode. (I cannot do this using the
>> existing cluster manager, as it is a closed system.)
>>
>> I have to make sure that the additional daemon (which I plan to
>> coordinate using Pacemaker) only runs on the machine where the daemon
>> (controlled by the existing, closed cluster manager) runs. I could check
>> for local VIPs, for example, to determine whether it runs on a node or not.
>>
>> Is there any way to make Pacemaker "check" for the existence of a local
>> (V)IP so that I could "connect" both cluster managers?
>>
>> In short: I need Pacemaker to put the single instance of a daemon
>> exactly onto the node where the other cluster manager decided to run the
>> (primary) daemon.
> 
> Hi,
> 
> I'm not sure I completely understand the problem description, but if I
> parsed it correctly:
> 
> What you can do is run an external script which sets a node attribute on
> the node that has the external cluster manager daemon, and have a
> constraint which locates the additional daemon based on that node
> attribute.

Exactly what I needed! Thanks!

Timo

> Cheers,
> Kristoffer
> 
>>
>> Best regards,
>>
>> Timo
>>
> 



Re: [ClusterLabs] "Connecting" Pacemaker with another cluster manager

2017-05-23 Thread Kristoffer Grönlund
Timo  writes:

> Hi,
>
> I have a proprietary cluster manager running on a bunch of (four) nodes.
> It decides where to run the daemon for which HA is required based on its
> own set of (undisclosed) requirements and decisions. This is,
> unfortunately, unavoidable due to business requirements.
>
> However, I also have to put Pacemaker onto the nodes in order to provide
> an additional daemon running in HA mode. (I cannot do this using the
> existing cluster manager, as it is a closed system.)
>
> I have to make sure that the additional daemon (which I plan to
> coordinate using Pacemaker) only runs on the machine where the daemon
> (controlled by the existing, closed cluster manager) runs. I could check
> for local VIPs, for example, to determine whether it runs on a node or not.
>
> Is there any way to make Pacemaker "check" for the existence of a local
> (V)IP so that I could "connect" both cluster managers?
>
> In short: I need Pacemaker to put the single instance of a daemon
> exactly onto the node where the other cluster manager decided to run the
> (primary) daemon.

Hi,

I'm not sure I completely understand the problem description, but if I
parsed it correctly:

What you can do is run an external script which sets a node attribute on
the node that has the external cluster manager daemon, and have a
constraint which locates the additional daemon based on that node
attribute.
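
For example (a sketch only; the attribute name "ext-daemon" and the
resource/node names are made up, and the constraint is in crm shell syntax):

  # from your external script, on the node where the closed cluster
  # manager is running its primary daemon:
  crm_attribute --node node1 --name ext-daemon --update 1
  # and clear it on nodes where it is not:
  crm_attribute --node node2 --name ext-daemon --delete

  # keep the pacemaker-managed daemon off nodes without the attribute:
  location loc-extra-daemon extra_daemon \
      rule -inf: not_defined ext-daemon or ext-daemon ne 1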

Cheers,
Kristoffer

>
> Best regards,
>
> Timo
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



[ClusterLabs] "Connecting" Pacemaker with another cluster manager

2017-05-23 Thread Timo
Hi,

I have a proprietary cluster manager running on a bunch of (four) nodes.
It decides where to run the daemon for which HA is required based on its
own set of (undisclosed) requirements and decisions. This is,
unfortunately, unavoidable due to business requirements.

However, I also have to put Pacemaker onto the nodes in order to provide
an additional daemon running in HA mode. (I cannot do this using the
existing cluster manager, as it is a closed system.)

I have to make sure that the additional daemon (which I plan to
coordinate using Pacemaker) only runs on the machine where the daemon
(controlled by the existing, closed cluster manager) runs. I could check
for local VIPs, for example, to determine whether it runs on a node or not.

Is there any way to make Pacemaker "check" for the existence of a local
(V)IP so that I could "connect" both cluster managers?

In short: I need Pacemaker to put the single instance of a daemon
exactly onto the node where the other cluster manager decided to run the
(primary) daemon.

Best regards,

Timo



[ClusterLabs] two node cluster. each node shows other node offline.

2017-05-23 Thread Jimmy Prescott
Hello all,

I have two nginx nodes running nginx/1.11.10 (nginx-plus-r12-p2),
Corosync Cluster Engine 2.3.5, and Pacemaker 1.1.14 on Ubuntu 16.04.1 LTS.

This cluster is intended to replace our old nginx cluster running on 14.04
and older versions of corosync/pacemaker.

On initial setup of the cluster everything works wonderfully, and I can put
a node on standby and failover works as expected. However, if I reboot one
of the nodes, the cluster gets into a split situation where each node thinks
the other node is offline. I've tried numerous things to correct it, but I
cannot get them both to show as online.

crm status from nginx1:

root@prod-nginx1:~# crm status
Online: [ prod-nginx1 ]
OFFLINE: [ prod-nginx2 ]

Full list of resources:

 ClusterIP (ocf::heartbeat:IPaddr2): Started prod-nginx1
 ClusterIPRestricted (ocf::heartbeat:IPaddr2): Started prod-nginx1
 Nginx (ocf::heartbeat:nginx): Started prod-nginx1

and crm status from nginx2:

root@prod-nginx2:~# crm status
Online: [ prod-nginx2 ]
OFFLINE: [ prod-nginx1 ]

Full list of resources:

 ClusterIP (ocf::heartbeat:IPaddr2): Started prod-nginx2
 ClusterIPRestricted (ocf::heartbeat:IPaddr2): Started prod-nginx2
 Nginx (ocf::heartbeat:nginx): Started prod-nginx2

I've tried forcing the nodes back online and restarting both pacemaker and
corosync on both servers, but nothing seems to work. I do not have this
issue with corosync/pacemaker on Ubuntu 14.04.
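
In case it helps to diagnose, the membership that corosync itself sees on
each node can be checked with the standard tools:

  corosync-quorumtool -s            # quorum state and member list
  corosync-cmapctl | grep members   # runtime membership from corosync's cmap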

Here is the current corosync.conf, which works on Ubuntu 14.04:

totem {
    version: 2
    secauth: on
    cluster_name: pacemaker1
    transport: udpu
    token: 1000
    token_retransmits_before_loss_const: 10
}

nodelist {
    node {
        ring0_addr: 10.10.16.100
        nodeid: 101
    }
    node {
        ring0_addr: 10.10.16.101
        nodeid: 102
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
    wait_for_all: 1
    last_man_standing: 1
    auto_tie_breaker: 0
}

logging {
    # Log the source file and line where messages are being
    # generated. When in doubt, leave off. Potentially useful for
    # debugging.
    fileline: off
    # Log to standard error. When in doubt, set to no. Useful when
    # running in the foreground (when invoking "corosync -f").
    to_stderr: no
    # Log to a log file. When set to "no", the "logfile" option
    # must not be set.
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    # Log to the system log daemon. When in doubt, set to yes.
    to_syslog: yes
    # Log debug messages (very verbose). When in doubt, leave off.
    debug: off
    # Log messages with time stamps. When in doubt, set to on
    # (unless you are only logging to syslog, where double
    # timestamps can be annoying).
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}