On Wed, Dec 14, 2016, at 02:03 AM, Ulrich Windl wrote:
> >>> Christopher Harvey <c...@eml.cc> wrote on 13.12.2016 at 16:57 in
> >>> message
> <1481644670.3264872.817667121.13e97...@webmail.messagingengine.com>:
I was wondering if it is possible to tell pacemaker to store the cib.xml
file in a specific directory. I looked at the code and searched the web
a bit and haven't found anything. I just wanted to double check here in
case I missed anything.
Thanks,
Chris
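For reference: in Pacemaker 1.1 the CIB directory is fixed at build time
(CRM_CONFIG_DIR, normally /var/lib/pacemaker/cib), so there is no runtime
option to move cib.xml. One possible workaround is a bind mount over the
compiled-in path; a minimal sketch, where /srv/pacemaker-cib is a
hypothetical target directory:

    # stop the cluster stack before touching the CIB directory
    systemctl stop pacemaker
    mkdir -p /srv/pacemaker-cib
    # copy the existing CIB files, preserving ownership and permissions
    cp -a /var/lib/pacemaker/cib/. /srv/pacemaker-cib/
    # overlay the new location on the compiled-in path
    mount --bind /srv/pacemaker-cib /var/lib/pacemaker/cib
    systemctl start pacemaker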
I was wondering if it is possible to ask pacemaker to add a resource
constraint and ensure that a majority of the cluster sees the
constraint modification, or to fail if quorum is not achieved.
This is from within the context of a program issuing pacemaker commands,
not an operator, so race
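One way to approximate this from a script (a sketch, not from the
thread) is to check quorum with crm_node -q before writing. Note that
the check is inherently racy: quorum can be lost between the check and
the CIB update, which seems to be exactly the race the question is
heading toward.

    # crm_node -q prints 1 when the local partition has quorum
    if [ "$(crm_node -q)" = "1" ]; then
        pcs constraint location MsgBB-Active prefers vmr-132-3
    else
        echo "no quorum, refusing to add constraint" >&2
        exit 1
    fi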
On Thu, Sep 29, 2016, at 12:20 PM, Jan Pokorný wrote:
> On 28/09/16 16:55 -0500, Ken Gaillot wrote:
> > On 09/28/2016 04:04 PM, Christopher Harvey wrote:
> >> My corosync/pacemaker logs are seeing a bunch of messages like the
> >> following:
My corosync/pacemaker logs are seeing a bunch of messages like the
following:
Sep 22 14:50:36 [1346] node-132-60 crmd: info: action_synced_wait: Managed MsgBB-Active_meta-data_0 process 15613 exited with rc=4
Sep 22 14:50:36 [1346] node-132-60 crmd: error:
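In the OCF spec, rc=4 is OCF_ERR_PERM (insufficient privileges), and any
nonzero exit from a meta-data action means crmd could not read the
agent's metadata at all. A quick way to debug is to run the action by
hand; the agent path below is an assumption based on the
class=ocf/provider=solace resource shown later in these threads:

    export OCF_ROOT=/usr/lib/ocf
    # run the meta-data action directly and inspect its exit code
    /usr/lib/ocf/resource.d/solace/MsgBB-Active meta-data; echo "rc=$?"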
I'm surprised I'm having such a hard time figuring this out on my own.
I'm running pacemaker 1.1.13 and corosync-2.3.4 and want to change the
location of pacemaker.log.
By default it is located in /var/log.
I looked in corosync.c and found the following lines:
get_config_opt(config,
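(The get_config_opt() lines quoted above, from pacemaker's mcp/corosync.c
if that is the file meant, read corosync's logging options, which is one
of the two places the log location can come from.) With corosync 2.x,
pacemaker's detail log is normally set through the PCMK_logfile variable
in the sysconfig file; a sketch, where the /srv/log path is hypothetical:

    # /etc/sysconfig/pacemaker (on Debian/Ubuntu: /etc/default/pacemaker)
    # the target directory must be writable by the hacluster user
    PCMK_logfile=/srv/log/pacemaker.log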
On Thu, Apr 14, 2016, at 11:12 AM, Ken Gaillot wrote:
> On 04/14/2016 09:33 AM, Christopher Harvey wrote:
> > MsgBB-Active is a dummy resource that simply returns OCF_SUCCESS on
> > every operation and logs to a file.
>
> That's a common mistake, and will confuse the cluster.
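The usual fix for what Ken describes is to track state so that monitor
can tell started from stopped; an agent that exits 0 for every action
makes a stopped resource look running, so the cluster recovers it in
surprising ways. A minimal sketch in the style of ocf:heartbeat:Dummy
(state-file path illustrative; a real agent must also implement
meta-data and validate-all):

    #!/bin/sh
    STATE="/var/run/MsgBB-Active.state"
    case "$1" in
      start)   touch "$STATE"; exit 0 ;;              # OCF_SUCCESS
      stop)    rm -f "$STATE"; exit 0 ;;              # OCF_SUCCESS
      monitor) [ -e "$STATE" ] && exit 0 || exit 7 ;; # 7 = OCF_NOT_RUNNING
      *)       exit 3 ;;                              # OCF_ERR_UNIMPLEMENTED
    esac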
actually, toggling vmr-132-5 in the following simpler setup produces the
same service flap as before.
Cluster Name:
Corosync Nodes:
 192.168.132.5 192.168.132.4 192.168.132.3
Pacemaker Nodes:
 vmr-132-3 vmr-132-4 vmr-132-5
Resources:
 Resource: MsgBB-Active (class=ocf provider=solace
I have a 3 node cluster (see the bottom of this email for 'pcs config'
output). The MsgBB-Active and AD-Active services both flap
whenever a node joins or leaves the cluster. I trigger the leave and
join with a pacemaker service start and stop on any node.
Here is the happy steady
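Given the agent behaviour discussed above, the flapping is consistent
with a monitor that never reports "not running"; independently of that,
giving resources stickiness keeps them from migrating back when a node
rejoins. A sketch (the score 100 is arbitrary):

    # make resources prefer to stay where they are when membership changes
    pcs resource defaults resource-stickiness=100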
I am able to create a split brain situation in pacemaker 1.1.13 using
iptables in a 3 node cluster.
I have 3 nodes: vmr-132-3, vmr-132-4, and vmr-132-5.
All nodes are operational and form a 3 node cluster, with all nodes
members of that ring.
vmr-132-3 ---> Online: [ vmr-132-3 vmr-132-4
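A way to see the disagreement directly is to compare each node's own
view of membership and quorum while the iptables rules are in place; in
a genuine split the outputs differ between partitions:

    corosync-quorumtool -s            # quorum state and vote counts
    corosync-cmapctl | grep members   # this node's runtime membership view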
have happen
> > if a normal switchover to the other node, as in a cluster, would have happened ...
> >
> > I see fencing is not a solution; it's only required to forcefully take
> > control, which is not always the case.
> >
> > On Thu, Mar 17, 2016 at 12:49 PM, Ulrich Windl
>
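For context, the counter-argument made elsewhere in these threads is
that fencing is what lets the surviving partition recover safely, not
just a tool for forcibly taking control. A configuration sketch with
entirely hypothetical device parameters (any working stonith device
will do):

    pcs stonith create fence-vmr fence_xvm \
        pcmk_host_list="vmr-132-3 vmr-132-4 vmr-132-5"
    pcs property set stonith-enabled=true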
On Wed, Mar 16, 2016, at 04:00 PM, Digimer wrote:
> On 16/03/16 03:59 PM, Christopher Harvey wrote:
> > I am able to create a split brain situation in pacemaker 1.1.13 using
> > iptables in a 3 node cluster.
> >
> > I have 3 nodes, vmr-132-3, vmr-132-4, and v
On Thu, Mar 17, 2016, at 06:24 PM, Ken Gaillot wrote:
> On 03/17/2016 05:10 PM, Christopher Harvey wrote:
> > If I ignore pacemaker's existence, and just run corosync, corosync
> > disagrees about node membership in the situation presented in the first
> > email. While it's t