On 2013-09-13T12:20:54, Xiaomin Zhang wrote:
> Hi, Gurus:
> Here's a question about the service monitor interval: if this value is
> configured as 15 seconds, does this mean corosync/pacemaker will take an
> average of 15 seconds to reschedule a failed resource onto a ready node?
It'll take about a m
Hi, Gurus:
Here's a question about the service monitor interval: if this value is
configured as 15 seconds, does this mean corosync/pacemaker will take an
average of 15 seconds to reschedule a failed resource onto a ready node?
Thanks.
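For reference, the interval is set on the resource's monitor operation; a
minimal crm shell sketch (the resource name and timeout here are assumptions,
not from this thread):

    # monitor runs every 15s; a failure is noticed at the next scheduled run
    primitive dummy ocf:pacemaker:Dummy \
        op monitor interval="15s" timeout="30s"

With this, a failure is normally detected within one interval of occurring,
and recovery is scheduled once the failed monitor is reported.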
Hi
2013/9/12 Eloy Coto Pereiro :
> Hi,
>
> Thanks for your help, I use the same example. In this case Kamailio needs to
> start after postgresql, but I don't think this is a problem: the replication
> works OK without corosync. I stopped all the processes and started working
> with corosync.
>
> When I start cor
Hi,
Thanks for your help, I use the same example. In this case Kamailio needs to
start after postgresql, but I don't think this is a problem: the replication
works OK without corosync. I stopped all the processes and started working
with corosync.
When I start corosync I see this log in my slave:
Sep 12 16:12:
Hi all
I use a resource_set with require-all='false' as follows.
Dummy1 and dummy2 each have a "sleep 5" in their start action.
and
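For anyone following along, a minimal sketch of an ordered set with
require-all='false' in CIB XML (the ids and the third resource, dummy3, are
illustrative assumptions, not the original poster's config):

    <!-- dummy3 may start once EITHER dummy1 or dummy2 is active -->
    <rsc_order id="order-dummies">
      <resource_set id="set-dummies" sequential="false" require-all="false">
        <resource_ref id="dummy1"/>
        <resource_ref id="dummy2"/>
      </resource_set>
      <resource_set id="set-after">
        <resource_ref id="dummy3"/>
      </resource_set>
    </rsc_order>

With require-all="false" on the first set, the second set no longer waits for
both dummies, only for the first one to come up.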
Hi Eloy
2013/9/12 Eloy Coto Pereiro :
> Hi,
>
> I have issues with this config: for example, when the master is running, the
> corosync service uses pg_ctl, but on the slave pg_ctl doesn't start and
> replication doesn't work.
>
> This is my data:
>
>
> Online: [ master slave ]
> OFFLINE: [ ]
>
> Full list
Hi,
I have issues with this config: for example, when the master is running, the
corosync service uses pg_ctl, but on the slave pg_ctl doesn't start and
replication doesn't work.
This is my data:
Online: [ master slave ]
OFFLINE: [ ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started ma
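A master/slave PostgreSQL resource is usually modelled with the pgsql
resource agent in streaming-replication mode; a minimal crm shell sketch,
where node names, the master IP, and the paths are placeholder assumptions:

    # pgsql RA in replication mode; node_list/master_ip are placeholders
    primitive pgsql ocf:heartbeat:pgsql \
        params pgctl="/usr/bin/pg_ctl" pgdata="/var/lib/pgsql/data" \
            rep_mode="sync" node_list="master slave" \
            master_ip="192.168.0.10" restart_on_promote="true" \
        op monitor interval="4s" timeout="60s" \
        op monitor interval="3s" timeout="60s" role="Master"
    # one Master and one Slave instance across the two nodes
    ms msPostgresql pgsql \
        meta master-max="1" clone-max="2" clone-node-max="1" notify="true"

If pg_ctl never runs on the slave, a good first check is whether the slave's
start/monitor actions appear in its pacemaker log at all, and whether the ms
resource's clone-max actually covers both nodes.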
On 2013-09-12T16:56:35, Andrew Beekhof wrote:
> > The most directly equivalent solution would be to number the per-node
> > in-flight operations similar to what migration-threshold does. (I think
> > we can safely continue to treat all resources as equal to start with.)
> Agreed. Perhaps even re
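For context, migration-threshold is the existing per-node failure counter
being used as the model above; a minimal crm shell sketch (the resource name
is assumed):

    # move the resource off a node after 3 failures there;
    # failure-timeout lets the failure count expire after a minute
    primitive dummy ocf:pacemaker:Dummy \
        meta migration-threshold="3" failure-timeout="60s"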
No idea on this one?
Sent: Tuesday, September 10, 2013 8:07 AM
To: pacemaker@oss.clusterlabs.org
Subject: [Pacemaker] Mysql multiple slaves, slaves restarting occasionally
without a reason
Hi,
We have a MySQL cluster which works fine when I have a single master ("A") and
a single slave ("B"). Failover
On 09/09/2013, at 6:46 PM, Heikki Manninen wrote:
> Hello Andreas, thanks for your input, much appreciated.
>
> On 5.9.2013, at 16.39, "Andreas Mock" wrote:
>
>> 1) The second output of crm_mon shows a resource IP_database
>> which is not shown in the initial crm_mon output and also
>> not in
On 11/09/2013, at 2:57 PM, Andrey Groshev wrote:
> Hello Christine, Andrew and all.
>
> I'm sorry - I was a little unwell, so I did not answer.
> Where did we end up with this stream of messages?
> Who will change: corosync or pacemaker?
For now, make sure you specify a nodeid and name.
Longer term, Chrissie is
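For anyone hitting the same thing, a minimal corosync.conf sketch with
explicit nodeid and name entries (the addresses and node names are
placeholders):

    nodelist {
        node {
            # first cluster node; address is a placeholder
            ring0_addr: 192.168.122.101
            nodeid: 1
            name: node1
        }
        node {
            ring0_addr: 192.168.122.102
            nodeid: 2
            name: node2
        }
    }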
On 12/09/2013, at 4:44 PM, Lars Marowsky-Bree wrote:
> On 2013-09-12T14:34:02, Andrew Beekhof wrote:
>
>>> Well, they're all doing something completely different.
>> No, they're all crude approximations designed to stop the cluster as a whole
>> from using up so much cpu/network/etc that reco