Thanks for your response Ken. I'm puzzled ... in my case nodes remain
UNCLEAN (offline) until dc-deadtime expires, even when both nodes are up
and corosync is quorate.
I see the following from crmd when I have dc-deadtime=2min
Dec 15 21:34:33 max04 crmd[13791]: notice: Quorum acquired
Dec 15
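For anyone following along, a quick way to watch this is crm_mon; output
details vary by version, so treat this as a sketch:

  # Show cluster status once; nodes stay UNCLEAN (offline) until a DC is
  # elected, after which the "Current DC:" line in the header is populated.
  crm_mon -1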
> On 12/15/2016 02:02 PM, al...@amisw.com wrote:
>> primitive ip_apache_localnet ocf:heartbeat:IPaddr2 \
>>     params ip="10.0.0.99" cidr_netmask="32" \
>>     op monitor interval="30s"
>> clone cl_ip_apache_localnet ip_apache_localnet \
>>     meta globally-unique="true" clone-max="3" clone-node-max="1"
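For context: with globally-unique=true, IPaddr2 shares the one address via
an iptables CLUSTERIP rule, each clone instance answering a hash bucket of
source addresses. A rough way to check placement and the rule, assuming the
configuration above:

  # Where is each clone instance running?
  crm_mon -1 | grep ip_apache_localnet
  # Inspect the CLUSTERIP rule IPaddr2 created on a node:
  iptables -L INPUT -n | grep CLUSTERIP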
On 12/15/2016 02:00 PM, Chris Walker wrote:
> Hello,
>
> I have a quick question about dc-deadtime. I believe that Digimer and
> others on this list might have already addressed this, but I want to
> make sure I'm not missing something.
>
> If my understanding is correct, dc-deadtime sets the amount of time that
> must elapse before a cluster is formed
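For reference, dc-deadtime is an ordinary cluster property; a minimal
sketch of setting it (the 2min value mirrors the setting mentioned above):

  # crm shell:
  crm configure property dc-deadtime=2min
  # or with pcs:
  pcs property set dc-deadtime=2min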
On 12/15/2016 02:02 PM, al...@amisw.com wrote:
>>
>> Seeing your configuration might help. Did you set globally-unique=true
>> and clone-node-max=3 on the clone? If not, the other nodes can't pick up
>> the lost node's share of requests.
>
> Yes for both, I have globally-unique=true, and I changed clone-node-max=3
> to clone-node-max=2, and now, as I
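Putting that change into the configuration quoted earlier in the thread,
the clone would now read (sketch, same resource names as above):

  clone cl_ip_apache_localnet ip_apache_localnet \
      meta globally-unique="true" clone-max="3" clone-node-max="2"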
On 12/15/2016 12:37 PM, al...@amisw.com wrote:
> Hi,
>
> I've been having trouble for a week and can't find a solution by myself.
> Any help will be really appreciated!
> I've used corosync / pacemaker for 3 or 4 years and everything has worked
> well, for failover and load-balancing.
>
> I have a shared IP between 3 servers, and need to remove one for upgrade.
> But after I
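A common way to take one node out of the cluster for an upgrade is standby
mode; a sketch, with "srv3" standing in for the hypothetical node name:

  # Move all resources off the node and keep them off:
  crm node standby srv3
  # ... do the upgrade, then bring it back:
  crm node online srv3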
Thanks for the detailed explanation. This was very helpful.
I removed the option
startup-fencing: false
and now the warning message is gone.
On 14/12/16 20:12, Ken Gaillot wrote:
> On 12/14/2016 11:14 AM, Denis Gribkov wrote:
>> Hi Everyone,
>> Our company has a 15-node asynchronous cluster without
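For anyone needing to do the same, the property can also be deleted
outright so the (safe) default of true applies again; a sketch:

  # Remove startup-fencing from the cluster configuration:
  crm_attribute --type crm_config --name startup-fencing --delete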
Hello Guys,
Is it possible to list only the failed resources, as an array?
I want to clean up the failed resources with a for loop.
Kind Regards
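One rough approach is to parse crm_mon's XML status, where each entry in
the failure list carries an op_key of the form
<resource>_<action>_<interval>; a sketch (fragile and version-dependent):

  crm_mon --as-xml \
    | grep -oP 'op_key="\K[^"]+' \
    | sed -E 's/_[^_]+_[0-9]+$//' \
    | sort -u \
    | while read -r rsc; do
        crm_resource --cleanup --resource "$rsc"
      done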
On 12/12/16 21:36 +0100, Jan Pokorný wrote:
> Changelog highlights for v0.59.7 (also available as a tag message):
>
> - bug fix release (bash completion + shebangs, regard resource-agents version)
> - bug fixes:
> . output of {ccs,pcs}2pcscmd commands could previously confuse users
> as to