On 08/30/2016 01:58 PM, chenhj wrote:
> Hi,
>
> This is a continuation of the email below (I did not subscribe to this mailing list):
>
> http://clusterlabs.org/pipermail/users/2016-August/003838.html
>
>> From the above, I suspect that the node with the network loss was the
>> DC, and from its point of view, it was the other node that went away.
>
> Yes.
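
As an aside, you can confirm which node is currently the DC with a
one-shot crm_mon status dump; assuming the Pacemaker CLI tools are
installed on the node:

  # print cluster status once; the "Current DC:" line names the
  # node coordinating cluster decisions
  crm_mon -1 | grep "Current DC"
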
On 08/28/2016 04:15 AM, chenhj wrote:
> Hi all,
>
> When I use the following command to simulate packet loss on the network
> at one member of my 3-node Pacemaker+Corosync cluster, it sometimes
> causes Pacemaker on another node to exit.
>
> tc qdisc add dev eth2 root netem loss 90%
>
> Is there any method to avoid this problem?
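
As a side note, when experimenting with netem it helps to be able to
inspect the rule and restore normal traffic afterwards; assuming eth2
is the interconnect used above:

  # show the qdisc currently attached to eth2
  tc qdisc show dev eth2
  # remove the netem loss rule, restoring normal delivery
  tc qdisc del dev eth2 root netem
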
> [root@node3 ~]# ps