Is there any guidance table for window_size and max_messages for different
node configurations, or do I have to experiment each time to find the
correct values?
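As far as I know there is no official table; the right values depend on node count, netmtu, and traffic pattern, so some experimentation is usually needed. For reference, a minimal totem section with the defaults as I read them from the corosync.conf(5) man page (window_size 50, max_messages 17; both are tuning knobs, the comments below are my summary, not authoritative):

```
totem {
    version: 2

    # window_size: maximum number of messages that may be in transit
    # during one token rotation (default 50, per corosync.conf(5)).
    window_size: 50

    # max_messages: maximum number of messages a node may originate on
    # receipt of the token (default 17). The man page notes it may not
    # exceed 256000 / netmtu.
    max_messages: 17
}
```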
On Mon, Jan 2, 2017 at 5:35 PM, Jan Friesse wrote:
Hello,
I have a four node cluster. Each node connected with a centralized switch.
The MTU size is the default, 1500. On each node, a program continuously
tries to multicast as many messages as possible. With the default settings
(corosync.conf), buffer overflow does *not* occur till the program runs on
' property. I believe a CPG_TYPE_SAFE
implementation is required not only to guarantee that messages are delivered
at all processes but also to guarantee the ordering of configuration
messages during a network partition.
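To make the property I am asking about concrete, here is a small toy model (plain Python, nothing to do with the actual totem protocol or the corosync code base; `Member`, `broadcast`, and the message names are all made up for illustration). A single sequencer hands every member the same totally ordered stream, so the configuration change C1 lands at the same position for every surviving member:

```python
# Toy model of agreed/virtually-synchronous delivery (NOT corosync's
# implementation): one sequencer totally orders regular messages and
# configuration changes, so all members share the same delivery prefix.
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    delivered: list = field(default_factory=list)

def broadcast(members, event):
    # Agreed order: every listed member appends the same event in the
    # same position of its delivery stream.
    for m in members:
        m.delivered.append(event)

members = [Member("N1"), Member("N2"), Member("N3")]
for k in range(1, 4):
    broadcast(members, f"m({k})")        # m(1) .. m(3) to everyone
broadcast(members, "C1: config change")  # partition detected, C1 ordered
broadcast(members[:2], "m(4)")           # only the new configuration
                                         # {N1, N2} delivers m(4)

# Every member delivers the same prefix, with C1 at the same position.
prefix = members[0].delivered[:4]
assert all(m.delivered[:4] == prefix for m in members)
```

In this toy, N3 never sees m(4), but it agrees with N1 and N2 on everything up to and including C1, which is the ordering guarantee in question.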
--
Satish
On Mon, Jun 6, 2016 at 9:50 PM, satish kumar <satish.kr2...@gmail.com>
But is C1 *guaranteed* to be delivered *before* m(k)? Is there no case
where C1 is delivered after m(k)?
Regards,
Satish
On Mon, Jun 6, 2016 at 8:10 PM, Jan Friesse <jfrie...@redhat.com> wrote:
> satish kumar napsal(a):
>
> Hello honza, thanks for the response !
Hello honza, thanks for the response !
With state sync, I simply mean that 'k-1' messages were delivered to N1, N2
and N3 and they have applied these messages to change their program state.
N1.state = apply(m(k-1));
N2.state = apply(m(k-1));
N3.state = apply(m(k-1));
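A runnable sketch of that state-sync idea (my own illustration, with a hypothetical `apply` that just appends the message id; any deterministic transition function would do): since every node applies the same agreed sequence, all replicas hold equal state after m(k-1):

```python
# Toy sketch of "state sync": each node applies the same agreed message
# sequence, so after m(k-1) every replica holds identical state.
def apply(state, msg):
    # Hypothetical deterministic transition: append the message id.
    return state + [msg]

nodes = {"N1": [], "N2": [], "N3": []}
for k in range(1, 4):              # deliver m(1) .. m(3) in agreed order
    for name in nodes:
        nodes[name] = apply(nodes[name], f"m({k})")

assert nodes["N1"] == nodes["N2"] == nodes["N3"]
```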
The document you shared cleared
Hello,
Virtual Synchrony Property: messages are delivered in agreed order, and
configuration changes are delivered in agreed order relative to messages.
What happens to this property when a network partition splits the cluster
into two? Consider the following scenario (which I took from one of the