- **status**: review --> fixed
- **Comment**:

commit d07d6f3363fae78d579bbb37366cc5b814d35f0b (HEAD -> develop, origin/develop)
Author: thuan.tran <thuan.t...@dektech.com.au>
Date:   Wed Apr 24 11:31:29 2019 +0700

    mds: use TIPC multicast for fragmented messages [#3033]

    - A sender may broadcast big messages (> 65K) followed by small messages (< 65K).
    The current MDS loops over all destinations and unicasts the fragmented messages
    to each destination one by one, while non-fragmented messages are multicast to
    all destinations in one go. Receivers may therefore get the messages in the
    wrong order: a non-fragmented message may arrive before the fragmented ones.
    For example, this may lead to OUT OF ORDER for IMMNDs during IMMD sync.

    - Solution: use TIPC multicast instead of per-destination unicast for fragmented
    messages, to avoid reordering of the arriving broadcast messages.




---

** [tickets:#3033] mds: order is not guaranteed if broadcasting a large package**

**Status:** fixed
**Milestone:** 5.19.06
**Created:** Mon Apr 22, 2019 03:18 AM UTC by Vu Minh Nguyen
**Last Updated:** Fri Apr 26, 2019 08:50 AM UTC
**Owner:** Thuan


When *broadcasting a large package* that exceeds the maximum MDS direct buffer 
size (~65K bytes) to a list of receivers, the package is fragmented into smaller 
chunks at the MDS layer, and each chunk is then *unicast* to a specific receiver; 
that process is repeated over the receiver list.

If the user then broadcasts another, smaller package that does not need to be 
fragmented, there is a possibility that the smaller package arrives at the 
destinations before the previous large one, since the two are transmitted over 
two different channels, unicast and broadcast; the transport layer (e.g. TIPC) 
does not guarantee ordering in that case.

More details:

At the MDS user, do:
1) broadcast a large msg1.
2) broadcast a smaller msg2.

Then, at MDS layer:
with msg1:
~~~
loop: Iterate over destinations having the same svc-id
                msg1 is fragmented into smaller ones, e.g. msg1 = msg11 + msg12 + msg13
                tipc unicast msg11 to a destination
                tipc unicast msg12 to a destination
                tipc unicast msg13 to a destination
end loop
~~~

with msg2:
`tipc mcast msg2 to all destinations having the same svc-id in one go.`
                
As the msg1x fragments are transferred with the unicast type, which is different 
from msg2 transferred with the mcast type, there is a chance that msg2 arrives 
at the destinations sooner than msg1.

The patch of this ticket basically does:

with msg1:
~~~
msg1 is fragmented into smaller ones, e.g. msg1 = msg11 + msg12 + msg13
tipc mcast msg11 to all adest in one go
tipc mcast msg12 to all adest in one go
tipc mcast msg13 to all adest in one go
~~~

with msg2:
`tipc mcast msg2 to all destinations having the same svc-id in one go.`

With that, all messages, including the fragmented ones, are transferred with the 
same mcast type; therefore the message order msg2->msg13->msg12->msg11 
(later->sooner) is guaranteed at the receiver sides.

To reproduce the issue, continuously broadcast a small package right after the 
large one in a large cluster.


---

Sent from sourceforge.net because opensaf-tickets@lists.sourceforge.net is subscribed to https://sourceforge.net/p/opensaf/tickets/
