- Description has changed:

Diff:

~~~~

--- old
+++ new
@@ -2,4 +2,42 @@
 
 And if the user then broadcasts another, smaller package that does not need to be fragmented, there is a possibility that the smaller package will arrive at the destinations before the previous large one, as the two are transmitted over two different channels, unicast and broadcast; the transport layer (e.g. TIPC) does not guarantee ordering in that case.
 
+More details:
+
+At the MDS user, do:
+1) broadcast a large msg1
+2) broadcast a smaller msg2
+
+Then, at MDS layer:
+with msg1:
+~~~
+loop: iterate over destinations having the same svd-id
+    msg1 is fragmented into smaller ones, e.g. msg1 = msg11 + msg12 + msg13
+    tipc unicast msg11 to a destination
+    tipc unicast msg12 to a destination
+    tipc unicast msg13 to a destination
+end loop
+~~~
+
+with msg2:
+`tipc mcast msg2 to all destinations having the same svd-id in one go.`
+
+Because the msg1x fragments are transferred via unicast while msg2 is transferred via mcast, there is a chance that msg2 arrives at the destinations sooner than msg1.
+
+The patch for this ticket basically does the following:
+
+with msg1:
+~~~
+msg1 is fragmented into smaller ones, e.g. msg1 = msg11 + msg12 + msg13
+tipc mcast msg11 to all adests in one go
+tipc mcast msg12 to all adests in one go
+tipc mcast msg13 to all adests in one go
+~~~
+
+with msg2:
+`tipc mcast msg2 to all destinations having the same svd-id in one go.`
+
+With that, all messages, including the fragments, are transferred with the same mcast type; therefore the message order msg2 -> msg13 -> msg12 -> msg11 (later -> sooner) is guaranteed at the receiver side.
+
 To reproduce the issue, continuously broadcast a small package after a large one in a large cluster.

~~~~




---

** [tickets:#3033] mds: order is not guaranteed if broadcasting a large package**

**Status:** review
**Milestone:** 5.19.06
**Created:** Mon Apr 22, 2019 03:18 AM UTC by Vu Minh Nguyen
**Last Updated:** Wed Apr 24, 2019 07:19 AM UTC
**Owner:** Thuan


When *broadcasting a large package* that exceeds the maximum MDS direct buffer size (~65K bytes) to a list of receivers, the package is fragmented into smaller chunks at the MDS layer, and each chunk is then *unicast* to a specific receiver; that process is repeated over the receiver list.

If the user then broadcasts another, smaller package that does not need to be fragmented, there is a possibility that the smaller package will arrive at the destinations before the previous large one, as the two are transmitted over two different channels, unicast and broadcast; the transport layer (e.g. TIPC) does not guarantee ordering in that case.
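
For a rough sense of scale, here is a sketch of the fragment arithmetic, assuming a 65000-byte direct-buffer limit (the exact value is an assumption used only for illustration):

~~~
#include <stdio.h>

#define DIRECT_BUF_MAX 65000 /* assumed ~65K limit, for illustration only */

int main(void)
{
    size_t msg_len = 200 * 1024; /* a 200 KB broadcast */
    size_t nfrags = (msg_len + DIRECT_BUF_MAX - 1) / DIRECT_BUF_MAX;

    /* 204800 / 65000 rounds up to 4 chunks, so with the old scheme a
     * 100-receiver list turns one broadcast into 400 unicast sends. */
    printf("fragments per receiver: %zu\n", nfrags);
    return 0;
}
~~~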

More details:

At the MDS user, do:
1) broadcast a large msg1
2) broadcast a smaller msg2

Then, at the MDS layer:
with msg1:
~~~
loop: iterate over destinations having the same svd-id
    msg1 is fragmented into smaller ones, e.g. msg1 = msg11 + msg12 + msg13
    tipc unicast msg11 to a destination
    tipc unicast msg12 to a destination
    tipc unicast msg13 to a destination
end loop
~~~

with msg2:
`tipc mcast msg2 to all destinations having the same svd-id in one go.`
                
Because the msg1x fragments are transferred via unicast while msg2 is transferred via mcast, there is a chance that msg2 arrives at the destinations sooner than msg1.
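
To make the race concrete, here is a minimal sketch in C against the kernel TIPC socket API (linux/tipc.h). It is an illustration only, not the actual MDS code: the 65000-byte fragment limit, the helper names, and the missing MDS headers/sequence numbers are all simplifications.

~~~
#include <linux/tipc.h>
#include <string.h>
#include <sys/socket.h>

#define FRAG_MAX 65000 /* assumed ~65K MDS direct-buffer limit */

/* Pre-patch msg1 path: for each destination, split the large message and
 * unicast every chunk to that destination's TIPC socket address. */
void bcast_large_old(int sd, const struct tipc_socket_addr *dests,
                     size_t ndests, const char *msg, size_t len)
{
    struct sockaddr_tipc to;

    memset(&to, 0, sizeof(to));
    to.family = AF_TIPC;
    to.addrtype = TIPC_ADDR_ID; /* unicast to one specific port */

    for (size_t d = 0; d < ndests; d++) {
        to.addr.id = dests[d];
        for (size_t off = 0; off < len; off += FRAG_MAX) {
            size_t chunk = len - off > FRAG_MAX ? FRAG_MAX : len - off;
            sendto(sd, msg + off, chunk, 0,
                   (struct sockaddr *)&to, sizeof(to));
        }
    }
}

/* msg2 path: one multicast that reaches every socket bound to the
 * service range, delivered "in one go". */
void bcast_small(int sd, __u32 svc_type, __u32 lower, __u32 upper,
                 const char *msg, size_t len)
{
    struct sockaddr_tipc to;

    memset(&to, 0, sizeof(to));
    to.family = AF_TIPC;
    to.addrtype = TIPC_ADDR_MCAST; /* multicast to a service range */
    to.addr.nameseq.type = svc_type;
    to.addr.nameseq.lower = lower;
    to.addr.nameseq.upper = upper;

    sendto(sd, msg, len, 0, (struct sockaddr *)&to, sizeof(to));
}
~~~

TIPC only preserves ordering within a single delivery path, so a bcast_small() issued right after bcast_large_old() can overtake the still-queued unicast fragments.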

The patch for this ticket basically does the following:

with msg1:
~~~
msg1 is fragmented into smaller ones, e.g. msg1 = msg11 + msg12 + msg13
tipc mcast msg11 to all adests in one go
tipc mcast msg12 to all adests in one go
tipc mcast msg13 to all adests in one go
~~~

with msg2:
`tipc mcast msg2 to all destinations having the same svd-id in one go.`

With that, all messages, including the fragments, are transferred with the same mcast type; therefore the message order msg2 -> msg13 -> msg12 -> msg11 (later -> sooner) is guaranteed at the receiver side.
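
Under the same illustrative assumptions as the sketch above, the patched flow would look roughly like this: every fragment of the large message now takes the same multicast path that msg2 uses.

~~~
#include <linux/tipc.h>
#include <string.h>
#include <sys/socket.h>

#define FRAG_MAX 65000 /* assumed ~65K MDS direct-buffer limit */

/* Patched msg1 path: fragment once, then multicast each fragment to the
 * whole service range, exactly like the small-message path. */
void bcast_large_new(int sd, __u32 svc_type, __u32 lower, __u32 upper,
                     const char *msg, size_t len)
{
    struct sockaddr_tipc to;

    memset(&to, 0, sizeof(to));
    to.family = AF_TIPC;
    to.addrtype = TIPC_ADDR_MCAST; /* same mcast path as msg2 */
    to.addr.nameseq.type = svc_type;
    to.addr.nameseq.lower = lower;
    to.addr.nameseq.upper = upper;

    for (size_t off = 0; off < len; off += FRAG_MAX) {
        size_t chunk = len - off > FRAG_MAX ? FRAG_MAX : len - off;
        sendto(sd, msg + off, chunk, 0,
               (struct sockaddr *)&to, sizeof(to));
    }
}
~~~

With msg11, msg12, msg13 and msg2 all flowing through the one multicast channel, the send order is the receive order at every destination.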

To reproduce the issue, continuously broadcast a small package after a large one in a large cluster.
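
A reproduction driver could look roughly like the sketch below; mds_broadcast() is a hypothetical stand-in for the MDS user's broadcast call, not a real API.

~~~
#include <stddef.h>

/* Hypothetical stand-in for the MDS user's broadcast call; a real test
 * would go through the MDS send API with a broadcast send type. */
void mds_broadcast(const void *buf, size_t len)
{
    (void)buf;
    (void)len;
    /* ... hand the buffer to MDS here ... */
}

int main(void)
{
    static char large[200 * 1024]; /* > 65K, so it gets fragmented */
    static char small_msg[128];    /* fits in a single send */

    /* Continuously broadcast a large message immediately followed by a
     * small one; on an unpatched, large cluster the receivers should
     * occasionally see the small message before the large one. */
    for (;;) {
        mds_broadcast(large, sizeof(large));
        mds_broadcast(small_msg, sizeof(small_msg));
    }
    return 0;
}
~~~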

