> On Apr 26, 2018, at 09:26, Jonathan Morton <chromati...@gmail.com> wrote:
> 
>> Genuine question:  I have a superpacket circa 64K, this is a lump of data in 
>> a tcp flow.  I have another small VOIP packet, it’s latency sensitive.  If I 
>> split the super packet into individual 1.5K packets as they would be on the 
>> wire, I can insert my VOIP packet at a suitable place in time such that jitter 
>> targets are not exceeded.  If I don’t split the super packet, surely I have 
>> to wait till the end of the superpacket’s queue (for want of a better word) 
>> and possibly exceed my latency target.  That looks to me like ‘GSO/TSO’ is 
>> potentially bad for interflow latencies.
> 
>> What don’t I understand here?
> 
> You have it exactly right.  For some reason, Eric is failing to consider the 
> general case of flow-isolating queues at low link rates, and only considering 
> high-rate FIFOs.
> 
> More to the point, at 64Kbps a maximal GSO packet can take 8 seconds to 
> clear, while an MTU packet takes less than a quarter second!

        I venture a guess that it is not so much the concept but rather the 
terminology that irritates Eric? If anything, superpackets lighten the kernel's 
load and will in all likelihood make in-kernel processing faster, which, if 
anything, decreases latency. The effect on flow fairness that we are concerned 
about might simply not register under "latency" for the kernel developers; I am 
sure they see the issue, and this is really about which words to use to 
describe it... Also, I would point out that Eric did not object to the feature 
per se, but rather to its unconditionality and its rationale.

Now, I also have no better description than saying that each packet being 
transmitted causes HOL blocking for all other packets, and the larger the 
current packet, the longer the other packets need to wait; so inter-flow 
latencies are very much affected by GSO. I really liked your initial idea of 
basing the threshold for segmenting a superpacket on the duration that packet 
would hog the wire/shaper, as that gives an intuitive feel for the worst-case 
inter-flow latency it induces. In particular, on many links this would let 
intermediate-sized superpackets survive intact while turning the 64K 
"oil-tankers" into a fleet of speedboats ;). Such a temporal threshold would 
also handle the higher-bandwidth cases elegantly. What was the reason to rip 
that out again?

E.g.: if we allow for 1 ms of serialization delay:
at 1 Gbps a maximal superpacket needs 64000 * 8 / (1 * 1000 * 1000 * 1000) = 0.000512 s = 0.512 ms on the wire;
the rate at which it takes exactly 1 ms is X = (64000 * 8) / 0.001 = 512,000,000 bit/s = 512 Mbps.
So we would still be able to send unsegmented maximal-sized superpackets in <= 1 ms 
at all speeds >= 512 Mbps (noting that the existing 1 Gbps threshold effectively 
aims for ~0.5 ms serialization delay).
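
To make that concrete, here is a rough sketch of the time-based splitting test 
I have in mind, in plain C (the function names, the rate/budget parameters and 
the 1 ms figure are all made up for illustration; this is not the actual 
sch_cake code):

/* Illustrative only: split a GSO superpacket if its serialization time
 * at the shaper rate would exceed a per-packet delay budget. */

#include <stdint.h>
#include <stdbool.h>

#define NSEC_PER_SEC 1000000000ULL

/* Time on the wire for 'len' bytes at 'rate_bps' bits per second, in ns. */
static uint64_t serialization_ns(uint32_t len, uint64_t rate_bps)
{
	return ((uint64_t)len * 8 * NSEC_PER_SEC) / rate_bps;
}

/* Segment only if the superpacket would hog the wire/shaper for longer
 * than the budget (e.g. 1 ms = 1000000 ns). */
static bool should_segment(uint32_t gso_len, uint64_t rate_bps, uint64_t budget_ns)
{
	return serialization_ns(gso_len, rate_bps) > budget_ns;
}

/*
 * With a 1 ms budget and a 64000 byte superpacket:
 *   at 100 Mbps -> 5.12 ms  -> split into MTU-sized packets
 *   at 512 Mbps -> 1.00 ms  -> keep intact (the crossover computed above)
 *   at   1 Gbps -> 0.512 ms -> keep intact
 */

The nice property is that the same delay budget scales across link rates 
automatically, instead of hard-coding a single bandwidth threshold.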

Best Regards
        Sebastian
> 
> - Jonathan Morton
> 