Hi Alan,

On Jun 11, 2015, at 03:05 , Alan Jenkins <alan.christopher.jenk...@gmail.com> 
wrote:

> On 10/06/15 21:54, Sebastian Moeller wrote:
>> Hi Dave,
>> 
>> 
>> On Jun 10, 2015, at 21:53 , Dave Taht <dave.t...@gmail.com> wrote:
>> 
>>> http://dl.ifip.org/db/conf/networking/networking2015/1570064417.pdf
>>> 
>>> gargoyle's qos system follows a similar approach, using htb + sfq, and
>>> a short ttl udp flow.
>>> 
>>> Doing this sort of measurement, then floating the rate control with
>>> "cake", would be fairly easy (although it tends to be a bit more
>>> compute intensive, not being on a fast path)
>>> 
>>> What is sort of missing here is trying to figure out which side of the
>>> link is the bottleneck (up or down).
>>      Yeah, they rely on having a reliable packet reflector upstream of the 
>> “bottleneck” so they get their timestamped probe packets returned.
> 
> They copy & frob real IP headers.  They don't _say_ how the reflection works, 
> but I guess low TTL -> ICMP TTL exceeded, like traceroute.  Then I read that 
> Gargoyle also uses ICMP TTL exceeded, so I thought my guess was quite educated 
> 8).

        Daniel elucidated their magic packets: they create self-addressed IP 
packets at the simulated CPE and inject them into the simulated cable link; the 
other end passes the data up through its stack, and once the 
sender-self-addressed packet reaches the IP layer of the simulated CMTS it gets 
sent back, since that IP layer sees the CPE’s IP address as the destination 
address.
        @Daniel, this trick can only work if a) the magic packets travel just 
one IP hop, since the first upstream IP layer will effectively bounce them back 
(so in the DOCSIS case the injector needs to be the cable modem), and b) the 
CPE actually has an IP address that is reachable from the outside and known to 
whoever sets up your AQM. Is that correct? And how does this work if the CPE 
acts as an ethernet bridge without an external IP?
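        If I picture the mechanism correctly, the probe construction boils down 
to something like this (a scapy-flavoured sketch of my own, not the paper's 
code; every address, port, and interface name is a made-up assumption):

    from scapy.all import IP, UDP, send
    import struct, time

    CPE_IP = "192.0.2.10"          # assumed externally visible CPE address

    def send_magic_probe():
        # src == dst == the CPE's own address; the first upstream IP layer
        # (the CMTS in the DOCSIS case) should route the packet straight back
        payload = struct.pack("!d", time.time())        # 8-byte timestamp
        pkt = IP(src=CPE_IP, dst=CPE_IP) / UDP(sport=7777, dport=7777) / payload
        send(pkt, iface="eth0")    # inject towards the cable link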

> 
> Note the size of the timestamp, a generous 8 bytes.  It "just happens" that 
> ICMP responses are required to include the first 8 bytes of the IP payload 8).
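        Indeed, and that quotation rule makes the traceroute-style variant 
quite compact. A toy version of what I think the Gargoyle-like probing amounts 
to (again my own sketch, nothing from the paper; the target address and the 
experimental protocol number are arbitrary assumptions):

    from scapy.all import IP, ICMP, Raw, sr1
    import struct, time

    def probe_rtt(dst="198.51.100.1", ttl=1):
        t0 = struct.pack("!d", time.time())     # the generous 8-byte timestamp
        # an experimental protocol number keeps the timestamp in the first
        # 8 payload bytes that RFC 792 obliges the router to quote back
        reply = sr1(IP(dst=dst, ttl=ttl, proto=253) / Raw(t0),
                    timeout=1, verbose=0)
        if reply and reply.haslayer(ICMP) and reply[ICMP].type == 11:
            inner = bytes(reply[ICMP].payload)  # quoted IP header + 8 bytes
            off = (inner[0] & 0x0f) * 4         # skip the quoted IP header
            sent, = struct.unpack("!d", inner[off:off + 8])
            return time.time() - sent           # RTT to the TTL-limited hop
        return None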
> 
>>  In the paper they used either uplink or downlink traffic, so figuring out 
>> where the bottleneck sat was easy; at least this is how I interpret 
>> “Experiments were performed in the upload (data flowing from the users to 
>> the CDNs) as well as in the download direction.” from glossing over the 
>> paper.
> 
> Ow!  I hadn't noticed that.  You could reduce both rates proportionally but 
> the effect is inelegant.

        I think that is what they do; as long as one only measures 
uni-directional saturating traffic this approach will work fine, since the 
bandwidth loss in the opposite direction simply does not materialize.
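        Something like this toy update rule, I imagine (pure illustration, not 
their algorithm; the 0.5 floor and the 20 ms target are numbers I made up):

    def adjust_rates(up_kbit, down_kbit, delay_ms, target_ms=20.0):
        # without knowing which direction is bloated, back off both
        # shapers proportionally once the measured delay exceeds target
        if delay_ms > target_ms:
            factor = max(0.5, target_ms / delay_ms)
            up_kbit *= factor
            down_kbit *= factor    # the innocent direction pays as well
        return up_kbit, down_kbit

That last line is exactly the inelegance: the direction that is not congested 
loses bandwidth for nothing.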

>  I wonder what Gargoyle does...
> 
> 2012 gargoyle developer comment says "There are no settings for active 
> congestion control on the uplink side. ACC concentrates on the download side 
> only."
> 
> Random blog post points out this is sufficient to fix prioritization vs. 
> bufferbloat: "In upstream direction this is not a big problem because your 
> router can still prioritize which packet should be sent first".  (Yay, I get 
> to classify every application I care about /s and still get killed by uploads 
> in http).

        I am not convinced that this is entirely sane, as in cable systems the 
upstream bandwidth can fluctuate significantly depending on how many people are 
active. Actually, scratch the “cable”, since most customer links have a shared, 
oversubscribed segment somewhere between the CPE and the internet that will 
make static bandwidth shaping misbehave some of the time. A good ISP just 
manages the oversubscription well enough that this issue only occurs 
transiently… (I hope).


> 
> One solution would be if ISPs made sure upload is 100% provisioned. Could be 
> cheaper than for (the higher rate) download.

        Not going to happen, in my opinion, as it is economically unfeasible 
for a publicly traded ISP. I would settle for that approach as long as the ISP 
is willing to fix its provisioning so that oversubscription episodes are 
reasonably rare, though.

> 
>>      Nice paper, but really not a full solution either. Unless the ISPs 
>> cooperate in supplying stable reflectors powerful enough to support all 
>> downstream customers.
> 
> I think that's a valid concern.  Is "TTL Exceeded" rate-limited like Echo 
> (because it may be generated outside the highest-speed forwarding path?), and 
> would this work as tested if everyone did it?

        I think Daniel agrees, and that is why they came up with the “magic” 
packet approach (which drags in its own set of challenges, as far as I can 
see).
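        Regarding the rate limiting: at least Linux does limit the ICMP errors 
it generates, and time exceeded is in the default icmp_ratemask, so a reflector 
built on “TTL exceeded” can starve under load. Quick check (the paths are the 
real procfs knobs; the values are whatever your kernel is set to):

    with open("/proc/sys/net/ipv4/icmp_ratelimit") as f:
        print("icmp_ratelimit (ms between ICMP error packets):", f.read().strip())
    with open("/proc/sys/net/ipv4/icmp_ratemask") as f:
        print("icmp_ratemask (bitmask of rate-limited ICMP types):", f.read().strip())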

> 
>>  But if the ISPs cooperate, I would guess, they could eradicate downstream 
>> bufferbloat to begin with. Or the ISPs could have the reflector also add 
>> its own UTC timestamp, which would allow one to dissect the RTT into its 
>> constituent one-way delays and so detect the currently bloated direction. 
>> (Think ICMP type 13/14 message pairs “on steroids”, with higher resolution 
>> than milliseconds, though for bufferbloat detection ms resolution would 
>> probably be sufficient anyway.) Currently, I hear that ISP equipment will 
>> not treat ICMP requests with priority though.
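        To make that concrete: once the reflector adds its own clock, the 
dissection is trivial arithmetic (a sketch of mine, not any existing message 
format; with unsynchronized clocks the absolute values are off by the clock 
offset, but changes in each one-way delay still reveal which direction is 
bloating):

    def dissect_rtt(t_sent, t_reflector, t_received):
        owd_up = t_reflector - t_sent          # upstream one-way delay
        owd_down = t_received - t_reflector    # downstream one-way delay
        return owd_up, owd_down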
>>      Also I am confused about what they actually simulated: “The modems and 
>> CMTS were equipped with ASQM, CoDel and PIE,” and “However, the problem 
>> popularly called bufferbloat can move about among many queues some of which 
>> are resistant to traditional AQM such as Layer 2 MAC protocols used in 
>> cable/DSL links. We call this problem bufferbloat displacement.” seem to be 
>> slightly at odds. If modems and CMTS have decent AQMs, all they need to do 
>> is not stuff their sub-IP layer queues and be done with it. The way I 
>> understood the CableLabs PIE story, they intended to do exactly that, so at 
>> least the “bufferbloat displacement” remedy by ASQM reads a bit like a straw 
>> man argument. But as I am a) not from the CS field, and b) only glossed over 
>> the paper, most likely I am missing something important that is clearly in 
>> the paper...
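        (Tangent: “not stuffing the sub-IP layer queues” is pretty much what 
BQL already gives ethernet drivers on Linux; the sysfs path below is real, 
eth0/tx-0 being just an example:

    with open("/sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit") as f:
        print("current BQL limit (bytes):", f.read().strip())

something equivalent at the DOCSIS MAC layer is what one would wish for.)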
>> 
>> Best Regards
>>      Sebastian
> 
> I had your reaction about pie on the modem.
> 
> We could say there is likely room for improvement in any paper that claims 
> bufferbloat eliminated with a "target" parameter of 100ms :p.  Results don't 
> look that bad (why?) but I do see 25ms bloat vs. codel/pie.  It may be 
> inevitable but deserves not to be glossed over with comparisons to the 
> unrelated 100ms default parameter of codel, which in reality is the one 
> called "interval", not "target" :).  Good QM on the modem+cmts has got to be 
> the best solution.
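        (To make the nitpick concrete: on Linux, “tc qdisc add dev eth0 root 
codel target 5ms interval 100ms” just spells out the stock defaults, so the 
paper’s 100ms “target” is twenty times codel’s.)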

        I fully agree. I have a hunch that their method might be used to 
supplement DOCSIS 3.1 PIE so that the CPEs can also meaningfully measure and 
control downstream bufferbloat in addition to the upstream, without the need to 
fix the CMTSs. As far as I understand, CableLabs is quite proactive in trying 
to fix this in CPEs, while I have heard nothing about the CMTS manufacturers’ 
plans (I think the Arris paper was about CPEs, not CMTSs). Maybe CableLabs 
could be convinced to try this in addition to upstream PIE, as a solution that 
requires no CMTS involvement… (I simply assume that the CMTS does not need to 
cooperate; note, though, that the paper seems to rely entirely on simulated 
data, insofar as Linux PCs were used to model each of the network components. 
So “no real CMTS was harmed during the making of this paper”.)
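        Pure speculation on my side, but the CPE-side control loop I have in 
mind would be roughly this (re-using the toy probe_rtt() and adjust_rates() 
from my sketches above; set_ingress_rate() is a hand-waved stand-in for 
reconfiguring whatever ingress shaper one runs):

    import time

    def set_ingress_rate(kbit):
        # stub: in reality one would reconfigure e.g. an htb/cake
        # ingress shaper here
        print("would set ingress shaper to %.0f kbit/s" % kbit)

    def control_loop(up_kbit=2000.0, down_kbit=20000.0):
        while True:
            rtt = probe_rtt()          # seconds, or None on probe loss
            if rtt is not None:
                up_kbit, down_kbit = adjust_rates(up_kbit, down_kbit,
                                                  delay_ms=rtt * 1000.0)
                set_ingress_rate(down_kbit)
            time.sleep(1)              # probe once a second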

Best Regards
        Sebastian




> 
> Alan

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
