Hi Jonathan, 

all of this sounds great! One question inlined below…


> On Jun 27, 2016, at 05:56 , Jonathan Morton <chromati...@gmail.com> wrote:
> 
> 
>> On 4 Jun, 2016, at 22:55, Jonathan Morton <chromati...@gmail.com> wrote:
>> 
>> COBALT should turn out to be a reasonable antidote to sender-side cheating, 
>> due to the way BLUE works; the drop probability remains steady until the 
>> queue has completely emptied, and then decays slowly.  Assuming the 
>> congestion-control response to packet drops is normal, BLUE should find a 
>> stable operating point where the queue is kept partly full on average.  The 
>> resulting packet loss will be higher than for a dumb FIFO or a naive ECN 
>> AQM, but lower than for a loss-based AQM with a tight sojourn-time target.
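(For concreteness, here is a minimal sketch of the BLUE behaviour described above — not COBALT's actual code, just the probability dynamics as described: the drop probability holds steady while any backlog remains and only decays once the queue has completely emptied. The increment/decrement constants are illustrative assumptions, not cake's values.)

```python
class BlueSketch:
    """Toy model of BLUE-style drop-probability dynamics (illustrative only)."""

    def __init__(self, increment=0.0025, decrement=0.00025):
        self.p = 0.0                 # current drop probability
        self.increment = increment   # applied on each congestion event
        self.decrement = decrement   # applied only once the queue has drained

    def on_congestion(self):
        # Queue overflow / persistent congestion: raise drop probability.
        self.p = min(1.0, self.p + self.increment)

    def on_dequeue(self, backlog):
        if backlog == 0:
            # Queue completely emptied: decay slowly.
            self.p = max(0.0, self.p - self.decrement)
        # Otherwise p holds steady, which is what resists sender-side cheating.
```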
>> 
>> For this reason, I’m putting off drafting such an explanation to Valve until 
>> I have a chance to evaluate COBALT’s performance against the faulty traffic.
> 
> The COBALTified Cake is now working quite nicely, after I located and excised 
> some annoying lockup bugs.  As a side-effect of these fixes (which introduced 
> a third, lightly-serviced flowchain for “decaying flows”, which are counted 
> as “sparse” in the stats report), the sparse and bulk flow counts should be 
> somewhat less jittery and more useful.
> 
> I replaced the defunct “last_len” stat with a new “un_flows”, meaning 
> “unresponsive flows”, to indicate when the BLUE part of COBALT is active.  
> This lights up nicely when passing Steam traffic, which no longer has 
> anywhere near as detrimental an effect on my Internet connection as it did 
> with only Codel; this indicates that BLUE’s ECN-blind dropping is 
> successfully keeping the upstream queue empty.  (Of course it wouldn’t help 
> against a UDP flood, but nothing can do that in this topology.)
> 
> While working on this, I also noticed that the triple-isolation logic is 
> probably quite CPU-intensive.

        Does this also affect the dual[src|dst]host isolation options? How do 
you test these options internally? (I am trying to solicit testers from the 
openwrt forum, but they are hard to come by and understandably only want to 
spend limited time on testing, so the results so far are tentative at best.)
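For reference, this is roughly how I would exercise the isolation modes from the command line (a sketch assuming the keyword spellings `dual-srchost`, `dual-dsthost` and `triple-isolate` from the cake development tree; bandwidths and device names are placeholders):

```shell
# Egress side: fairness first by source host, then per flow within each host.
tc qdisc replace dev eth0 root cake bandwidth 20Mbit dual-srchost

# Ingress side (e.g. redirected onto an IFB): fairness by destination host.
tc qdisc replace dev ifb0 root cake bandwidth 100Mbit dual-dsthost

# Full host+flow isolation in both directions — the mode noted above
# as probably quite CPU-intensive.
tc qdisc replace dev eth0 root cake bandwidth 20Mbit triple-isolate
```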

>  It should be feasible to do better, so I’ll have a go at that soon.  Also on 
> the to-do list is enhancing the overhead logic with new data,

        Could I ask nicely, again, that you add something to the keywords that 
will easily signify whether a keyword has a side-effect on the ATM 
encapsulation, please? 
        A lot of our users on openwrt/lede only see the output of “tc qdisc add 
cake help” at best, and the different scopes of the keywords are simply not 
easy to understand from that. (The “scope” of the keywords could be made 
clearer, for example, either by a pre-/suffix to the keyword names, as in 
pppoe-ptm, or by using two-word configurations like “adsl-overhead 
pppoe-vcmux”. I admit that both are less visually pleasing and concise than 
the existing keywords, but they would be clearer to our users.)
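For example, to illustrate what I mean (the second form is only my proposal, not any implemented syntax):

```shell
# Current style: a single keyword implies ATM cell framing as a side-effect,
# which is not obvious from the keyword name alone.
tc qdisc add dev eth0 root cake bandwidth 2Mbit pppoe-vcmux

# Proposed style: the encapsulation scope is spelled out explicitly
# (hypothetical two-word syntax, shown purely for illustration):
# tc qdisc add dev eth0 root cake bandwidth 2Mbit adsl-overhead pppoe-vcmux
```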



> and adding a three-class Diffserv mode which Dave has wanted for a while.
> 
> I’ve also come up with a tentative experimental setup to test the “85% rule” 
> more robustly than the Chinese paper found recently.  I should be able to do 
> it with just three hosts, one having dual NICs, and using only Cake and netem 
> qdiscs.
> 
> Now if only the sauna were not the *coolest* part of my residence right now…

        You are in Finland? I envy you for your nice long days…

Best Regards
        Sebastian

> 
> - Jonathan Morton
> 
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
