Ironically, in the UK my cheap ISP, Plusnet, used to do QoS for free.
The ASA (Advertising Standards Authority) decreed that ISPs that mark
traffic can't claim "totally unlimited" in ads - so they turned it off.
You can now pay more to opt into something similar.
It could be of course that there
Andy Furniss wrote:
Been running a cake cobalt build from April for some time using
tc qdisc add dev ppp0 handle 1:0 root cake bandwidth 19690kbit raw
overhead 34 diffserv4 dual-srchost nat rtt 200ms
tc -s qdisc ls dev ppp0 (ppp0 is pppoe)
qdisc cake 1: root refcnt 2 bandwidth 19690Kbit diffserv4 dual-srchost
nat
Dave Taht wrote:
It is my hope to get the cake qdisc into the Linux kernel in the next
release cycle. We could definitely use more testers! The version we
have here will compile against almost any kernel on any platform,
dating back as far as 3.10, and has been integrated into the
sqm-scripts
Mark Captur wrote:
I am using cake on the latest LEDE nightly. I'm using diffserv4, which creates
4 tins: bulk, best effort, video and voice.
Is there a way to change DSCP markings on incoming traffic to place it in
the video tin? More specifically, I would like all incoming traffic with
source port
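One common approach to the question above (a sketch only - the port number 27015 is a hypothetical placeholder, and it assumes iptables is available on the LEDE build) is to remark DSCP in the mangle table so that cake's diffserv4 mode sorts the traffic into its Video tin:

```shell
# Remark incoming traffic from a given source port (27015 is a
# placeholder - substitute the real port) with DSCP class CS4,
# which cake's diffserv4 mode places in the Video tin.
iptables -t mangle -A PREROUTING -i ppp0 -p udp --sport 27015 \
    -j DSCP --set-dscp-class CS4
```

For this to matter, cake has to see the packet after the remark, which is why the rule sits in PREROUTING rather than a later chain.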
Jim Gettys wrote:
On Thu, May 4, 2017 at 6:22 AM, Andy Furniss <adf.li...@gmail.com> wrote:
Jim Gettys wrote:
On Wed, May 3, 2017 at 5:50 AM, Andy Furniss <adf.li...@gmail.com>
wrote:
Andy Furniss wrote:
Andy Furniss wrote:
b) it reacts to increase in RTT. An experiment
Jim Gettys wrote:
On Wed, May 3, 2017 at 5:50 AM, Andy Furniss <adf.li...@gmail.com>
wrote:
Andy Furniss wrote:
Andy Furniss wrote:
b) it reacts to increase in RTT. An experiment with 10 Mbps
bottleneck,
40 ms RTT and a typical 1000 packet buffer, increase in RTT
with BBR is ~3 ms
Pete Heist wrote:
Another option for ISPs (failing AQM support in the devices, and
instead of deploying devices on the customer side), could be to
provide each customer a queue that’s tuned to their link rate. There
could be an HTB tree with classes for each customer and Cake at the
leaves.
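A minimal sketch of that HTB-plus-cake idea (device name, rates, class IDs and customer addresses are all hypothetical, not from the thread):

```shell
# Root HTB with one class per customer; cake as each leaf qdisc.
tc qdisc add dev eth0 root handle 1: htb default 99
# Customer 1: 20 Mbit link
tc class add dev eth0 parent 1: classid 1:10 htb rate 20mbit
tc qdisc add dev eth0 parent 1:10 cake bandwidth 20mbit besteffort
# Customer 2: 50 Mbit link
tc class add dev eth0 parent 1: classid 1:20 htb rate 50mbit
tc qdisc add dev eth0 parent 1:20 cake bandwidth 50mbit besteffort
# Steer each customer's traffic into their class, e.g. by destination IP:
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dst 198.51.100.10/32 flowid 1:10
```

Whether the leaf cake instances should also shape (as here) or be left unlimited and let HTB do the rate control is a tuning choice.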
Andy Furniss wrote:
Andy Furniss wrote:
b) it reacts to increase in RTT. An experiment with 10 Mbps
bottleneck, 40 ms RTT and a typical 1000 packet buffer, increase
in RTT with BBR is ~3 ms while with cubic it is over 1000 ms.
That is a nice aspect (though at 60mbit hfsc + 80ms bfifo I
Jonathan Morton wrote:
On 1 May, 2017, at 16:03, Andy Furniss <adf.li...@gmail.com>
wrote:
Does google/valve/every bbr user run it?
By definition, yes. Without sch_fq, there is no pacing available for
BBR to use, and pacing is the mode it normally tries to operate in.
Without pacin
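For reference, the pacing-capable qdisc in question is sch_fq, which can be made the default or attached per-interface (a sketch; the device name is an example):

```shell
# Make fq the default qdisc for newly created interfaces...
sysctl -w net.core.default_qdisc=fq
# ...or attach it explicitly to one device:
tc qdisc replace dev eth0 root fq
```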
Jonathan Morton wrote:
On 1 May, 2017, at 14:32, Andy Furniss <adf.li...@gmail.com>
wrote:
Well it seems distance is important for BBR. It seems to have a
design whereby your rtt to the server determines how badly it will
bork your latency. Unlike cubic it doesn't take loss/ecn as
Dendari Marini wrote:
What's your RTT(ping) to the different services, like Steam and
Windows Update? Some ISPs have local CDNs that can give incredibly
low latency relative to the provisioned bandwidth, which can cause
bad things to happen with TCP.
I tried Battle.net and Steam (manually
Jonathan Morton wrote:
On 29 Apr, 2017, at 18:11, Andy Furniss <adf.li...@gmail.com>
wrote:
With the ingress param shaping at 1mbit 5 tcps (cubic or bbr)
really destroys latency.
With the caveat that my test may be flawed, I am currently
suspecting that cake cobalt head + ingress
Andy Furniss wrote:
Andy Furniss wrote:
b) it reacts to increase in RTT. An experiment with 10 Mbps
bottleneck, 40 ms RTT and a typical 1000 packet buffer, increase in
RTT with BBR is ~3 ms while with cubic it is over 1000 ms.
That is a nice aspect (though at 60mbit hfsc + 80ms bfifo I
Andy Furniss wrote:
OK so ECN may cure it - but people may not know how, or may not want it on.
So ECN doesn't really help with upstream bandwidth issues, as it still
does 1 ack per packet when marked - though they are at least no
longer like sacks
Andy Furniss wrote:
Andy Furniss wrote:
My understanding is again that on pppoe devices the kernel adds
zero bytes automatically, and attaching the ifb does not seem to
change that?
The packet size is ip length as seen by cake on ifb redirected
from pppoe - but this time it seems
Andy Furniss wrote:
My understanding is again that on pppoe devices the kernel adds zero
bytes automatically, and attaching the ifb does not seem to change
that?
The packet size is ip length as seen by cake on ifb redirected from
pppoe - but this time it seems the difference is 14 not 22
Sebastian Moeller wrote:
Hi Andy,
On Apr 25, 2017, at 14:58, Andy Furniss <adf.li...@gmail.com>
wrote:
Dendari Marini wrote:
Also I have done some more testing, I was able to limit Steam
connections just to one thanks to some console commands
("@cMaxContentServe
Dendari Marini wrote:
On 25 April 2017 at 21:10, Jonathan Morton
wrote:
You may see some improvement from wholesale reducing the inbound
bandwidth, to say 10Mbit. This is especially true given the high
asymmetry of your connection, which might require dropped acks
Dendari Marini wrote:
Also I have done some more testing, I was able to limit Steam
connections just to one thanks to some console commands
("@cMaxContentServersToRequest" and
"@cCSClientMaxNumSocketsPerHost") and while the situation improved
(no more packet loss, latency variation within
Dendari Marini wrote:
FWIW here's a quick example on ingress ppp that I tested using
connmark, the connmarks (1, 2, or unmarked) being set by iptables
rules on outbound connections/traffic classes.
Unfortunately I'm really not sure how to apply those settings to my
case, it's something I've
Jonathan Morton wrote:
So please add “atm overhead 32" to cake on eth0, or “atm overhead
40” to cake instances on pppoe (these packets do not have the
PPPoE header added yet and hence appear 8 bytes too small).
Thanks for your help, will definitely use them. Just wondering if I
use
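The overhead suggestion above corresponds to commands along these lines (device names and the bandwidth figure are examples, not taken from the thread):

```shell
# On the ethernet side the 8-byte PPPoE header is already on the packet:
tc qdisc add dev eth0 root cake bandwidth 1900kbit atm overhead 32
# On the ppp device the kernel has not added it yet, so compensate
# with 8 extra bytes of overhead:
tc qdisc add dev ppp0 root cake bandwidth 1900kbit atm overhead 40
```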
Dendari Marini wrote:
Hello, thanks for your reply.
Looks like most of your options are okay, including the correct “dual”
modes and “ingress” mode in the right place. However, I think you need to
adjust your bandwidth and overhead settings, otherwise Cake isn’t reliably
in control of the
Pete Heist wrote:
Cake is not a requirement yet. I like it for several of its
attributes (good performance with high numbers of flows, and also
when “over-limiting”, which I’ll explain more in my next round of
point-to-point WiFi results).
Would it be nicer for your users though?
I mean in the
Jonathan Morton wrote:
Also, Cake’s general philosophy of simplifying configuration means that it’s
unlikely to ever support “lists” or “tables” of explicit parameters. This is a
conscious design decision to enable its use by relative non-experts. Arguably,
even some of the existing options
Andy Furniss wrote:
Pete Heist wrote:
Hi, I built the latest cake source from:
https://github.com/dtaht/sch_cake.git
and iproute2 source from:
git://kau.toke.dk/cake/iproute2/
but there’s still an issue with the diffserv keyword when doing ’tc
Jonathan Morton wrote:
Okay, I think I’ve worked out what is happening.
At 250KB/s, it takes 6ms to get one 1500-byte bulk packet down the
pipe. This is unavoidable, so having a bulk flow competing with your
game traffic will always increase your peak latency by that much.
With three
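The serialization-delay arithmetic above checks out directly: at 250 KB/s, one 1500-byte packet occupies the link for 1500 / 250000 s = 6 ms.

```shell
# Serialization delay of one 1500-byte packet at 250 KB/s, in ms.
awk 'BEGIN { printf "%.1f ms\n", 1500 / 250000 * 1000 }'
# -> 6.0 ms
```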
In the UK quite a lot of people have a 40/2 vdsl2 product.
Thankfully not me, ugh, it doesn't even have enough bandwidth for
sack per incoming in recovery - but "pretending" I wanted to see what
cake was like.
tc qdisc add dev enp6s0 handle 1:0 root cake bandwidth 1969230bit
overhead 34
Andy Furniss wrote:
Next test = use vanilla git iproute2, even worse = Oops. So may be
best to avoid that one for now :-).
I managed to avoid the Oops by updating iptables from 1.6.0 to 1.6.1,
which is handy, though now it fails with an error from iptables - but
at least it fails without taking
Benjamin Cronce wrote:
I have not sampled YouTube data in a while, but the last time I looked it
had packet-pacing issues, with TCP going from idle to full several times a
second. Not only do you get the issue that TCP will front-load the entire
TCP window all at once, but if the data being
I am well rusty with Linux QoS and have never tried dsmark before.
I am likely doing something stupid here :-)
So the test: I want to set dsmark on ingress traffic so I can control
which cake tin it goes to - the test is just marking icmp as EF.
The ingress qdisc is added to ppp0 and redirected to ifb0
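The ingress-to-ifb redirect described above usually looks something like the following (a sketch of the redirect only, without the dsmark part):

```shell
# Create and bring up the ifb device, then redirect everything
# arriving on ppp0 through it, where a classful qdisc can be attached.
ip link add ifb0 type ifb 2>/dev/null || true
ip link set ifb0 up
tc qdisc add dev ppp0 handle ffff: ingress
tc filter add dev ppp0 parent ffff: protocol all u32 \
    match u32 0 0 action mirred egress redirect dev ifb0
```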