-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hello,
The wondershaper did not work quite the way I wanted it to; the CBQ wondershaper
even gave better results than the HTB wondershaper. So I assembled my own little
traffic-shaper script.
It already does much of the dirty work and successfully polices bulk traffic
down so that it cannot crowd out interactive traffic.
I have a 128 kbit/s uplink and a 768 kbit/s downlink. I do not have to care
much about downloads, as the downlink is fast enough. The more serious problem
arises when the uplink is saturated, for example by a larger upload.
My demands are very high in these situations: it may well be that I am
playing Elite Force, a game using the Quake 3 engine, which needs low latencies.
I am using the HTB qdisc.
Generally, I have set up four classes:
1. interactive traffic, which has some bandwidth guaranteed;
2. ACK packets, which get their share of the uplink to keep downloads fast
   while a big upload is running :);
3. outgoing web requests, like HTTP requests to pages, with little bandwidth
   and priority;
4. the default class.
Any traffic not matched by a filter goes into the default class, which by
default has almost no rate but may ceil up to 15 kbyte/s, the maximum allowed
for the link. The same ceiling applies to all classes.
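As an aside, for the ACK class one common way to pick out bare TCP ACKs is the u32 match from the LARTC HOWTO / wondershaper; a sketch, where the classid 1:11 is my own illustrative choice, not necessarily what my script uses:

```shell
# Send bare TCP ACKs to a hypothetical ACK class 1:11.
# Matches: IP protocol 6 (TCP); IP header length 5 words (no options);
# total packet length < 64 bytes; TCP flags byte (offset 33) == ACK only.
tc filter add dev ppp0 parent 1: protocol ip prio 10 u32 \
    match ip protocol 6 0xff \
    match u8 0x05 0x0f at 0 \
    match u16 0x0000 0xffc0 at 2 \
    match u8 0x10 0xff at 33 \
    flowid 1:11
```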
This principle works quite well: I have managed to get the latency for the
interactive class well below 150 ms, where it would be > 2000 ms without the
traffic shaper. I already regard this as quite an accomplishment, though it is
not yet good enough for a gamer. I need stable pings, and right now they
oscillate between roughly 60 and 150 ms. To demonstrate what I mean, here is
some ping output:
64 bytes from 2int.de (217.160.128.207): icmp_seq=347 ttl=56 time=128 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=348 ttl=56 time=136 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=349 ttl=56 time=133 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=350 ttl=56 time=143 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=351 ttl=56 time=60.4 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=352 ttl=56 time=63.1 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=353 ttl=56 time=60.9 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=354 ttl=56 time=60.7 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=355 ttl=56 time=65.4 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=356 ttl=56 time=64.5 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=357 ttl=56 time=64.3 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=358 ttl=56 time=72.0 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=359 ttl=56 time=82.1 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=360 ttl=56 time=99.6 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=361 ttl=56 time=99.3 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=362 ttl=56 time=107 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=363 ttl=56 time=127 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=364 ttl=56 time=128 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=365 ttl=56 time=136 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=366 ttl=56 time=136 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=367 ttl=56 time=59.2 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=368 ttl=56 time=61.0 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=369 ttl=56 time=63.3 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=370 ttl=56 time=62.0 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=371 ttl=56 time=91.2 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=372 ttl=56 time=90.1 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=373 ttl=56 time=87.2 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=374 ttl=56 time=86.7 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=375 ttl=56 time=86.5 ms
You get the idea: at first it is nice, around 60 ms (this is the default ping
without any upload), but with an upload and the traffic shaper running, the
ping climbs to about 140 ms, after a while drops back to 60 ms, climbs to
140 ms again, and so forth.
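For what it's worth, the size of those jumps is close to the serialization delay of one full-size packet on the uplink; a quick back-of-the-envelope check (the 1500-byte MTU is my assumption for ppp0, the link speed is as stated above):

```shell
#!/bin/sh
# Time to serialize one full-size packet on the 128 kbit/s uplink.
MTU=1500          # bytes; typical MTU, an assumption for ppp0
UPLINK=128000     # bits per second
# delay in ms = bytes * 8 bits/byte * 1000 ms/s / (bits/s)
echo $(( MTU * 8 * 1000 / UPLINK ))   # prints 93
```

One bulk packet already in flight ahead of the game packet would add roughly this much delay, which matches the jump from ~60 ms to ~150 ms.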
Does anyone of you have an idea how I can minimize this effect, and let pings
be stable at 60 ms? stable 80ms delay are okay for me too, no question.
If I let the lowest-priority bulk class ceil up to only 10 kbyte/s, I get the
same effect; only when that class's ceil is put below 6 kbyte/s does this
oscillating-ping effect go away.
Here is my script, in case you are interested in looking at it.
#!/bin/bash
DEV=ppp0
# delete any qdiscs or rule sets created so far.
tc qdisc del dev $DEV root 2> /dev/null > /dev/null
# tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null
# create the root qdisc
tc qdisc add dev $DEV root handle 1: htb default 13
# install a root class, so that the child classes can borrow from each other.
tc class add dev $DEV parent 1: classid 1:1 htb rate 15kbps ceil 15kbps
# now install 4 sub classes for different priorities
# highest priority for low
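The script is cut off at this point; purely as an illustration of the four-class structure described above, the leaf classes might look something like this (classids, rates and priorities are my own guesses, not the original script; note that tc's "kbps" means kilobytes per second):

```shell
#!/bin/bash
DEV=ppp0
# Four hypothetical leaf classes under root class 1:1.
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 6kbps ceil 15kbps prio 0  # interactive / game
tc class add dev $DEV parent 1:1 classid 1:11 htb rate 4kbps ceil 15kbps prio 1  # TCP ACKs
tc class add dev $DEV parent 1:1 classid 1:12 htb rate 2kbps ceil 15kbps prio 2  # outgoing web requests
tc class add dev $DEV parent 1:1 classid 1:13 htb rate 1kbps ceil 15kbps prio 3  # default (bulk)
# SFQ on each leaf so flows inside one class share it fairly.
for h in 10 11 12 13; do
    tc qdisc add dev $DEV parent 1:$h handle ${h}: sfq perturb 10
done
```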