I'm trying to create a relatively simple traffic shaping environment;
basically, we're a home network with three different classes of traffic:
1) High-priority, which means usually low bandwidth but very demanding
latency requirements. In English: online gaming. :)
2) Medium-priority, which includes most of what people think of as
normal Internet traffic. Web browsing, email, USENET, IRC, etc.
3) Low-priority, which includes bulk traffic like big file downloads.
Basically, the cake that I'm trying to have and eat it too is where we
can be running a bunch of stuff like BitTorrent clients to download new
Quake maps, and still be playing Battlefield 1942 without getting
hammered on by the P2P clients' data transfers and node-building
traffic.
This is what I have so far; it has made a definite improvement for
prio 1 traffic (the medium stuff, web browsing and such) but doesn't
seem to be enough; online gaming is still quite laggy while the file
transfers and such are active. At this point it seems that what I
basically need to do is tweak the values of the $UPSTREAM_* variables,
but I thought I might ask here first to see if there's an entire
design-level improvement to be made.
The basic idea is that medium traffic should be able to stomp on
low traffic (represented by the default case) when it needs
bandwidth/latency, and that high traffic should be able to stomp on
both medium and low when it needs bandwidth/latency...but the lower
classes can borrow bandwidth when the classes that outrank them aren't
using it.
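That stomp-but-borrow behavior is exactly what HTB's rate/ceil pair expresses: "rate" is the guarantee a class can always get under contention, "ceil" is how far it may borrow when its betters are idle. A throwaway sketch of the arithmetic, using the 9/5/1-of-15 kBps split from the script:

```sh
#! /bin/sh
# HTB guarantee/borrow arithmetic for a 15 kBps link split 9/5/1
# ("rate" = guarantee under contention, "ceil" = borrow limit).
TOTAL=15
HI=9
MED=5
LO=1
# When all three classes are saturated, each falls back to its rate:
echo "saturated: hi=${HI} med=${MED} lo=${LO} (sums to $(( HI + MED + LO )))"
# When hi and med go idle, lo may borrow up to its ceil -- the whole link:
echo "idle link: lo may burst to ${TOTAL} kBps"
```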
From reading the parts of the HOWTO that I could get my mind around, I
understand that only outbound traffic can be molded, so the script below
makes no attempt to do anything with inbound traffic.
In a tangentially-related question, I'm having some trouble determining
what number I should put for $UPSTREAM_TOTAL. I sort of arrived at 15
by trial and error -- but if anybody has any suggestions on ways to
empirically determine what your upload speed actually is, they would be
most welcome. :)
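One rough way to do better than trial and error: time a fixed-size upload to some machine you control and convert it to the kilobytes-per-second figure tc wants. This is just a sketch -- $REMOTE is a placeholder for any ssh-reachable host, and the measurement function isn't run here since it needs a live network:

```sh
#! /bin/sh
# Convert a byte count and elapsed seconds into the kilobytes/sec tc expects.
to_kbps() {
    echo $(( $1 / 1024 / $2 ))
}

# Push a fixed chunk of data over ssh and time it (sketch -- needs $REMOTE
# set to a host you have a shell on; not invoked here).
measure_up() {
    start=$(date +%s)
    dd if=/dev/zero bs=1024 count=1024 2> /dev/null | ssh "$REMOTE" 'cat > /dev/null'
    end=$(date +%s)
    to_kbps $(( 1024 * 1024 )) $(( end - start ))
}

# e.g. a 1 MB upload that takes 60 seconds:
echo "approx upstream: $(to_kbps 1048576 60) kBps"
```

Run it a few times at a quiet hour and take the best result; that, minus a little headroom, is a reasonable $UPSTREAM_TOTAL.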
Oh, one other thing...does u32 match ip [sd]port N match both TCP and
UDP port N, or just TCP? I'm wondering if that may be part of the
problem, since most online games use UDP for the client connections.
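For what it's worth, my understanding (an assumption worth checking against the HOWTO) is that u32's "match ip dport" just compares the two bytes where the destination port sits in the IP payload, so it hits TCP and UDP alike -- it isn't TCP-only. If you ever want to pin a filter to UDP explicitly, you can stack a protocol match (17 is UDP), e.g. for one of the game ports from the list below:

```sh
## Sketch: classify only UDP traffic to port 14567 into the high class.
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
    match ip protocol 17 0xff \
    match ip dport 14567 0xffff \
    flowid 1:10
```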
Thanks to anyone who takes a look; let me know if there's any more
information from our configuration/setup that would be helpful.
- cut here
#! /bin/sh
if [ "$1" = status ] ; then tc -s qdisc ls dev eth0 ; exit 0 ; fi
IP=/bin/ip
TC=/sbin/tc
IPT=/sbin/iptables
IFACE_NET=eth0
## These are numbers in kilobytes per second (tc's "kbps" unit is
## kilobytes/sec; "kbit" would be kilobits/sec)
UPSTREAM_TOTAL=15
## These next three should add up to _TOTAL
UPSTREAM_HI=9
UPSTREAM_MED=5
UPSTREAM_LO=1
## Interface Maximum Transmission Unit
MTU_NET=1500
PORTS_HI="21 22 23 53 123 5190 5191 5192 5193 5222 5269 8767 14567 14568 14690"
PORTS_MED="20 25 80 110 113 119 143 443 6667"
###
## Delete old rules
${TC} qdisc del dev ${IFACE_NET} root 2> /dev/null
## Set MTU
${IP} link set dev ${IFACE_NET} mtu ${MTU_NET}
## Set queue size
${IP} link set dev ${IFACE_NET} qlen 2
## Create root queue discipline
${TC} qdisc add dev ${IFACE_NET} root handle 1:0 htb default 12
## Create root class
${TC} class add dev ${IFACE_NET} parent 1:0 classid 1:1 htb rate ${UPSTREAM_TOTAL}kbps
## Create leaf classes where packets will actually be classified
${TC} class add dev ${IFACE_NET} parent 1:1 classid 1:10 htb prio 0 \
    rate ${UPSTREAM_HI}kbps ceil ${UPSTREAM_TOTAL}kbps
${TC} class add dev ${IFACE_NET} parent 1:1 classid 1:11 htb prio 1 \
    rate ${UPSTREAM_MED}kbps ceil ${UPSTREAM_TOTAL}kbps
${TC} class add dev ${IFACE_NET} parent 1:1 classid 1:12 htb prio 2 \
    rate ${UPSTREAM_LO}kbps ceil ${UPSTREAM_TOTAL}kbps
## Add SFQ qdiscs beneath these classes
${TC} qdisc add dev ${IFACE_NET} parent 1:10 handle 10: sfq perturb 10
${TC} qdisc add dev ${IFACE_NET} parent 1:11 handle 11: sfq perturb 10
${TC} qdisc add dev ${IFACE_NET} parent 1:12 handle 12: sfq perturb 10
## Add the filters which direct traffic to the right classes
## High-priority traffic
${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 1 u32 \
    match ip protocol 1 0xff flowid 1:10 ## ICMP
for PORT in ${PORTS_HI}; do
    ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 1 u32 \
        match ip dport ${PORT} 0xffff flowid 1:10
    ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 1 u32 \
        match ip sport ${PORT} 0xffff flowid 1:10
done
## Normal traffic
for PORT in ${PORTS_MED}; do
    ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 2 u32 \
        match ip dport ${PORT} 0xffff flowid 1:11
    ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 2 u32 \
        match ip sport ${PORT} 0xffff flowid 1:11
done
## Bulk traffic is anything not already classified, so comment this line
## out as it's redundant and anyway it generates an error I don't feel
## like debugging yet :)
#${TC} filter add dev ${IFACE_NET}
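## For reference, if an explicit catch-all were ever wanted, something like
## this (a sketch, untested -- redundant with "default 12" on the root qdisc)
## should be equivalent:
#${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 3 u32 \
#    match ip dst 0.0.0.0/0 flowid 1:12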