Andy,

Thanks for your script. I've been looking at it a lot but still can't get it to work the way I need it to. The script runs without errors echoed to my ssh session, but it doesn't have the desired effect. The clients' downlink should be shaped to 128kbit, yet each of my two test clients happily downloads at much higher rates: client 1 gets 160kB/s and client 2 gets 250kB/s (yes, kilobytes, not kilobits). I have no idea why, and it's getting confusing. I know you mentioned that CPU timing wouldn't be reliable on dual-core CPUs (mine is dual core), but you'd expect the rate to be out by a factor of 2. Since 128kbit is 16kB/s, I'm out by roughly a factor of 10 for one client and 16 for the other.

When I actually went to the console of the box earlier, there was loads of output on it (kernel messages triggered while your script ran), each one saying that the quantum of a class was too small and that I should consider changing r2q. I don't really understand what this means.
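From what I can find in the HTB docs, quantum is the number of bytes a class may dequeue per round, computed as the class rate in bytes/sec divided by r2q (default 10), and HTB warns when it comes out below about 1000 bytes. The 28kbit classes in the script work out to 3500/10 = 350 bytes, which would explain the messages. If that's right, a couple of possible fixes would look like this (untested; interface and class names follow Andy's script below):

# either raise r2q when creating the root qdisc, so low-rate classes get a bigger quantum...
tc qdisc add dev eth0 root handle 1: htb r2q 2
# ...or set a quantum explicitly on an offending class (here the first host's bulk class)
tc class change dev eth0 parent 1:1 classid 1:b1 htb rate 28kbit ceil 128kbit prio 1 quantum 1500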

Any advice gratefully received (from anyone else on the list as well as Andy!)

Cheers,
Jonathan


Andy Furniss wrote:
I managed to have a play. CBQ doesn't seem too accurate - it let netperf get throughput of about 180kbit. HTB was OK, so I used that.

Below is what I tested. I wouldn't consider it finished, because it would probably be nicer to have SFQs on the bulk classes and something shorter on the interactive ones.

I don't know how much memory this does or could use. If you don't specify child qdiscs, HTB uses pfifos with a length taken from txqueuelen (1000 on eth), so that adds up to quite a bit. With window scaling on and a netperf running for each IP, I managed to backlog >200 packets on each.
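Something like this per host would cap the backlog and give the bulk class per-flow fairness (untested sketch; 1:a1 and 1:b1 are the first host's interactive and bulk classes from the script below):

# short fifo on interactive, SFQ on bulk, instead of the default 1000-packet pfifos
tc qdisc add dev eth0 parent 1:a1 handle 101: pfifo limit 10
tc qdisc add dev eth0 parent 1:b1 handle 102: sfq perturb 10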

Rather than policing you could, if you're using a recentish 2.6 kernel, use ifb and have the same setup on ingress of eth0 - or shape on the wan interface, if you don't do nat on the same box. If you do do nat and don't have ifb, then you need to use netfilter to mark by IP and match the marks.
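An untested sketch of both alternatives - eth1 as the wan device and the mark value are just placeholders:

# ifb: redirect eth0 ingress through ifb0, then build the same htb tree on ifb0
modprobe ifb
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# netfilter marks: mark in mangle PREROUTING, before nat rewrites the source
# address, then match the mark with an fw filter on the wan device
# (mark 2 = host 172.19.123.2, whose class in the script below is 1:1)
iptables -t mangle -A PREROUTING -s 172.19.123.2 -j MARK --set-mark 2
tc filter add dev eth1 parent 1: protocol ip prio 1 handle 2 fw flowid 1:1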

If the hosts are wireless, there may be other ways to make things better. Not that I have wireless myself, but if there is much packet loss, I've always thought it would be better to proxy the wan and use a different MTU/MSS for the wlan - maybe also use one of the tcp congestion controls that's less sensitive to random loss.
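Untested, and the device name, MSS value, and congestion control choice are all assumptions, but the knobs would look something like:

# clamp the MSS of tcp connections heading out a wlan-facing interface
iptables -t mangle -A FORWARD -o wlan0 -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --set-mss 1200
# on the proxy box, pick a loss-tolerant congestion control (if compiled in)
echo westwood > /proc/sys/net/ipv4/tcp_congestion_control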

It would be more elegant to use tc's u32 hashing, but I've not done that before - a rough idea of what it might look like is below. The filters are nested, so only the IP matches see up to all the traffic. I just matched tcp length <128 / not-tcp as interactive.
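The hashing sketch, untested, following the LARTC HOWTO's hashing-filters example (bucket numbers are hex; flowids would be the per-host classes the script creates, e.g. 172.19.123.2 -> 1:1):

# default table for this prio, then a 256-bucket hash table with handle 2:
tc filter add dev eth0 parent 1: prio 5 protocol ip u32
tc filter add dev eth0 parent 1: prio 5 handle 2: protocol ip u32 divisor 256
# hash the subnet on the last octet of the source address
tc filter add dev eth0 parent 1: prio 5 protocol ip u32 ht 800:: \
    match ip src 172.19.123.0/24 hashkey mask 0x000000ff at 12 link 2:
# one entry per host, placed in its bucket, e.g. host .2 -> bucket 2:2:
tc filter add dev eth0 parent 1: prio 5 protocol ip u32 ht 2:2: \
    match ip src 172.19.123.2 flowid 1:1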

If you want counters for filter hits:

tc -s filter ls dev eth0             # top level
tc -s filter ls dev eth0 parent 1:1  # the children

and for loads of htb data:

tc -s class ls dev eth0

Beware that the rates shown use a long average - it takes 100 sec for them to be right for me.

Andy

#!/bin/sh
#set -x

# Interface and per-client downlink rate (kbit)
LAN=eth0
DOWNLINK=128

# IP range in each subnet
LOW_IP=2
HIGH_IP=254

# Flush existing rules (errors harmlessly if no qdisc exists yet)
tc qdisc del dev $LAN root 2> /dev/null

tc qdisc add dev $LAN root handle 1: htb

# Counter used to number the per-host classes
total=0

# Apply rules for all included subnets
for i in `seq $LOW_IP $HIGH_IP`
do
  total=$((total+1))
  echo 172.19.123.$i

  # Per-host parent class capped at the downlink rate
  tc class add dev $LAN parent 1: classid 1:$total htb rate ${DOWNLINK}kbit
  # Interactive and bulk children, borrowing up to the host's cap
  tc class add dev $LAN parent 1:$total classid 1:a$total htb rate 100kbit ceil ${DOWNLINK}kbit prio 0
  tc class add dev $LAN parent 1:$total classid 1:b$total htb rate 28kbit ceil ${DOWNLINK}kbit prio 1
  # Top-level filter: send this host's traffic to its parent class
  tc filter add dev $LAN parent 1: protocol ip prio 1 u32 match ip src 172.19.123.$i flowid 1:$total
  # Nested filters: tcp with total length < 128 -> interactive...
  tc filter add dev $LAN parent 1:$total protocol ip prio 2 u32 match ip protocol 6 0xff match u16 0x0000 0xff80 at 2 flowid 1:a$total
  # ...remaining tcp -> bulk...
  tc filter add dev $LAN parent 1:$total protocol ip prio 3 u32 match ip protocol 6 0xff flowid 1:b$total
  # ...and everything else (non-tcp) -> interactive
  tc filter add dev $LAN parent 1:$total protocol ip prio 4 u32 match u32 0 0 flowid 1:a$total
done





--
------------------------
Jonathan Gazeley
ResNet | Wireless & VPN Team
Information Systems & Computing
University of Bristol
------------------------

