On Thursday 14 April 2005 16:03, Guido Sohne wrote:
> I am also interested in the efficient utilization of local bandwidth. I
> have developed (read: hacked together from various examples) some
> traffic control scripts to help solve this problem but I think that
> there may be more efficient solutions.
>
> The script below is being used to shape traffic on a transparent web
> cache that is accelerating for a bandwidth manager.
I'm not sure I understand why you would want to accelerate for a bandwidth
manager. Presumably you aren't using it as a web server?
> If the cache is
> before the bandwidth manager, then it's hard to control its bandwidth
> except using a single IP address (meaning all customers get the same
> QOS). If it is after the bandwidth manager, then the cache itself is not
> managed ...
Hmmh, not necessarily. In my experience managing bandwidth for cache access, I
have used a bandwidth manager with multiple Ethernet interfaces to which you
can attach the devices or elements you want to manage. In my case I have 5
ports: one backhauls all LAN traffic to/from the Internet, and the other four
manage the devices I need to manage. All 5 interfaces can accept policies, so
it doesn't really matter where you put the policies, since traffic has to pass
through the main backhaul port anyway.
On one of the other 4 ports you place your border router (since it's your
gateway to the Internet), and on one of the remaining 3 you place your cache
box. You will most likely be doing port 80 interception (WCCP, policy routing,
iptables/ipfw, or Layer 4 interception); the exact method doesn't really
matter.
If you apply your policies on the interface connected to the cache, for your
various networks, your customers will be redirected to your cache box at
whatever speed you have configured for their IP address, and the cache will
respond to them at whatever speed you configured for their IP address on the
same port.
On the port connected to your border router, you add a policy for your cache
box and allocate it the appropriate bandwidth as necessary, in effect,
managing how much bandwidth your cache uses to handle customer requests. You
will also need to add your other networks on this same port to manage traffic
heading straight for the router as the port connected to the cache only gets
to deal with HTTP traffic.
I found this to work for me very well!
Some bandwidth managers are evil: they only come with 2 ports, IN and OUT,
which doesn't offer much flexibility.
Mark.
>
> After this script was deployed, the perceived performance of the network
> has become much better. Does anyone have some experiences to share? Or
> improvements to make :-)
>
> -- G.
>
> > # UPLINK is ~90% of the provider uplink speed, in kilobits/s
> > UPLINK=190
> >
> > # guaranteed rates (kbit/s) for ACKnowledgement, SSH, WWW, UDP/ICMP/DNS,
> > # POP/IMAP and P2P traffic respectively
> > ACKR=4
> > SSHR=3
> > WWWR=85
> > UDPR=30
> > POPR=53
> > RSTR=15
> >
> > # peak rates (kbit/s) for ACKnowledgement, SSH, WWW, UDP/ICMP/DNS,
> > # POP/IMAP and P2P traffic respectively
> > ACKC=8
> > SSHC=16
> > WWWC=185
> > UDPC=64
> > POPC=185
> > RSTC=64
> >
> > # clear previous settings (ignore the error if no qdisc is attached yet)
> > tc qdisc del dev eth0 root 2> /dev/null
> >
> > # create a root handle attached to eth0 using the HTB qdisc, classifying
> > # packets into class 1:70 by default (note: no class 1:70 is defined below,
> > # so unclassified traffic is sent unshaped)
> > tc qdisc add dev eth0 root handle 1: htb default 70
> >
> > # create a parent class using HTB capped at ${UPLINK} kbit/s, which is
> > # already below the real link speed - the extra headroom is needed to
> > # shape properly ... ?
> > tc class add dev eth0 parent 1: classid 1:1 htb rate ${UPLINK}kbit burst 2k
> >
> > # create leafnode classes using HTB with a guaranteed minimum of XXXR
> > # kilobits/s and a maximum ceiling of XXXC kilobits/s, prioritizing
> > # traffic from zero (highest - note that available bandwidth is offered
> > # in order of priority) to 5 (lowest)
> > tc class add dev eth0 parent 1:1 classid 1:10 htb rate ${ACKR}kbit ceil ${ACKC}kbit prio 0
> > tc class add dev eth0 parent 1:1 classid 1:20 htb rate ${SSHR}kbit ceil ${SSHC}kbit prio 1
> > tc class add dev eth0 parent 1:1 classid 1:30 htb rate ${WWWR}kbit ceil ${WWWC}kbit prio 2
> > tc class add dev eth0 parent 1:1 classid 1:40 htb rate ${UDPR}kbit ceil ${UDPC}kbit prio 3
> > tc class add dev eth0 parent 1:1 classid 1:50 htb rate ${POPR}kbit ceil ${POPC}kbit prio 4
> > tc class add dev eth0 parent 1:1 classid 1:60 htb rate ${RSTR}kbit ceil ${RSTC}kbit prio 5
> >
> > # use stochastic fair queuing and, every 10 seconds, perturb the
> > # hash (shake the queues around)
> > tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
> > tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
> > tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10
> > tc qdisc add dev eth0 parent 1:40 handle 40: sfq perturb 10
> > tc qdisc add dev eth0 parent 1:50 handle 50: sfq perturb 10
> > tc qdisc add dev eth0 parent 1:60 handle 60: sfq perturb 10
> >
> > # this section marks the packets into classes
> > # OUTGOING TRAFFIC (UPLINK)
> > # give "overhead" packets highest priority
> >
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --syn -m length --length 40:68 -j CLASSIFY --set-class 1:10
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL SYN,ACK -m length --length 40:68 -j CLASSIFY --set-class 1:10
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL ACK -m length --length 40:100 -j CLASSIFY --set-class 1:10
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL RST -j CLASSIFY --set-class 1:10
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL ACK,RST -j CLASSIFY --set-class 1:10
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL ACK,FIN -j CLASSIFY --set-class 1:10
> >
> > # interactive SSH traffic
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport ssh -m length --length 40:100 -j CLASSIFY --set-class 1:20
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport ssh -m length --length 40:100 -j CLASSIFY --set-class 1:20
> >
> > # interactive web traffic
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp -m multiport --sports http,https,8080,3128 -j CLASSIFY --set-class 1:30
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp -m multiport --dports http,https,8080,3128 -j CLASSIFY --set-class 1:30
> >
> > # smtp traffic and ssh bulk traffic
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport 25 -j CLASSIFY --set-class 1:50
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport 25 -j CLASSIFY --set-class 1:50
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport ssh -m length --length 101: -j CLASSIFY --set-class 1:50
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport ssh -m length --length 101: -j CLASSIFY --set-class 1:50
> >
> > # ICMP, UDP and DNS lookups
> > iptables -t mangle -A POSTROUTING -o eth0 -p icmp -m length --length 28:1500 -m limit --limit 3/s --limit-burst 4 -j CLASSIFY --set-class 1:40
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport domain -j CLASSIFY --set-class 1:40
> > iptables -t mangle -A POSTROUTING -o eth0 -p udp --dport domain -j CLASSIFY --set-class 1:40
> > iptables -t mangle -A POSTROUTING -o eth0 -p udp -j CLASSIFY --set-class 1:40
> >
> > # email traffic
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp -m multiport --sports pop3,pop3s,imap2,imap3,imaps -j CLASSIFY --set-class 1:50
> > iptables -t mangle -A POSTROUTING -o eth0 -p udp -m multiport --sports pop3,pop3s,imap2,imap3,imaps -j CLASSIFY --set-class 1:50
> > iptables -t mangle -A POSTROUTING -o eth0 -p tcp -m multiport --dports pop3,pop3s,imap2,imap3,imaps -j CLASSIFY --set-class 1:50
> > iptables -t mangle -A POSTROUTING -o eth0 -p udp -m multiport --dports pop3,pop3s,imap2,imap3,imaps -j CLASSIFY --set-class 1:50
> >
> > # peer to peer traffic
> > iptables -t mangle -A PREROUTING -p tcp -j CONNMARK --restore-mark
> > iptables -t mangle -A PREROUTING -p tcp -m mark ! --mark 0 -j ACCEPT
> > iptables -t mangle -A PREROUTING -p tcp -m ipp2p --ipp2p -j MARK --set-mark 1
> > iptables -t mangle -A PREROUTING -p tcp -m mark --mark 1 -j CONNMARK --save-mark
> >
> > iptables -t mangle -A PREROUTING -p udp -m ipp2p --ipp2p -j MARK --set-mark 1
> > iptables -t mangle -A POSTROUTING -o eth0 -m mark --mark 1 -j CLASSIFY --set-class 1:60
>
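Once a script like the one quoted above is loaded, the shaping can be checked from the shell. A minimal sketch, assuming the same interface name (eth0) as in the script:

```shell
#!/bin/sh
# Inspect the HTB classes and their live counters: "rate" and "ceil"
# should match the script's values, and the byte/packet counters show
# which classes traffic is actually landing in.
tc -s class show dev eth0

# Confirm the qdisc tree: the root HTB plus one SFQ per leaf class.
tc -s qdisc show dev eth0

# Check that the mangle-table CLASSIFY rules are matching packets
# (the pkts/bytes columns should be non-zero for busy classes).
iptables -t mangle -L POSTROUTING -n -v
```

If a class's counters stay at zero while traffic of that type is flowing, the corresponding iptables rule is the first place to look.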
> Mark Tinka wrote:
> >IMHO, an ISP can't provide hosting services *correctly* if the others are
> >doing it cheaply (read: incorrectly). The problem is, doing it this way
> > means you are probably providing a service at or above cost - when you
> > start wondering where all your bandwidth is going each day, and can't
> > find it, well, don't look too far.
> >
> >This is going to become critical in light of the IXP and the efficient
> >utilization of local bandwidth.
_______________________________________________
LUG mailing list
[email protected]
http://kym.net/mailman/listinfo/lug
LUG is generously hosted by INFOCOM http://www.infocom.co.ug/