Re: [LARTC] TCP window based shaping
Ed Wildgoose wrote:

> i'm just trying to slow the traffic (it's for my master|degree|pedigree thesis, so i don't want to waste all my life on this) without changing the window size. How do we fit this thing into the linux QOS architecture anyway? i'm writing a scheduler that just delays the ack rate (it's in a very preliminary state, so nearly nothing has been done). now i'm looking for a place to put the flow information (in a conntrack module, maybe?)
>
> Have a look at the BWMGR qos product. They have some interesting thoughts.

They seem to have some strange ones too - from http://www.etinc.com/bwcompare.htm

"Suppose you have a T1 line and a single server. Suppose that 2 remote clients request the same page simultaneously, and that page has 15,000 bytes of information. With a typical TCP window of 16K, the entire page will be sent without requiring an ACK from the client."

Conveniently forgets that slow start exists - stopping exactly that is what it's for, isn't it? Then later -

"What is important to understand is that bandwidth management devices which do not utilize window manipulation at all cannot reduce your network latency on an overall basis. They will simply shift the latency from one type of traffic to another."

They are still talking about egress shaping here - I say: so what if latency gets shifted, as long as you arrange for it to be "shifted" to bulk traffic it doesn't matter. Egress shaping is sorted without window manipulation - as long as whatever you define as interactive traffic is < your bandwidth, you can arrange for it never to be delayed by bulk traffic by more than the bitrate x the length of one bulk packet.

Basically their idea seems to be that you only need to get the window shaping (or ACK shaping) roughly right. The fine tuning happens just as now, with the queue simply filling up a little. Seems to me that this is right: if you get the window even +/- 50% of the target bandwidth, you can do the fine tuning by delaying ACKs and buffering data.
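The slow-start objection to that 16K-window claim can be sanity-checked with a bit of arithmetic. A rough sketch, assuming an MSS of 1460 bytes and an initial cwnd of 2 segments per RFC 2581 (real stacks vary, and this ignores delayed ACKs):

```shell
#!/bin/sh
# How long does slow start take to move the 15,000-byte page from the
# BWMGR example? cwnd starts at 2 segments and doubles each round trip.
mss=1460
page=15000
segs=$(( (page + mss - 1) / mss ))   # segments needed for the whole page
cwnd=2
sent=0
rtts=0
while [ "$sent" -lt "$segs" ]; do
    sent=$(( sent + cwnd ))
    cwnd=$(( cwnd * 2 ))
    rtts=$(( rtts + 1 ))
done
echo "$segs segments, sent over $rtts round trips"
```

So a conforming sender doesn't dump the whole page into the pipe in one go regardless of the advertised 16K window - it takes a few round trips to ramp up, which is the point being made above.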
The trick is basically to avoid the huge splurge of data during slow start, which can cause queuing on the ISP end. It would be nice to slow slowstart like this; to some extent linux TCP already does this (well, it does at LAN speed - it advertises a smaller window then grows it). Not much help for WAN rates or forwarded Windows traffic though.

Otherwise I am broadly speaking very happy with the default QOS. It's just this queueing which occurs when a bunch of connections all start together which is the problem. This isn't really just a bittorrent issue though, because a busy webserver would likely see the same conditions?

I don't get the webserver bit - "heavy" browsing on a 512kbit link hurts - typically 4 simultaneous connections over and over. Sortable by sending new connections to a short, lowish-rate sfq (tweaked for ingress), but only if there is no other bulk as well. If there is, then I think you need some sort of prediction/intelligence, or a dumb queue will react too late and too aggressively - with resync bursts as well.

Are we all on the same page as to what the problem is? Any more thoughts on how to tackle it? I'm still not convinced that delaying ACKs is really any better than the current option to buffer incoming data.

I suppose the gain for Marco is that he will not just be delaying ACKs - OK, he could do the same for data, and I agree that there isn't much difference; it could even be better to work with data in some ways, e.g. you can really drop it, whereas with ACKs it would be harder to simulate a drop. I don't think he will be just buffering though - that's the advantage for him: to build in intelligence for the problem of shaping from the wrong end of a bottleneck, without the handicap of using dumb queues that see traffic already shaped by a fifo, and whose fill rate is determined by what percentage of the fifo rate you set their rate to.
I still think that dumb queues can be improved for this situation, though:

Making an htb/hfsc class that behaves as full as soon as it sees traffic, so other classes that are sharing get throttled before it's too late.

Making an eesfq - preferably without the s - perturb is horrible really; the packet reordering alone causes resync bursts in tests I've done. It is also pretty pointless dropping a whole bunch of packets from the same window - one per slot with some timed immunity would be nice, since those packets already used the link anyway. That is more like sfred or fred I suppose.

Being able to detect existing connections that go back into slowstart - I don't know how to do that as such, but I suspect bittorrent causes this as it cycles through existing connections trying to find better ones for you.

De-piggybacking ACKs from full-duplex bulk traffic - again bittorrent, though you can make the effects of bt using full duplex for bulk a bit better by using shorter egress queues for it. I guess the receiving machine TCP stack gets it earlier
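On the shorter-egress-queues idea: one minimal way to do it is to hang a short pfifo under the bulk leaf class instead of the default queue, so full-duplex bulk (e.g. bittorrent uploads carrying piggybacked ACKs) never builds a deep backlog. A sketch only - the device name and class id (eth0, 1:12) are placeholders, not values from this thread:

```shell
# Give the bulk leaf class a very short FIFO (10 packets) so data carrying
# piggybacked ACKs spends less time queued on egress.
# "eth0" and class "1:12" are assumed names for your own htb setup.
tc qdisc add dev eth0 parent 1:12 handle 12: pfifo limit 10
```

The trade-off is throughput for that class: a 10-packet queue drops earlier under load, which is exactly what keeps the reverse-direction ACK clocking responsive.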
Re: [LARTC] TCP window based shaping
I did notice that there was a netfilter module which did window tracking. It's in POM under the fairly experimental section, I think. I haven't checked it out, but there could be some interesting code in there.

Ed W ___ LARTC mailing list / LARTC@mailman.ds9a.nl http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
Re: [LARTC] priorizing vlans in a bridge
[EMAIL PROTECTED] wrote:

Hi, this is my Linux Box

LAN 1 -|--eth1 <---br1--->eth0.1 |
       |                        \|
       |                         |eth0--|- 802.1q tagged 1 Mbps link
       |                        /|
LAN 2 -|--eth2 <---br2--->eth0.2 |

I have to bridge the 2 LANs on the left side of the diagram with my linux box running as a bridge. I have to tag the traffic of each LAN, so I created the 2 vlan interfaces on eth0 (tag 1 and tag 2). All works fine. But now I have to prioritize LAN1 traffic so it leaves the bridge before LAN2. Also I need to shape the traffic to the 1 Mbps link.

I read about the "prio" qdisc, but it honours the TOS field of the IP packets, and I don't want that unless it is really necessary. I read about the "prio" option of the htb qdisc and made some scripts, but they don't work as I expected.

Prio in an htb setup like this only really affects the borrowing of excess - rate is guaranteed. Also you need to back off a bit from your link speed to allow for overheads.

HTB script:

tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 1000kbit

tc class add dev eth0 parent 1:1 classid 1:11 htb rate 500kbit ceil 1000kbit prio 1

I would use something like rate 850kbit ceil 900kbit here.

tc filter add dev eth0 parent 1: prio 1 protocol ip handle 1 fw classid 1:11
iptables -t mangle -A PREROUTING -i eth1 -j MARK --set-mark 1

tc class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 800kbit prio 2

and rate 50kbit ceil 900kbit here. Andy.

tc filter add dev eth0 parent 1: prio 1 protocol ip handle 2 fw classid 1:12
iptables -t mangle -A PREROUTING -i eth2 -j MARK --set-mark 2

What do you suggest me? Thanks in advance.
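Putting Andy's suggested numbers together, the whole script would look roughly like this - a sketch only, assuming a 1 Mbps link with ~10% left as headroom for link overheads (the exact back-off depends on the link type), and unmarked traffic falling through to the LAN2 class:

```shell
# HTB tree kept below the 1 Mbps line rate so the queue forms here, not
# in the link; prio 1 borrows spare bandwidth first.
tc qdisc add dev eth0 root handle 1: htb default 12
tc class add dev eth0 parent 1: classid 1:1 htb rate 900kbit

# LAN1: most of the link guaranteed, may borrow up to 900kbit, borrows first
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 850kbit ceil 900kbit prio 1
# LAN2: small guarantee, but still able to use the full 900kbit when LAN1 is idle
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 50kbit ceil 900kbit prio 2

tc filter add dev eth0 parent 1: prio 1 protocol ip handle 1 fw classid 1:11
tc filter add dev eth0 parent 1: prio 1 protocol ip handle 2 fw classid 1:12
iptables -t mangle -A PREROUTING -i eth1 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -i eth2 -j MARK --set-mark 2
```

Note the two rates still sum to the parent's 900kbit, so each class keeps a hard guarantee; prio only decides who gets first claim on the spare bandwidth.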
Re: [LARTC] Help!!! Bandwith Control with a NAT machine
Miguel Ángel Domínguez Durán wrote:

Hello everyone. First of all, sorry for my poor English. I've been working on this for a few weeks and I'm getting sick of it... I'm trying to control the bandwidth in my network using the following script. The machine where the script is running does NAT; eth0 is connected to the router and eth1 is connected to the LAN. When I run the script no errors appear; I have recompiled a Red Hat 2.4.20 kernel, checked that all the options are right, and installed iproute2-2.6.9. The result is that every packet is sent to the default queue and I can't understand why. It seems like iptables is not marking any of the packets: all the queues and classes are empty, and traffic always goes through the default queues in uplink and downlink. Here is the script, which is a modification of some things I've found on the net:

#!/bin/bash
#
DEV1=eth1 # LAN side
DEV0=eth0 # internet side
#
TC=/usr/sbin/tc

if [ "$1" = "status" ]
then
    echo "Downlink"
    echo "[qdisc]"
    $TC -s qdisc show dev $DEV1
    echo "[class]"
    $TC -s class show dev $DEV1
    echo "[filter]"
    $TC -s filter show dev $DEV1
    echo "Uplink"
    echo "[qdisc]"
    $TC -s qdisc show dev $DEV0
    echo "[class]"
    $TC -s class show dev $DEV0
    echo "[filter]"
    $TC -s filter show dev $DEV0
#   echo "[iptables]"
#   iptables -t mangle -L MYSHAPER-OUT -v -x 2> /dev/null
#   iptables -t mangle -L MYSHAPER-IN -v -x 2> /dev/null
    exit
fi

# Reset everything to a known state (cleared)
$TC qdisc del dev $DEV0 root 2> /dev/null > /dev/null
$TC qdisc del dev $DEV1 root 2> /dev/null > /dev/null
iptables -t mangle -D PREROUTING -i $DEV0 -j MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -D PREROUTING -i $DEV1 -j MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-IN 2> /dev/null > /dev/null
#iptables -t mangle -D PREROUTING -i $DEV0 -j MYSHAPER-IN 2> /dev/null > /dev/null

if [ "$1" = "stop" ]
then
    echo "Shaping removed on $DEV1."
    echo "Shaping removed on $DEV0."
    exit
fi

###
#
# Inbound Shaping (limits total bandwidth to 1000Kbps)

If you have 1mbit up and down you need to back off a bit from this (ceils) - upstream to allow for link overheads, how much depending on the type of link. Downstream depends on how much you care about latency; as a start say 15-20%. You need to do this to have a queue at all.

# This is the downlink, from the internet to Cherrytel's internal network

# set queue size to give latency of about 2 seconds on low-prio packets
ip link set dev $DEV1 qlen 30

Makes no difference - if you use sfq you can change a define in the source, or use esfq and specify it.

# changes mtu on the outbound device. Lowering the mtu will result
# in lower latency but will also cause slightly lower throughput due
# to IP and TCP protocol overhead.
ip link set dev $DEV1 mtu 1000

If I had 1 meg symmetrical I doubt I would bother - if you really care that much about latency there are other things to do first. If you do run a low MTU I would specify it as the quantum for htb and sfq as well.

# add HTB root qdisc
$TC qdisc add dev $DEV1 root handle 1: htb default 37

# add main rate limit classes
$TC class add dev $DEV1 parent 1: classid 1:1 htb rate 1000kbit

# add leaf classes - We grant each class at LEAST its "fair share" of bandwidth.
# This way no class will ever be starved by another class. Each
# class is also permitted to consume all of the available bandwidth
# if no other classes are in use.
$TC class add dev $DEV1 parent 1:1 classid 1:20 htb rate 64kbit ceil 1000kbit
$TC class add dev $DEV1 parent 1:1 classid 1:21 htb rate 64kbit ceil 1000kbit
$TC class add dev $DEV1 parent 1:1 classid 1:22 htb rate 64kbit ceil 1000kbit
$TC class add dev $DEV1 parent 1:1 classid 1:37 htb rate 832kbit ceil 1000kbit # default
$TC class add dev $DEV1 parent 1:1 classid 1:23 htb rate 64kbit ceil 64kbit # test, WiFi machine

# attach qdisc to leaf classes - here we attach SFQ to each priority class. SFQ ensures that
# within each class connections will be treated (almost) fairly.
$TC qdisc add dev $DEV1 parent 1:20 handle 20: sfq perturb 10
$TC qdisc add dev $DEV1 parent 1:21 handle 21: sfq perturb 10
$TC qdisc add dev $DEV1 parent 1:22 handle 22: sfq perturb 10
$TC qdisc add dev $DEV1 parent 1:37 handle 37: sfq perturb 10
$TC qdisc add dev $DEV1 parent 1:23 handle 23: sfq perturb 10

# filter traffic into classes by fwmark - here we direct traffic into priority class according to
# the fwmark set on the packet
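The "latency of about 2 seconds" comment and the point that qlen makes no difference with sfq can both be sanity-checked with quick arithmetic. Assumptions not taken from the script: full 1500-byte packets and sfq's stock compile-time depth of 128 packets.

```shell
#!/bin/sh
# Worst-case queue latency = queued packets * packet size / drain rate.
rate_bps=1000000          # the 1000kbit class rate, in bits/s
mtu=1500                  # assumed full-size packets

# the device qlen of 30 that the script sets:
qlen_ms=$(( 30 * mtu * 8 * 1000 / rate_bps ))
# sfq's own internal limit of 128 packets (compile-time default):
sfq_ms=$(( 128 * mtu * 8 * 1000 / rate_bps ))

echo "qlen 30 backlog drains in ${qlen_ms} ms"
echo "sfq 128 backlog drains in ${sfq_ms} ms"
```

Once sfq is attached, its own 128-packet limit (not the device qlen) bounds the backlog - which is why changing qlen does nothing here, and why 128 full packets at 1Mbit works out to roughly the ~1.5-2s latency the script comment is aiming at.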
Re: [LARTC] priorizing vlans in a bridge
If I use the vlan interfaces eth0.1 and eth0.2 in the tc statements, I would have two independent root qdiscs, one on each interface, and this won't let me prioritize LAN1 over LAN2. That's why I set this all up on eth0, which is the interface that sees the entire traffic. Any other idea? Thanks for your answers.

Message quoted from "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>:
> Hi,
>
> You should use eth0.1 and eth0.2 in your tc statements ...
>
> ciao
>
> charles
>
> On Thu, 2005-02-10 at 23:05, [EMAIL PROTECTED] wrote:
> > Hi, this is my Linux Box
> >
> > LAN 1 -|--eth1 <---br1--->eth0.1 |
> >        |                        \|
> >        |                         |eth0--|- 802.1q tagged 1 Mbps link
> >        |                        /|
> > LAN 2 -|--eth2 <---br2--->eth0.2 |
> >
> > I have to bridge the 2 lans in the left side of the diagram with my linux box
> > running as a bridge. I have to tag the traffic of each lan so I created the 2
> > vlans interfaces on eth0 (tag 1 and tag 2).
> > All works fine. But now I have to priorize LAN1 traffic so it leaves the bridge
> > before LAN2. Also I need to shape the traffic to the 1 Mbps link.
> >
> > I read about the "prio" qdisc but it honours the TOS field of the IP packets,
> > and I don't want to unless it was really necessary.
> > I read about the "prio" option of the htb qdisc and made some scripts, but
> > they don't work as I expected.
> >
> > HTB script:
> >
> > tc qdisc add dev eth0 root handle 1: htb
> > tc class add dev eth0 parent 1: classid 1:1 htb rate 1000kbit
> >
> > tc class add dev eth0 parent 1:1 classid 1:11 htb rate 500kbit ceil 1000kbit prio 1
> > tc filter add dev eth0 parent 1: prio 1 protocol ip handle 1 fw classid 1:11
> > iptables -t mangle -A PREROUTING -i eth1 -j MARK --set-mark 1
> >
> > tc class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 800kbit prio 2
> > tc filter add dev eth0 parent 1: prio 1 protocol ip handle 2 fw classid 1:12
> > iptables -t mangle -A PREROUTING -i eth2 -j MARK --set-mark 2
> >
> > What do you suggest me?
> > Thanks in advance.
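One thing worth checking in the everything-on-eth0 approach: since the traffic is bridged, plain `-i eth1` matches in the mangle table only work if the kernel passes bridged packets to iptables at all. A hedged alternative sketch using the physdev match - this assumes a 2.6 kernel built with bridge-netfilter (CONFIG_BRIDGE_NETFILTER) and the physdev match available, none of which is confirmed in the thread:

```shell
# Mark traffic by the physical bridge port it arrived on, so the fw filters
# on eth0 can classify it even though forwarding happens in the bridge code.
iptables -t mangle -A PREROUTING -m physdev --physdev-in eth1 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -m physdev --physdev-in eth2 -j MARK --set-mark 2
```

If the marks show up (check with `iptables -t mangle -L -v`) but the classes stay empty, the problem is on the tc side; if the marks never increment, it's the bridge/netfilter interaction.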
RE: [LARTC] Redundant Links Routing
Maybe a better way of asking this question: if I use link load balancing and one of the links fails, would the remaining link continue to function normally, without interruption to service? The second question concerns the difference in link speeds: will it degrade performance in comparison to only using the high speed link? Looking through the HOWTO and searching the archives I have not found the "what if" answers. Thanks in advance, Doug

-Original Message-
From: Doug Johnson [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 10, 2005 5:29 PM
To: 'lartc@mailman.ds9a.nl'
Subject: [LARTC] Redundant Links Routing

I have two LANs connected by two different pipes:

LAN 1 --|--eth0   eth1-|-- VPN link --|-eth1   eth0--|-- LAN 2
        |              |              |              |
        |         eth2-|--  T1 Link --|-eth2         |

VPN - Link over the Internet, 3mb Fiber
T1 - Fractional, equaling 256k bandwidth

I am looking to have a dominant route between the LANs over the VPN link; the T1 link is a backup. My questions are the following: if I do a split access between the VPN and T1 links, giving a weight of 1 to the VPN and a weight of 2 to the T1 link, and I should lose either connection, am I correct to say that I will still suffer loss of connections? I am assuming that the weighting option, being round robin, is not ideal for redundant setups - is it primarily for load balancing? Am I wrong here? Second question: would it be a better solution to tie the links together for load sharing, as chapter 10 of the Adv-Routing-HOWTO describes? My problem is not load, but routing for unlike redundant lines. I could care less if anything goes across the T1 link unless the VPN link goes down. The idea is to have the Linux boxes eliminate manual re-routing in the event of a link failure. Maybe I have missed the ideal setup altogether. Any point in the right direction would be a great help.
Thanks much in advance, Doug
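For what Doug describes, a weighted multipath default route is the usual starting point - with the big caveat that stock Linux multipath does not probe link health by itself: a dead next hop only stops being used when its interface goes down, or when an external watchdog rewrites the route. A sketch with assumed gateway addresses (10.0.0.1 and 10.0.1.1 are placeholders, not from the post):

```shell
# Weighted multipath: prefer the VPN link 3:1 (the T1 still carries some flows).
ip route replace default scope global \
    nexthop via 10.0.0.1 dev eth1 weight 3 \
    nexthop via 10.0.1.1 dev eth2 weight 1

# Pure failover (nothing on the T1 until the VPN dies) is closer to what is
# actually wanted here, and is simpler with route metrics - the lower-metric
# route wins while its interface stays up:
# ip route replace default via 10.0.0.1 dev eth1 metric 10
# ip route add default via 10.0.1.1 dev eth2 metric 20
```

Since a VPN over the Internet can be "down" while the local interface is still up, a small cron/watchdog script that pings the far end and swaps the default route on failure is the common way to make either variant actually failover.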
Re: [LARTC] TCP window based shaping
> i'm just trying to slow the traffic (it's for my master|degree|pedigree thesis, so i don't want to waste all my life on this) without changing the window size. How do we fit this thing into the linux QOS architecture anyway? i'm writing a scheduler that just delays the ack rate (it's in a very preliminary state, so nearly nothing has been done). now i'm looking for a place to put the flow information (in a conntrack module, maybe?)

Have a look at the BWMGR qos product. They have some interesting thoughts. Basically their idea seems to be that you only need to get the window shaping (or ACK shaping) roughly right. The fine tuning happens just as now, with the queue simply filling up a little. Seems to me that this is right: if you get the window even +/- 50% of the target bandwidth, you can do the fine tuning by delaying ACKs and buffering data.

The trick is basically to avoid the huge splurge of data during slow start, which can cause queuing on the ISP end. Otherwise I am broadly speaking very happy with the default QOS. It's just this queueing which occurs when a bunch of connections all start together which is the problem. This isn't really just a bittorrent issue though, because a busy webserver would likely see the same conditions?

Are we all on the same page as to what the problem is? Any more thoughts on how to tackle it? I'm still not convinced that delaying ACKs is really any better than the current option to buffer incoming data. I guess the receiving machine TCP stack gets it earlier so the app looks more responsive, but other than the lower lag I don't see much difference really?

Curious to hear how your project gets on though! Please keep us informed!

Ed W
Re: [LARTC] load sharing
On Fri, 11 Feb 2005 12:39:26 -0500, Payal Rathod <[EMAIL PROTECTED]> wrote:
> Hi,
> I am taking my machine loaded with Mandrake 10.0 and Suse 9.1 to my
> friend's place. She has 2 different ISP providing pppoe connections
> in her office. She has allowed me try load balancing on my machine on
> Sunday. I just wanted to know does lartc stand good with pppoe? I
> have heard conflicting opinions on the same. I do not want to patch
> my kernel at all for it. Is it possible with my current system?

I'm no expert on any LARTC topics, but I can tell you that I've done all my LARTC exercises using stock Fedora Core (1-3) and PPPoE connections (be it DSL or Wireless). Everything has worked perfectly once I got past the initial stumbling block, myself :)

> With warm regards,
> -Payal

--
Kenneth Kalmer
[EMAIL PROTECTED]
http://opensourcery.blogspot.com
[LARTC] load sharing
Hi, I am taking my machine loaded with Mandrake 10.0 and Suse 9.1 to my friend's place. She has 2 different ISPs providing pppoe connections in her office. She has allowed me to try load balancing on my machine on Sunday. I just wanted to know: does LARTC work well with pppoe? I have heard conflicting opinions on this. I do not want to patch my kernel at all for it. Is it possible with my current system? With warm regards, -Payal
Re: [LARTC] priorizing vlans in a bridge
Hi,

You should use eth0.1 and eth0.2 in your tc statements ...

ciao

charles

On Thu, 2005-02-10 at 23:05, [EMAIL PROTECTED] wrote:
> Hi, this is my Linux Box
>
> LAN 1 -|--eth1 <---br1--->eth0.1 |
>        |                        \|
>        |                         |eth0--|- 802.1q tagged 1 Mbps link
>        |                        /|
> LAN 2 -|--eth2 <---br2--->eth0.2 |
>
> I have to bridge the 2 lans in the left side of the diagram with my linux box
> running as a bridge. I have to tag the traffic of each lan so I created the 2
> vlans interfaces on eth0 (tag 1 and tag 2).
> All works fine. But now I have to priorize LAN1 traffic so it leaves the bridge
> before LAN2. Also I need to shape the traffic to the 1 Mbps link.
>
> I read about the "prio" qdisc but it honours the TOS field of the IP packets,
> and I don't want to unless it was really necessary.
> I read about the "prio" option of the htb qdisc and made some scripts, but
> they don't work as I expected.
>
> HTB script:
>
> tc qdisc add dev eth0 root handle 1: htb
> tc class add dev eth0 parent 1: classid 1:1 htb rate 1000kbit
>
> tc class add dev eth0 parent 1:1 classid 1:11 htb rate 500kbit ceil 1000kbit prio 1
> tc filter add dev eth0 parent 1: prio 1 protocol ip handle 1 fw classid 1:11
> iptables -t mangle -A PREROUTING -i eth1 -j MARK --set-mark 1
>
> tc class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 800kbit prio 2
> tc filter add dev eth0 parent 1: prio 1 protocol ip handle 2 fw classid 1:12
> iptables -t mangle -A PREROUTING -i eth2 -j MARK --set-mark 2
>
> What do you suggest me?
> Thanks in advance.
[LARTC] Help!!! Bandwith Control with a NAT machine
Hello everyone. First of all, sorry for my poor English. I've been working on this for a few weeks and I'm getting sick of it... I'm trying to control the bandwidth in my network using the following script. The machine where the script is running does NAT; eth0 is connected to the router and eth1 is connected to the LAN. When I run the script no errors appear; I have recompiled a Red Hat 2.4.20 kernel, checked that all the options are right, and installed iproute2-2.6.9. The result is that every packet is sent to the default queue and I can't understand why. It seems like iptables is not marking any of the packets: all the queues and classes are empty, and traffic always goes through the default queues in uplink and downlink. Here is the script, which is a modification of some things I've found on the net:

#!/bin/bash
#
DEV1=eth1 # LAN side
DEV0=eth0 # internet side
#
TC=/usr/sbin/tc

if [ "$1" = "status" ]
then
    echo "Downlink"
    echo "[qdisc]"
    $TC -s qdisc show dev $DEV1
    echo "[class]"
    $TC -s class show dev $DEV1
    echo "[filter]"
    $TC -s filter show dev $DEV1
    echo "Uplink"
    echo "[qdisc]"
    $TC -s qdisc show dev $DEV0
    echo "[class]"
    $TC -s class show dev $DEV0
    echo "[filter]"
    $TC -s filter show dev $DEV0
#   echo "[iptables]"
#   iptables -t mangle -L MYSHAPER-OUT -v -x 2> /dev/null
#   iptables -t mangle -L MYSHAPER-IN -v -x 2> /dev/null
    exit
fi

# Reset everything to a known state (cleared)
$TC qdisc del dev $DEV0 root 2> /dev/null > /dev/null
$TC qdisc del dev $DEV1 root 2> /dev/null > /dev/null
iptables -t mangle -D PREROUTING -i $DEV0 -j MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -D PREROUTING -i $DEV1 -j MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-IN 2> /dev/null > /dev/null
#iptables -t mangle -D PREROUTING -i $DEV0 -j MYSHAPER-IN 2> /dev/null > /dev/null

if [ "$1" = "stop" ]
then
    echo "Shaping removed on $DEV1."
    echo "Shaping removed on $DEV0."
    exit
fi

# Inbound Shaping (limits total bandwidth to 1000Kbps)
# This is the downlink, from the internet to Cherrytel's internal network

# set queue size to give latency of about 2 seconds on low-prio packets
ip link set dev $DEV1 qlen 30

# changes mtu on the outbound device. Lowering the mtu will result
# in lower latency but will also cause slightly lower throughput due
# to IP and TCP protocol overhead.
ip link set dev $DEV1 mtu 1000

# add HTB root qdisc
$TC qdisc add dev $DEV1 root handle 1: htb default 37

# add main rate limit classes
$TC class add dev $DEV1 parent 1: classid 1:1 htb rate 1000kbit

# add leaf classes - We grant each class at LEAST its "fair share" of bandwidth.
# This way no class will ever be starved by another class. Each
# class is also permitted to consume all of the available bandwidth
# if no other classes are in use.
$TC class add dev $DEV1 parent 1:1 classid 1:20 htb rate 64kbit ceil 1000kbit
$TC class add dev $DEV1 parent 1:1 classid 1:21 htb rate 64kbit ceil 1000kbit
$TC class add dev $DEV1 parent 1:1 classid 1:22 htb rate 64kbit ceil 1000kbit
$TC class add dev $DEV1 parent 1:1 classid 1:37 htb rate 832kbit ceil 1000kbit # default
$TC class add dev $DEV1 parent 1:1 classid 1:23 htb rate 64kbit ceil 64kbit # test, WiFi machine

# attach qdisc to leaf classes - here we attach SFQ to each priority class. SFQ ensures that
# within each class connections will be treated (almost) fairly.
$TC qdisc add dev $DEV1 parent 1:20 handle 20: sfq perturb 10
$TC qdisc add dev $DEV1 parent 1:21 handle 21: sfq perturb 10
$TC qdisc add dev $DEV1 parent 1:22 handle 22: sfq perturb 10
$TC qdisc add dev $DEV1 parent 1:37 handle 37: sfq perturb 10
$TC qdisc add dev $DEV1 parent 1:23 handle 23: sfq perturb 10

# filter traffic into classes by fwmark - here we direct traffic into priority class according to
# the fwmark set on the packet (we set fwmark with iptables
# later). Note that above we've set the default priority
# class to 1:37 so unmarked packets (or packets marked with
# unfamiliar IDs) will be defaulted to the lowest priority
# class.
$TC filter add dev $DEV1 parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
$TC filter add dev $DEV1 parent 1:0 prio 0 protocol ip handle 21 fw flowid 1:21
$TC filter add dev $DEV1 parent
[LARTC] SNAT and multiply real addresses ?
hi, I have a real network on the eth0 side and a real network on the eth1 side.

a.a.a.0/24
x.x.x.0/24  y.y.y.2/24 <> y.y.y.1/24 <===> INTERNET
z.z.z.0/24

I want to NAT those behind eth0 so they go out as y.y.y.0/24 (eth1 has a different address and gateway, so I'm using eth1:0 and a separate rule & table). I'm currently trying to do it this way:

ifconfig eth1:0 y.y.y.2 netmask 255.255.255.0
ip route add default via y.y.y.1 table eth10-net
ip rule add from x.x.x.0/24 lookup eth10-net
iptables -t nat -A POSTROUTING -s x.x.x.0/24 -j SNAT --to-source y.y.y.3-y.y.y.254

It doesn't seem to work.. the problem is that the eth1 interface has y.y.y.2 but not all the addresses I need to have on the eth1 interface... Probably I can set up ~250 eth1 aliases, but that would be overkill. Is there any other solution? I can also do:

iptables -t nat -A POSTROUTING -s x.x.x.Z -j SNAT --to-source y.y.y.Z

and that works, but then again it is a single-IP scenario. I don't have access to the y.y.y.1/24 device.

-
http://linuxtoday.com/news_story.php3?ltsn=2004-12-08-004-32-OS-BZ-DT-0005
MS Office is popular in the same way as heart disease is the most popular way to die.
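One way to avoid configuring ~250 aliases is a 1:1 network mapping with the NETMAP target. This is a sketch only - it assumes the netfilter NETMAP target is available on the kernel in question, and that the upstream router resolves y.y.y.0/24 addresses via ARP on that segment:

```shell
# Map x.x.x.N <-> y.y.y.N one-to-one on the way out of eth1 (SNAT side);
# conntrack reverses the mapping for reply packets automatically.
iptables -t nat -A POSTROUTING -o eth1 -s x.x.x.0/24 -j NETMAP --to y.y.y.0/24

# The remaining catch: the upstream router must deliver replies for y.y.y.N
# to this box's MAC even though those addresses aren't configured locally.
# If it ARPs per address, proxy-ARP entries can answer for them, e.g.:
ip neigh add proxy y.y.y.3 dev eth1
```

Whether the proxy-ARP step is needed at all depends on how y.y.y.1 routes the /24; if it already points the whole subnet at y.y.y.2, the NETMAP rule alone should do it.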
Re: [LARTC] TCP window based shaping
On Fri, Feb 11, 2005 at 02:57:59AM +, Ed Wildgoose wrote:
> I like the sound of this idea, but I don't follow the details?
> Certainly it seems to me that you can do most of the work by only
> looking at outgoing ACK packets. For example with certain assumptions
> we can simply measure the outgoing ACK rate, assume this is dependent on
> the amount of data being controlled by our bandwidth throttling, and
> therefore we get a really good estimate of the effect of our current
> incoming rate. However, this breaks down if for example the sender was
> not sending data as fast as possible.

we have to estimate the acknowledged data rate, and if the sender is over limit we can begin to slow down his traffic. or something similar...

> Also simply delaying ACKs doesn't seem to be the whole answer because
> the sender should simply see this as a longer RTT and increase the
> window size to keep more data in transit. Seems that we need to do a
> little of both, eg examine outgoing ACK speed, reduce the window to the
> approx correct size and then our RED/tail drop takes care of the fine tuning

hehe, right. and if we are lucky, there will be a router in the internet that will 'taildrop' that traffic for us, so we are not wasting our bandwidth. (i didn't test this algorithm, but in a network simulator (ns) it works).

> As others have said, it's stuff like Bittorrent which really shows the
> weaknesses in the current system. I find that even throttling
> bittorrent to say, half the incoming bandwidth still shows regular
> increases in latency, no doubt due to the effects of the sudden rush of
> incoming connections. (or slow start effects basically).

i'll do a lot of testing on this.. bittorrent will be a pain, i guess.

> In the BWMGR product (or whatever it is called), I get the impression
> they do more work on controlling initial windows to try and throttle
> slow start back some?
> Seems to me that one could do more work around
> the time of the initial ACK to get a window size more in keeping with
> the flow for that tcp class? If we only allocate 10Kb of our connection
> to that class, and we are connected via some broadband device, then
> nowhere in the world is more than 350ms away, and hence a window size of
> 65535 is clearly wayyy too large - lets fix this early?

i'm just trying to slow the traffic (it's for my master|degree|pedigree thesis, so i don't want to waste all my life on this) without changing the window size.

> How do we fit this thing into the linux QOS architecture anyway?

i'm writing a scheduler that just delays the ack rate (it's in a very preliminary state, so nearly nothing has been done). now i'm looking for a place to put the flow information (in a conntrack module, maybe?)

bye!

p.s. sorry for my bad english...

--
BOFH excuse #266: All of the packets are empty.
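The 350ms/65535 point above is just the bandwidth-delay product. A quick check, assuming a 512kbit/s class and the 350ms worst-case RTT mentioned (both illustrative numbers, not measurements):

```shell
#!/bin/sh
# Bandwidth-delay product: the most data that can usefully be in flight.
rate_bytes_per_s=64000     # 512 kbit/s expressed in bytes/s (assumed class rate)
rtt_ms=350
bdp=$(( rate_bytes_per_s * rtt_ms / 1000 ))
echo "BDP = ${bdp} bytes"
```

Anything advertised above the BDP just sits in a queue somewhere, so on such a link a 65535-byte window is roughly three times what the path can use; clamping the window toward the class's BDP caps the standing queue without costing throughput.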