Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-06 Thread Vadtec
Ok, I messed around with 6 different setups over 10 hours yesterday. The 
only one I can get to work properly is my original one.


So, now I'm to the theory stage of trying to figure this out. I got a 
reply from a mailing list user saying I need to do egress filtering in 
two places.


While I could not understand what they were saying very well, it did 
lead me to ponder this theory. It seems to me the whole problem has 
been how I am handling ingress traffic on eth0 (the WAN interface). As it 
stands, I do rate limit it and will drop packets if they come in too fast. But 
is there anything that's stopping me from routing ingress traffic through 
the egress queues on its way to the LAN? Or will that seriously break 
traffic shaping?


What I'm thinking is, the ingress qdisc doesn't really control 
anything. So, if I were to route the traffic (say, with an iptables rule) to an 
egress qdisc on eth1, I could truly control ingress traffic.


I really don't think this will work, as it seems like I am quashing all 
the traffic down one side of what should be a two-sided link. While I 
cannot think of a way to visualize this with ASCII art, I can summarize 
the ingress and egress pathways in linear format, as such:


  Egress (LAN to Internet):
 LAN traffic --- eth1 (egress) --- eth0 (egress) --- WAN

  Ingress (Internet to LAN):
 LAN --- eth1 (ingress) --- eth0 (egress to eth1 ingress) --- eth0 (ingress) --- WAN traffic

or

  Egress (LAN to Internet):
 LAN traffic --- eth1 (egress) --- eth0 (egress) --- WAN

  Ingress (Internet to LAN):
 LAN --- eth1 (egress) --- eth0 (ingress to eth1 ingress) --- eth0 (ingress) --- WAN traffic


I hate to be so pessimistic, but so far all I've gotten is everyone 
saying "You need to filter ingress traffic" with no real or concrete 
examples of how to do such a thing. And the LARTC HOWTO doesn't 
describe it very well either. It's like ingress filtering is just not 
done, and those that do it are using such complicated methods that it's 
not worth sharing them.


So, unless someone can provide me with a concrete example of true 
ingress filtering, or how to filter ingress on the LAN side or WAN side 
or whichever side I need to filter it on, I am completely stuck.


Vadtec


Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-06 Thread Vadtec

David Boreham wrote:

The advice you received is pretty good.
Avoid ingress shaping at all costs, and
you don't need it anyway for your situation.

Use egress shaping on both your internal and
external interfaces.
Traffic coming IN to your network gets shaped
as egress traffic on the LAN interface.
Traffic going OUT from your network gets
shaped as egress traffic on the WAN interface.
So all shaping is egress, but you're able to
shape in both directions by always delaying
packets as they are SENT by your router.

Think of it this way : all you can really do is
delay sending packets (or ultimately drop them
which is the same as infinitely delaying).
Packets arrive when they arrive, you have no
control over that. This is why shaping has to be
done on egress traffic -- it's the only lever you have
to pull on.
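
As a concrete illustration (just a sketch, assuming eth0 is the WAN-facing 
interface, eth1 is the LAN-facing interface, and reusing the 360kbit uplink 
and 1400kbit downlink figures that appear elsewhere in this thread), the 
simplest two-direction setup is two purely egress qdiscs:

  # upload: shape packets as they are sent out the WAN-facing interface
  tc qdisc add dev eth0 root tbf rate 360kbit burst 10k latency 50ms

  # download: shape packets as they are sent out the LAN-facing interface
  tc qdisc add dev eth1 root tbf rate 1400kbit burst 10k latency 50ms

No ingress qdisc or policer is involved; both directions are controlled by 
delaying transmission.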



Thank you! This is the first explanation that has actually made sense to 
me. Before, I had only a vague idea of what was being said.


I do have one question though. For the egress shaping on eth1 (the LAN 
interface), when using iptables I should do everything in the 
POSTROUTING chain, correct? That way it gets routed to the proper LAN 
node and still gets shaped, correct? If that's the case, I can have a 
setup working in no time (I hope).


Many thanks,

Vadtec


Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-06 Thread David Boreham





I do have one question though. On the egress shaping on eth1 (LAN 
interface), when using iptables I should do everything in the 
POSTROUTING chain correct? That way it gets routed to the proper LAN 
node and still gets shaped, correct? If thats the case, I can have a 
setup working in no time (I hope).

Well now it's my turn to be confused ! What's the connection between
iptables and your traffic shaping setup ? Are you marking packets for
shaping, something like that ? If so I have no idea. My routing and
shaping are completely separate and unrelated in any way (I use
tc filter to classify packets).




Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-06 Thread Vadtec

David Boreham wrote:





I do have one question though. On the egress shaping on eth1 (LAN 
interface), when using iptables I should do everything in the 
POSTROUTING chain correct? That way it gets routed to the proper LAN 
node and still gets shaped, correct? If thats the case, I can have a 
setup working in no time (I hope).

Well now it's my turn to be confused ! What's the connection between
iptables and your traffic shaping setup ? Are you marking packets for
shaping, something like that ? If so I have no idea. My routing and
shaping are completely separate and unrelated in any way (I use
tc filter to classify packets).



iptables (I forget what version onward supports this) has the ability to 
classify packets that it routes via -j CLASSIFY --set-class X:Y. For all 
intents and purposes, this is the same as -j MARK --set-mark X. 
Basically, it allows a user to use iptables to do the actual 
classification of the packet. For an example, take a look at 
http://lartc.org/howto/lartc.cookbook.fullnat.intro.html or 
http://www.stanford.edu/~fenn/linux/ (the latter is the tutorial I base my 
iptables and tc rules on).
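
As a sketch of how the two halves fit together (the 6881:6889 source-port 
range is only a placeholder for whatever ports the torrent client really 
uses), a 20kbit-capped HTB leaf plus its CLASSIFY rule looks roughly like:

  # leaf class capped at 20kbit under the existing 1: HTB root on eth0
  tc class add dev eth0 parent 1:1 classid 1:100 htb rate 2kbit ceil 20kbit
  tc qdisc add dev eth0 parent 1:100 handle 100: sfq perturb 10

  # let iptables do the classification (CLASSIFY lives in the mangle
  # table; POSTROUTING is the usual place for it)
  iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport 6881:6889 \
      -j CLASSIFY --set-class 1:100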


I am not familiar enough with tc filter syntax to brave it yet. Hence my 
use of iptables to classify packets, which I think is much easier anyways.


I was simply asking whether I would classify the packets in PREROUTING, 
OUTPUT, or POSTROUTING. I assume OUTPUT or POSTROUTING will let me 
achieve my goals with respect to egress on eth1. Guess it's time to 
enter my dark age of tc+iptables. :P


Sorry if I confused you with my question. It's the only way I know how 
to traffic shape (so far).


Vadtec


Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-05 Thread Vadtec
After another two days of trying to get this to work like I think it 
should, I've still hit a brick wall.


As I am at a total loss as to what's going on, I am providing the shell 
script that I use to set up my tc rules.

You will notice a big section that is commented out. Those are rules I 
have been using that (for all intents and purposes) work like I expect.

I adapted my section based on the example I found at
http://lartc.org/howto/lartc.cookbook.ultimate-tc.html section 15.8.3

#!/bin/sh

IFext='eth0'
IFint='eth1'
rc_done="  done"
rc_failed="  failed"

MAX_RATE='360kbit'
MAX_RATEA='360'
INGRESS_RATE='1400kbit'

return=$rc_done

TC='/sbin/tc'

tc_reset ()
{
   # Reset everything to a known state (cleared)
   $TC qdisc del dev $IFext root 2> /dev/null > /dev/null
}

tc_status ()
{
   echo "[qdisc - $IFext]"
   $TC -s qdisc show dev $IFext
   echo ""
   echo
   echo "[class - $IFext]"
   $TC -s class show dev $IFext
}

tc_showfilter ()
{
   echo "[filter - $IFext]"
   $TC -s filter show dev $IFext
}

case $1 in

   start)
   echo -n "Starting traffic shaping"
   tc_reset
#U320="$TC filter add dev $IFext protocol ip parent 1:0 prio 0 u32"

   $TC qdisc del dev eth0 root
   $TC qdisc del dev eth0 ingress

   # uplink
   # dev eth0
   $TC qdisc add dev $IFext root handle 1: htb default 20
   $TC class add dev $IFext parent 1: classid 1:1 htb rate $MAX_RATE 
burst 6k
   $TC class add dev $IFext parent 1:1 classid 1:10 htb rate 240kbit 
ceil $MAX_RATE burst 6k prio 1
   $TC class add dev $IFext parent 1:1 classid 1:20 htb rate 200kbit 
ceil $[9*$MAX_RATEA/10]kbit burst 6k prio 2

   $TC qdisc add dev $IFext parent 1:10 handle 10: sfq perturb 5
   $TC qdisc add dev $IFext parent 1:20 handle 20: sfq perturb 5

   $TC filter add dev $IFext parent 1:0 protocol ip prio 10 u32 match 
ip tos 0x10 0xff flowid 1:10
   $TC filter add dev $IFext parent 1:0 protocol ip u32 match ip 
protocol 1 0xff flowid 1:10
   $TC filter add dev $IFext parent 1: protocol ip prio 10 u32 match ip 
protocol 6 0xff match u8 0x05 0x0f at 0 match u16 0x0000 0xffc0 at 2 
match u8 0x10 0xff at 3 flowid 1:10


   # downlink
   # dev eth0
   tc qdisc add dev $IFext handle ffff: ingress

   $TC filter add dev $IFext parent ffff: protocol ip prio 50 u32 match 
ip src 0.0.0.0/0 police rate $INGRESS_RATE burst 10k drop flowid :1


   #
   # dev eth0 - creating qdiscs & classes
   #
   #$TC qdisc add dev $IFext root handle 1: htb default 90
   #$TC class add dev $IFext parent 1: classid 1:1 htb rate $MAX_RATE
   #$TC class add dev $IFext parent 1:1 classid 1:10 htb rate 240kbit 
ceil $MAX_RATE prio 0
   #$TC class add dev $IFext parent 1:1 classid 1:20 htb rate 192kbit 
ceil $MAX_RATE prio 1
   #$TC class add dev $IFext parent 1:1 classid 1:30 htb rate 80kbit 
ceil $MAX_RATE prio 2
   #$TC class add dev $IFext parent 1:1 classid 1:40 htb rate 64kbit 
ceil $MAX_RATE prio 3
   #$TC class add dev $IFext parent 1:1 classid 1:50 htb rate 32kbit 
ceil $MAX_RATE prio 4
   #$TC class add dev $IFext parent 1:1 classid 1:60 htb rate 20kbit 
ceil $MAX_RATE prio 5
   #$TC class add dev $IFext parent 1:1 classid 1:70 htb rate 16kbit 
ceil $MAX_RATE prio 6
   #$TC class add dev $IFext parent 1:1 classid 1:80 htb rate 8kbit 
ceil $MAX_RATE prio 7
   #$TC class add dev $IFext parent 1:1 classid 1:90 htb rate 2kbit 
ceil $MAX_RATE prio 8
   #$TC class add dev $IFext parent 1:1 classid 1:100 htb rate 2kbit 
ceil 20kbit prio 9

   #$TC qdisc add dev $IFext parent 1:10 handle 10: sfq perturb 10
   #$TC qdisc add dev $IFext parent 1:20 handle 20: sfq perturb 10
   #$TC qdisc add dev $IFext parent 1:30 handle 30: sfq perturb 10
   #$TC qdisc add dev $IFext parent 1:40 handle 40: sfq perturb 10
   #$TC qdisc add dev $IFext parent 1:50 handle 50: sfq perturb 10
   #$TC qdisc add dev $IFext parent 1:60 handle 60: sfq perturb 10
   #$TC qdisc add dev $IFext parent 1:70 handle 70: sfq perturb 10
   #$TC qdisc add dev $IFext parent 1:80 handle 80: sfq perturb 10
   #$TC qdisc add dev $IFext parent 1:90 handle 90: sfq perturb 10
   #$TC qdisc add dev $IFext parent 1:100 handle 100: sfq perturb 10
   tc_status
   ;;

stop)
echo -n "Stopping traffic shaper"
tc_reset || return=$rc_failed
echo -e "$return"
;;
  
   restart|reload)

   $0 stop
   $0 start || return=$rc_failed
   ;;
  
   stats|status)

   tc_status
   ;;

   filter)
   tc_showfilter
   ;;

   *)
   echo "Usage: $0 {start|stop|restart|stats|filter}"
   exit 1

esac
test "$return" = "$rc_done" || exit 1



Regardless of the values I use for any of the rates, I still have the 
same problem. I do not classify ANY traffic with iptables for this set 
of tc filters, so any traffic that is generated is solely shaped by tc 
in this case.


The problem I have is this: whenever I allow any torrent client to use 
above 20k of outgoing bandwidth, *everything* becomes laggy for some 
reason. Most notable is SSH. While I have no accurate way to time the 
lag, as near as I can tell it lags about 2 seconds. 

Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-05 Thread Martin A. Brown
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Greetings again Vadtec,

 : After another two days of trying to get this to work like I think 
 : it should, I've still hit a brick wall.

This can be frustrating, I know.

[ snip ]  (I admit, I didn't look at the rules very closely, but 
   fundamentally the rules themselves look good.)

You do use an ingress filter with a policer.  Since the policer 
applies equally to all inbound traffic flows, it doesn't really do 
you any good here.

 : Regardless of the values I use for any of the rates, I still have 
 : the same problem. I do not classify ANY traffic with iptables for 
 : this set of tc filters, so any traffic that is generated is 
 : solely shaped by tc in this case.
 : 
 : The problem I have is this, whenever I allow any torrent client 
 : to use above 20k of outgoing bandwidth, *everything* becomes 
 : laggy for some reason. Most notable is SSH. While I have no 
 : accurate way to time the lag, as near as I can tell it lags about 
 : 2 seconds. When I press a key on my keyboard it takes ~2 seconds 
 : to show up in the SSH client. Or, if I enter lets say ls and 
 : press enter, it takes roughly 2 seconds for the ls output to 
 : reach me, and while its being displayed, its choppy appearing (as 
 : in it comes in chunks rather than a nice stream).
 : 
 : I see absolutely no reason for this to be happening. Why should 
 : anything lag when I'm using more outgoing bandwidth? Why would 
 : more outgoing bandwidth cause a slow down on incoming bandwidth. 
 : Or, why would more outgoing bandwidth slow down the filters/tc? 
 : I've verified that this happens on more than just my modem. I've 
 : used two different routers and a cable modem. All suffer from the 
 : same symptoms.

You are neglecting one important consideration, and that is the 
download traffic.  You may well be shaping the upstream traffic, but 
the queues transmitting downstream are also full.

So, once again--I'd suggest that you consider what it takes for your 
ls keystrokes and then the output to make it back to you.

  * you press a key (l), your ssh client transmits a packet
  * this packet goes across your congested link (probably 
significantly delayed, because there's congestion here)
  * the packet arrives at the ultimate destination
  * the sshd on the remote side receives the packet, passes it to
the application layer (blah blah, bash, fork, exec, ls)
  * the sshd receives data from the application layer and transmits 
a packet
  * your traffic control structures take this inbound ssh packet and 
give it its dedicated slot (however you have built your queues)
  * packet returns quickly to your system

So, your shaping is great, but you aren't shaping all of the 
paths through which your application flow must pass.

 : For the record, my router PC is built as such:
 : CentOS 5 64bit
 : AMD Sempron 2800+ (64bit, 1.6Ghz)
 : 2GB of DDR
 : 2GB of swap
 : 
 : As you can see, this PC is more than capable of acting as my router.

And then some.  Very reasonable looking box.

Good luck,

- -Martin

- -- 
Martin A. Brown
http://linux-ip.net/
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (GNU/Linux)
Comment: pgf-0.72 (http://linux-ip.net/sw/pine-gpg-filter/)

iD8DBQFG32pfHEoZD1iZ+YcRAp8DAKDDb7eO6/cNZp+lLg8tyO07QffzuQCfcTA3
DnHkDHC/Ea09qNsuEBhi+vs=
=VBjg
-END PGP SIGNATURE-


Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-05 Thread Vadtec

Martin A. Brown wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Greetings again Vadtec,

 : After another two days of trying to get this to work like I think 
 : it should, I've still hit a brick wall.


This can be frustrating, I know.

[ snip ]  (I admit, I didn't look at the rules very closely, but 
   fundamentally the rules themselves look good.)


You do use an ingress filter with a policer.  Since the policer 
applies equally to all inbound traffic flows, it doesn't really do 
you any good here.


 : Regardless of the values I use for any of the rates, I still have 
 : the same problem. I do not classify ANY traffic with iptables for 
 : this set of tc filters, so any traffic that is generated is 
 : solely shaped by tc in this case.
 : 
 : The problem I have is this, whenever I allow any torrent client 
 : to use above 20k of outgoing bandwidth, *everything* becomes 
 : laggy for some reason. Most notable is SSH. While I have no 
 : accurate way to time the lag, as near as I can tell it lags about 
 : 2 seconds. When I press a key on my keyboard it takes ~2 seconds 
 : to show up in the SSH client. Or, if I enter lets say ls and 
 : press enter, it takes roughly 2 seconds for the ls output to 
 : reach me, and while its being displayed, its choppy appearing (as 
 : in it comes in chunks rather than a nice stream).
 : 
 : I see absolutely no reason for this to be happening. Why should 
 : anything lag when I'm using more outgoing bandwidth? Why would 
 : more outgoing bandwidth cause a slow down on incoming bandwidth. 
 : Or, why would more outgoing bandwidth slow down the filters/tc? 
 : I've verified that this happens on more than just my modem. I've 
 : used two different routers and a cable modem. All suffer from the 
 : same symptoms.


You are neglecting one important consideration, and that is the 
download traffic.  You may well be shaping the upstream traffic, but 
the queues transmitting downstream are also full.


So, once again--I'd suggest that you consider what it takes for your 
ls keystrokes and then the output to make it back to you.


  * you press a key (l), your ssh client transmits a packet
  * this packet goes across your congested link (probably 
significantly delayed, because there's congestion here)

  * the packet arrives at the ultimate destination
  * the sshd on the remote side receives the packet, passes it to
the application layer (blah blah, bash, fork, exec, ls)
  * the sshd receives data from the application layer and transmits 
a packet
  * your traffic control structures take this inbound ssh packet and 
give it its dedicated slot (however you have built your queues)

  * packet returns quickly to your system

So, your shaping is great, but you aren't shaping all of the 
paths through which your application flow must pass.
  
Ok, so answer me this: Does tc use the same general queue for egress and 
ingress traffic?
Or is it a matter of tc just using a default ingress filter that is 
filling up for some reason?


Either way, it still doesn't explain why everything starts to lag as 
soon as one application's outgoing bandwidth climbs above 20.5k (I tested 
to see where the break was). If the ingress queue works just fine at 20k, 
why does it not work just fine at 21k? That's my central issue. If I can 
figure that out, I can most likely filter differently to ensure proper 
performance is maintained.

 : For the record, my router PC is built as such:
 : CentOS 5 64bit
 : AMD Sempron 2800+ (64bit, 1.6Ghz)
 : 2GB of DDR
 : 2GB of swap
 : 
 : As you can see, this PC is more than capable of acting as my router.


And then some.  Very reasonable looking box.

Good luck,

- -Martin

- -- 
Martin A. Brown

http://linux-ip.net/
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (GNU/Linux)
Comment: pgf-0.72 (http://linux-ip.net/sw/pine-gpg-filter/)

iD8DBQFG32pfHEoZD1iZ+YcRAp8DAKDDb7eO6/cNZp+lLg8tyO07QffzuQCfcTA3
DnHkDHC/Ea09qNsuEBhi+vs=
=VBjg
-END PGP SIGNATURE-

  


Thanks for the input Martin. I really do appreciate the time you have 
taken with trying to help me.


In an effort to make this work better, I am currently working on setting 
up both egress and ingress qdiscs. (Your e-mail prompted me to take a 
break. :P) For the sake of simplicity, I am going to set up my ingress 
qdiscs (practically) the same way as I do my egress filters (even though 
they may not get used, I will still set them up for the time being just 
in case I need them). Then, using fw marking, I'm going to filter using 
iptables on both egress and ingress (though to be honest I really have 
no idea how I'm going to do the ingress filtering and get it back to the 
right place, though I assume it will take care of itself, just like 
egress).
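
The basic fw-mark pattern is only two pieces (a sketch only: the mark value 
100 and the ssh example are arbitrary, and an HTB tree with a class 1:10 is 
assumed to already exist on eth1):

  # mark inbound ssh replies as they are forwarded toward the LAN
  iptables -t mangle -A POSTROUTING -o eth1 -p tcp --sport 22 -j MARK --set-mark 100

  # a tc fw filter matches the mark and puts the packet in class 1:10 on eth1
  tc filter add dev eth1 parent 1:0 protocol ip prio 1 handle 100 fw flowid 1:10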


Maybe once I do this I will start to see improvements. You can expect 
that in a few days I will post again when
I have exhausted all my (misguided or otherwise) ideas about how to make 
this work.


Thanks again,



Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-04 Thread Vadtec

Martin A. Brown wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Greetings again,

 : So you are saying I have to not only do traffic shaping, but also 
 : traffic policing on my internal device? Or do I have to do 
 : traffic shaping on both devices and no traffic policing? In other 
 : words, how much traffic shaping/policing do I need to put into 
 : effect, and on which interfaces.


Be careful.  Shaping and policing are two very different things.  I 
probably could have chosen a word other than policy to make this 
clear.  I'll restate this.


If you wish to shape upload traffic, then you must put your traffic 
control structures on the device closest to the Internet.  If 
you wish to shape download traffic, then you must put your traffic 
control structures on the device closest to the internal network(s). 
( If you would like to see how you can break this basic rule, see 
some advanced traffic control structures called imq and ifb [*]. )
  
No need for advanced traffic controls for me. All I want to do is 
control how much bandwidth gets out to the internet. Internally it 
doesn't matter, as it's all CAT5, so there is plenty of data pipe.
 : I have no idea what that means. How do I vary the depth of the 
 : FIFO to help control latency?


OK, in order to vary the exact depth of the fifo to suit your fancy, 
you'd need to use terminal FIFOs instead of using terminal SFQs:


  tc qdisc add dev $INTERFACE parent 10:1 handle 100:0 pfifo limit 10

Assuming your HTB tree has a leaf class of 10:1, into which you have 
classified some of your traffic, you now have a very short queue 
there.  If a burst of traffic involving more than ten packets 
arrives, the traffic will tail drop.  In heavily congested 
situations, this can be useful, although this still allows for a 
single dominating UDP flow, so I'd probably prefer the SFQ solution, 
and maybe I shouldn't have brought up this little trick.
  

I think I'll stick to htb...
 : I ran both watch -n 1 tc -s class show dev eth0 and watch -n 1 tc 
 : -s qdisc show eth0. (When I ran class show, i did not have enough 
 : room to see classes 80 and 90. When I ran qdisc show, I was able 
 : to see all the classes.) During my runs of tc in this manner, I 
 : saw zero traffic going to class 100 when running, starting, or 
 : stopping bit torrent.


If you expect the torrential traffic to appear in class 100, 
and it does not, then you have done something wrong in your 
classifier.  Look there.
  

By classifier I think you mean:
iptables -t mangle -A POSTROUTING -o ${IFext} -p tcp --dport 1:1 
-j CLASSIFY --set-class 1:100


And having looked at that, I see part of my problem. --dport should be 
--sport as I have no way to control what port
the other client wishes to use, but I do have control over what ports I 
use to send my data stream.
 : Almost all the traffic was going to class 10 and 90 (default) 
 : with the exception of my ICMP and UDP traffic which was going to 
 : class 70 and class 60 which I have set aside for IRC traffic. 
 : Class 100 saw absolutely zero traffic.
 : 
 : Is this a case where the default class (90) is getting all the 
 : traffic because it can handle it as my LAN has very little 
 : other traffic most of the time to deal with, so there is no 
 : need to throttle it back? If so, how can I force a particular 
 : class to be used regardless of the default, so that I can control 
 : individual apps by them selves?


Try looking at your classifiers (tc filter statements) to 
determine where the problem is.  If you can't figure it out, then 
send the list your classifiers and class structure (tc commands).
  
I ran tc filter on the command line, but received no output in return. 
I read the man page and it leads me to believe
that it's not meant for viewing the filters. I also did some packet 
sniffing on my router PC. I am not 100% sure why yet,
but, when I would filter out port 1-10004, wireshark would return 
packets that had source ports other than the
range I was looking for. While I cannot be sure as of yet, I think this 
is a result of the NAT process and the router box
is using whatever port it has available while allowing my bit torrent 
client to send on the proper ports. (I am still learning
about NAT, so I'm still confused about some things. To be honest, I 
would like to be able to NAT not only IPs, but
ports as well so that I do not have to reconfigure things like SSH to 
run on non-standard ports. But that's for another 
mailing list. :P) One thing I did notice was, while I did see traffic in 
class 100 for the first time, my torrent client still showed
outgoing bandwidth of more than 20kbit. Is this simply a function of the 
router actually limiting the traffic and the torrent
client simply not knowing? Or (and I assume this is incorrect thinking) 
should the torrent client visibly indicate to me that

it can only send at X rate because it's limited?

To make this much simpler, I will paste my tc rules 

Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-04 Thread Martin A. Brown
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Good morning,

 : By classifier I think you mean:
 : iptables -t mangle -A POSTROUTING -o ${IFext} -p tcp --dport 1:1 -j
 : CLASSIFY --set-class 1:100

Exactly.

 : And having looked at that, I see part of my problem. --dport should be
 : --sport as I have no way to control what port
 : the other client wishes to use, but I do have control over what ports I use
 : to send my data stream.

Yes.  We think about services (at TCP) as being on a standard port, 
but iptables operates at L3 (even though it can examine higher 
layers).  So, if the packet in question is a return packet from an 
HTTP server to a client, then as far as iptables is concerned, the 
source port is tcp/80.

 : I ran tc filter on the command line, but received no output in 
 : return. I read the man page and it leads me to believe that it's 
 : not meant for viewing the filters.

Depends, but yes, the tc filter output is not necessarily 
attractive.  You can classify with tc filter instead of iptables, 
but if you reach your goal, it doesn't matter which mechanism you 
use to classify your packets.
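
For what it's worth, a tc-only equivalent of an iptables classification rule 
is a single u32 filter (sketch; the source port 6881 and class 1:100 are 
placeholders for whatever you are actually matching):

  tc filter add dev eth0 parent 1:0 protocol ip prio 20 u32 \
      match ip sport 6881 0xffff flowid 1:100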

 : One thing I did notice was, while I did see traffic in class 100 
 : for the first time, my torrent client still showed outgoing 
 : bandwidth of more than 20kbit.

Well, a typical torrent client will use a number of connections.  
Perhaps some of the connections are classified correctly and some 
are not?

 : Is this simply a function of the router actually limiting the 
 : traffic and the torrent client simply not knowing? Or (and I 
 : assume this is incorrect thinking) should the torrent client 
 : visibly indicate to me that it can only send at X rate because 
 : its limited?

I would expect the torrent client to be reporting the actual 
speed(s), so I would expect it to be reporting 20kbit rates.

 : To make this much simpler, I will paste my tc rules and iptables 
 : rules (which classify my traffic) at the bottom of this e-mail. I 
 : hope you can find something (related specifically to bit torrent) 
 : that will allow me to limit torrent traffic without the need to 
 : limit each client by hand.

You are getting there.

 : So I am at a total loss as to why (outgoing) SSH traffic would 
 : become so slow, because it has access to more bandwidth than 
 : torrent (at least in my thinking).

Are you sure that it's outgoing SSH traffic that is slow?  Consider 
the following scenario:

  * your outgoing queues are configured completely correctly
  * outgoing ssh IP packet gets into correct queue
  * inbound return IP packet from ssh server is delayed inbound
because you are not shaping downstream traffic

So, before you conclude that your shaping isn't working, I think 
you'll need to apply some sort of mechanisms also on your internal 
interface.

Snipped iptables classification.  Nothing looks fishy to me there 
(though I haven't used the iptables CLASSIFY target).

 : $IFext is of course eth0 (link to the modem), $QUANTUM is 1490 
 : (due to, if I assume correctly, my MTU being 1492, in the modem), 
 : $MAX_RATE is 360kbit (360kbps conventional talk)

I wouldn't set quantum unless you need to do so.  HTB will calculate 
this for you.  If you do need to set the quantum, I'd recommend 
setting it just a bit larger than the MTU...so 1500 or 1536 in your 
pppoe situation.
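
(If you do end up setting it, quantum is just one more parameter on the 
class, e.g., reusing the rates from the script earlier in this thread:

  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 240kbit ceil 360kbit quantum 1500

but leaving it out and letting HTB calculate it is normally fine.)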

Good luck,

- -Martin

- -- 
Martin A. Brown
http://linux-ip.net/
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (GNU/Linux)
Comment: pgf-0.72 (http://linux-ip.net/sw/pine-gpg-filter/)

iD8DBQFG3Vd1HEoZD1iZ+YcRAv0pAKDZ0qtpxyOkGJJQ7H5rWtFi3HlM2gCgrugd
WyARMeCjVI9TD/1CcTTDAsw=
=RKrP
-END PGP SIGNATURE-


Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-04 Thread Vadtec

Martin A. Brown wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Good morning,

 : I ran tc filter on the command line, but received no output in 
 : return. I read the man page and it leads me to believe that it's 
 : not meant for viewing the filters.


Depends, but yes, the tc filter output is not necessarily 
attractive.  You can classify with tc filter instead of iptables, 
but if you reach your goal, it doesn't matter which mechanism you 
use to classify your packets.
  
Can you provide a simple example of how to filter with tc rather than 
iptables? Just enough of an idea
for me to grasp it on my own. I think it would be better to use tc to do 
the filtering rather than iptables,

as that's what tc is meant to do.
 : One thing I did notice was, while I did see traffic in class 100 
 : for the first time, my torrent client still showed outgoing 
 : bandwidth of more than 20kbit.


Well, a typical torrent client will use a number of connections.  
Perhaps some of the connections are classified correctly and some 
are not?
  
One thing I had failed to take into account was the possibility that bit 
torrent *may* be using some UDP ports. In an attempt to test this theory, 
I added the udp filters to iptables and watched the data stream. No 
change. -_- Bit torrent is still showing outgoing speeds of above 20k. 
So, I then limited the filter to the LAN IP for my router box and forced 
my torrent client to use that IP. Still above 20k. Long story short, I 
tried about 5 different things to get it to work properly. No luck. I 
think part of the problem is that my torrent client doesn't seem to honor 
the port ranges I put into effect, which would definitely allow traffic 
to get past it.
 : Is this simply a function of the router actually limiting the 
 : traffic and the torrent client simply not knowing? Or (and I 
 : assume this is incorrect thinking) should the torrent client 
 : visibly indicate to me that it can only send at X rate because 
 : its limited?


I would expect the torrent client to be reporting the actual 
speed(s), so I would expect it to be reporting 20kbit rates.
  

As would I.
 : To make this much simpler, I will paste my tc rules and iptables 
 : rules (which classify my traffic) at the bottom of this e-mail. I 
 : hope you can find something (related specifically to bit torrent) 
 : that will allow me to limit torrent traffic without the need to 
 : limit each client by hand.


You are getting there.

 : So I am at a total loss as to why (outgoing) SSH traffic would 
 : become so slow, because it has access to more bandwidth than 
 : torrent (at least in my thinking).


Are you sure that it's outgoing SSH traffic that is slow?  Consider 
the following scenario:


  * your outgoing queues are configured completely correctly
  * outgoing ssh IP packet gets into correct queue
  * inbound return IP packet from ssh server is delayed inbound
because you are not shaping downstream traffic

So, before you conclude that your shaping isn't working, I think 
you'll need to apply some sort of mechanisms also on your internal 
interface.
  
Bah, I need to figure out outgoing before I mess with incoming. I'm 
liable to cut myself off from the internet. :P
Snipped iptables classification.  Nothing looks fishy to me there 
(though I haven't used the iptables CLASSIFY target).


 : $IFext is of course eth0 (link to the modem), $QUANTUM is 1490 
 : (due to, if I assume correctly, my MTU being 1492, in the modem), 
 : $MAX_RATE is 360kbit (360kbps conventional talk)


I wouldn't set quantum unless you need to do so.  HTB will calculate 
this for you.  If you do need to set the quantum, I'd recommend 
setting it just a bit larger than the MTU...so 1500 or 1536 in your 
pppoe situation.
  
Ok, quantum has always been 1512 for me, so I assume tc is taking care 
of it.

Good luck,

- -Martin

- -- 
Martin A. Brown

http://linux-ip.net/
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (GNU/Linux)
Comment: pgf-0.72 (http://linux-ip.net/sw/pine-gpg-filter/)

iD8DBQFG3Vd1HEoZD1iZ+YcRAv0pAKDZ0qtpxyOkGJJQ7H5rWtFi3HlM2gCgrugd
WyARMeCjVI9TD/1CcTTDAsw=
=RKrP
-END PGP SIGNATURE-

  
So, I am at a total loss as to why this isn't working. Well, not a total 
loss, but enough of a total loss as to disable some of the shaping I had 
been using that was giving me fits. I think it's much better for me to 
let the default class handle everything and go from there for now.

As I said above, can you provide a simple example of how to filter with 
tc? I think filtering in tc will be both more appropriate and less 
hassle.

I really appreciate you taking the time to help me with this. I am by no 
means a noob to networking,
but when it comes to traffic shaping, I might as well be. Though, on a 
positive note, at least I've been able
to properly shape some traffic, like IRC. :) (I'm also going to contact 
the developers of my torrent client
and ask them about port range limiting. And why it doesn't 

Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-03 Thread Martin A. Brown
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Vadtec,

I think you may be running into two of the most common problems facing 
novices working with traffic control, so I hope you don't mind my 
picking on you!

Problem #0
- --
You are applying your shaping on the outbound traffic (presuming 
that $IFext is your external interface).  Unless you also have 
shaping on your inbound traffic ($IFint?), then you are only 
applying shaping characteristics to the upload traffic.  This brings 
us back to two fundamental rules of traffic shaping:

  * For optimal results, your shaping device should be the 
bottleneck, so that it can act as the traffic flow valve.
  * You can only shape what you transmit.  If your edge device is 
performing shaping, then it should shape upload traffic by a 
policy applied to the external interface and it should shape 
download traffic by policy applied to the internal interface.

In short, add a similar set of HTB + SFQ queues to your internal 
interface, along with the appropriate classifiers and try again.
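
A rough sketch of what that could look like (the rates, handles and the 
ssh classifier below are placeholders, simply mirroring the structure 
already used on the external interface):

  # egress structures on the internal (LAN-facing) interface shape download traffic
  tc qdisc add dev eth1 root handle 2: htb default 20
  tc class add dev eth1 parent 2: classid 2:1 htb rate 1400kbit burst 6k
  tc class add dev eth1 parent 2:1 classid 2:10 htb rate 1000kbit ceil 1400kbit prio 1
  tc class add dev eth1 parent 2:1 classid 2:20 htb rate 400kbit ceil 1400kbit prio 2
  tc qdisc add dev eth1 parent 2:10 handle 210: sfq perturb 10
  tc qdisc add dev eth1 parent 2:20 handle 220: sfq perturb 10

  # example classifier: keep inbound interactive traffic (ssh replies) in the fast class
  tc filter add dev eth1 parent 2:0 protocol ip prio 10 u32 match ip sport 22 0xffff flowid 2:10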

 : While I understand how/why TC enforces minimum bandwidth for a 
 : given class, why is it that for class 100 TC is not enforcing the 
 : cap of 20kbps to traffic that it is classified at? Is there 
 : something else I need to do to make TC also enforce arbitrary 
 : maximum limits for a given classification?
 :
 : I am on DSL internet with rates 1.5Mbps/384kbps. 

That 1.5Mbps (conventional networking terminology and units) is 
written as 1.5Mbit in terms used by tc.

Problem #1
- --
I think you may be making an error in your units.  This is one of 
the most frequent problems when people start using tc.  Since tc 
sprang from the primordial soup, the following units are used:

  bps = bytes per second
  bit = bits per second

The unfortunate problem with this marking for units is that in many 
other places in networking we say that bits per second is bps.  This 
is not true with tc.  So, if I look at your rate specifications 
below, they look off by a factor of 8.  Please try altering all 
instances of kbps to kbit and try your script again. See also 
these URLs [0] [1] [2].

 : I do not make complete use of my pipe just in case of a massive 
 : burst. I know I will probably not burst such a massive burst, but 
 : its better to be safe than sorry.

This is wise.

 : Class 90 is the default. Class 100 is a special class, and what 
 : my question specifically relates to. Class 100 is for bit 
 : torrent. I do not like the other people in my house using very 
 : much bandwidth for torrenting as it has a tendency to slow things 
 : down to greatly.

If you place FIFOs in any of your HTB leaf classes, you can vary the 
depth of the FIFO queue to help control latency, in addition to that 
class's total throughput.  This is a cheap and dirty way to 
accomplish this task.

 : The problem I have is this: when I disable a given torrent 
 : clients upload limits, the bandwidth climbs to above the 20kbps 
 : limit I have set for it. When I classify the traffic in iptables, 
 : i put it into class 100, so it shouldn't getting put into the 
 : default class.

While you are starting and stopping your torrent client, you should 
also take a look at the class statistics:

  watch -n 1 tc -s class show dev $INTERFACE_NAME

This will allow you to see which class is being used to carry that 
traffic.

Good luck,

- -Martin

 [0] http://www.docum.org/docum.org/faq/cache/74.html
 [1] http://mailman.ds9a.nl/pipermail/lartc/2003q4/010826.html
 [2] http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm

- -- 
Martin A. Brown
http://linux-ip.net/
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (GNU/Linux)
Comment: pgf-0.72 (http://linux-ip.net/sw/pine-gpg-filter/)

iD8DBQFG3FYGHEoZD1iZ+YcRAr3NAKC2Iq1mtkEwd3edzU8mY6CQx/PuKgCggE0F
hcIyU0L25TYNwMkXGcjusWw=
=ifyk
-END PGP SIGNATURE-


Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-03 Thread Vadtec

Martin A. Brown wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Vadtec,

I think you may be running into two of the most common problems facing 
novices working with traffic control, so I hope you don't mind my 
picking on you!
  

Not at all. :)

Problem #0
- --
You are applying your shaping on the outbound traffic (presuming 
that $IFext is your external interface).  Unless you also have 
shaping on your inbound traffic ($IFint?), then you are only 
applying shaping characteristics to the upload traffic.  This brings 
us back to two fundamental rules of traffic shaping:
  
$IFext is eth0, and is my link to my DSL modem. $IFint would be eth1, 
which is my interface to my LAN (though I am not shaping traffic on it 
in any way). Both of these devices plug into my 24-port switch, which 
the DSL modem is hooked into, as well as all the other computers in 
the house.

So, just for clarity's sake, here are the port assignments on the switch:
Port 1 - DSL Modem uplink
Port 2 - $IFext (eth0) on my router PC
Port 3 - $IFint (eth1) on my router PC
Port 4 - My laptop
Port 5 - My brothers PC
Port 6/25 - Unused currently, but used at random as needed.
  * For optimal results, your shaping device should be the 
bottleneck, so that it can act as the traffic flow valve.
  * You can only shape what you transmit.  If your edge device is 
performing shaping, then it should shape upload traffic by a 
policy applied to the external interface and it should shape 
download traffic by policy applied to the internal interface.


In short, add a similar set of HTB + SFQ queues to your internal 
interface, along with the appropriate classifiers and try again.
  
So you are saying I have to not only do traffic shaping, but also 
traffic policing on my internal device?
Or do I have to do traffic shaping on both devices and no traffic 
policing? In other words, how much
traffic shaping/policing do I need to put into effect, and on which 
interfaces?
 : While I understand how/why TC enforces minimum bandwidth for a 
 : given class, why is it that for class 100 TC is not enforcing the 
 : cap of 20kbps to traffic that it is classified at? Is there 
 : something else I need to do to make TC also enforce arbitrary 
 : maximum limits for a given classification?

 :
 : I am on DSL internet with rates 1.5Mbps/384kbps. 

That 1.5Mbps (conventional networking terminology and units) is 
written as 1.5Mbit in terms used by tc.


Problem #1
- --
I think you may be making an error in your units.  This is one of 
the most frequent problems when people start using tc.  Since tc 
sprang from the primordial soup, the following units are used:


  bps = bytes per second
  bit = bits per second

The unfortunate problem with this marking for units is that in many 
other places in networking we say that bits per second is bps.  This 
is not true with tc.  So, if I look at your rate specifications 
below, they look off by a factor of 8.  Please try altering all 
instances of kbps to kbit and try your script again. See also 
these URLs [0] [1] [2].
  
So, what you are saying is, it's just a matter of different naming. In 
essence, 368kbps (conventional) is the same as 368kbit (tc), right?
 : I do not make complete use of my pipe just in case of a massive 
 : burst. I know I will probably not burst such a massive burst, but 
 : its better to be safe than sorry.


This is wise.
  

:)
 : Class 90 is the default. Class 100 is a special class, and what 
 : my question specifically relates to. Class 100 is for bit 
 : torrent. I do not like the other people in my house using very 
 : much bandwidth for torrenting as it has a tendency to slow things 
 : down to greatly.


If you place FIFOs in any of your HTB leaf classes, you can vary the 
depth of the FIFO queue to help control latency, in addition to that 
class's total throughput.  This is a cheap and dirty way to 
accomplish this task.
  
I have no idea what that means. How do I vary the depth of the FIFO to 
help control latency?
 : The problem I have is this: when I disable a given torrent 
 : clients upload limits, the bandwidth climbs to above the 20kbps 
 : limit I have set for it. When I classify the traffic in iptables, 
 : i put it into class 100, so it shouldn't getting put into the 
 : default class.


While you are starting and stopping your torrent client, you should 
also take a look at the class statistics:


  watch -n 1 tc -s class show dev $INTERFACE_NAME

This will allow you to see which class is being used to carry that 
traffic.
  
I ran both watch -n 1 tc -s class show dev eth0 and watch -n 1 tc -s 
qdisc show eth0.
(When I ran class show, I did not have enough room to see classes 80 and 
90. When I ran qdisc
show, I was able to see all the classes.) During my runs of tc in this 
manner, I saw zero traffic going
to class 100 when running, starting, or stopping bit torrent. Almost all 
the traffic was going to class 10
and 90 (default) with the exception of my 

Re: [LARTC] Question about how TC enforces bandwidth limiting

2007-09-03 Thread Martin A. Brown
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Greetings again,

 : $IFext is eth0, and is my link to my DSL Modem. $IFint would be 
 : eth1, which is my interface to my LAN (though I am not shaping 
 : traffic on it in any way).

Yes, exactly.  You grasped this below, too.  Recall that you can 
only shape what you transmit?  Think about why.  If you already have 
a packet, you can delay the transmission.  This is the fundamental 
behaviour of all non-work conserving queues.  To introduce a delay 
to a packet involves work.

 : So you are saying I have to not only do traffic shaping, but also 
 : traffic policing on my internal device? Or do I have to do 
 : traffic shaping on both devices and no traffic policing? In other 
 : words, how much traffic shaping/policing do I need to put into 
 : effect, and on which interfaces.

Be careful.  Shaping and policing are two very different things.  I 
probably could have chosen a word other than policy to make this 
clear.  I'll restate this.

If you wish to shape upload traffic, then you must put your traffic 
control structures on the device closest to the Internet.  If 
you wish to shape download traffic, then you must put your traffic 
control structures on the device closest to the internal network(s). 
( If you would like to see how you can break this basic rule, see 
some advanced traffic control structures called imq and ifb [*]. )

 : So, what you are saying is, its just a matter of different 
 : naming. In essence, 368kbps (conventional) is the same as 368kbit 
 : (tc), right?

Bingo.

 : I have no idea what that means. How do I vary the depth of the 
 : FIFO to help control latency?

OK, in order to vary the exact depth of the fifo to suit your fancy, 
you'd need to use terminal FIFOs instead of using terminal SFQs:

  tc qdisc add dev $INTERFACE parent 10:1 handle 100:0 pfifo limit 10

Assuming your HTB tree has a leaf class of 10:1, into which you have 
classified some of your traffic, you now have a very short queue 
there.  If a burst of traffic involving more than ten packets 
arrives, the traffic will tail drop.  In heavily congested 
situations, this can be useful, although this still allows for a 
single dominating UDP flow, so I'd probably prefer the SFQ solution, 
and maybe I shouldn't have brought up this little trick.

 : I ran both watch -n 1 tc -s class show dev eth0 and watch -n 1 tc 
 : -s qdisc show eth0. (When I ran class show, i did not have enough 
 : room to see classes 80 and 90. When I ran qdisc show, I was able 
 : to see all the classes.) During my runs of tc in this manner, I 
 : saw zero traffic going to class 100 when running, starting, or 
 : stopping bit torrent.

If you expect the torrential traffic to appear in class 100, 
and it does not, then you have done something wrong in your 
classifier.  Look there.

 : Almost all the traffic was going to class 10 and 90 (default) 
 : with the exception of my ICMP and UDP traffic which was going to 
 : class 70 and class 60 which I have set aside for IRC traffic. 
 : Class 100 saw absolutely zero traffic.
 : 
 : Is this a case where the default class (90) is getting all the 
 : traffic because it can handle it as my LAN has very little 
 : other traffic most of the time to deal with, so there is no 
 : need to throttle it back? If so, how can I force a particular 
 : class to be used regardless of the default, so that I can control 
 : individual apps by them selves?

Try looking at your classifiers (tc filter statements) to 
determine where the problem is.  If you can't figure it out, then 
send the list your classifiers and class structure (tc commands).

I'd skip playing with terminal FIFOs at first (if ever).  Try 
working on the following:

  * switch the units to kbit
  * fix the classifiers and verify that you are seeing all traffic
in the class you want to see the traffic in
  * turn the torrent client up to insane and watch how your IRC or 
ssh latency is very low

Good luck,

- -Martin

 [*] IMQ, intermediate queuing device
 http://www.linuximq.net/
 IFB, intermediate functional block
 http://linux-net.osdl.org/index.php/IFB

- -- 
Martin A. Brown
http://linux-ip.net/
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.2 (GNU/Linux)
Comment: pgf-0.72 (http://linux-ip.net/sw/pine-gpg-filter/)

iD4DBQFG3L5sHEoZD1iZ+YcRAso6AJjnbMjcso8t4+GugFC6eHQOyqJQAJ9kpWbZ
sOS369AbZkPQ9rg3rhiawg==
=gvDh
-END PGP SIGNATURE-