Re: [LARTC] TC (HTB) doesn't work well when network is congested?

2007-10-31 Thread William Xu

Thank you, Peter,

After changing CONFIG_HZ to 1000, TC works much better. I still need to limit the total bandwidth to 110MB/s (about 90% of 125MB/s), but that's normal, I guess.
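
For the record, the only change is the total rate; the adjusted classes look roughly like this (still 40mbps for client1; the 70mbps on 1:30 is just 110 minus 40, split it however you like):

$TC class add dev ifb0 parent 1:0 classid 1:1 htb rate 110mbps mtu 9000
$TC class add dev ifb0 parent 1:1 classid 1:10 htb rate 40mbps ceil 110mbps mtu 9000 prio 0
$TC class add dev ifb0 parent 1:1 classid 1:30 htb rate 70mbps ceil 110mbps mtu 9000 prio 1
# same three classes, with the same rates, on dev eth2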

Thanks again,
william
Peter Rabbitson wrote:


William Xu wrote:


[snip]

The setup looks good. There are only two things that come to mind: you have problems with TSO, or your clock is too slow. For the first, use ethtool -K to disable all 6 offloading parameters. For the second, check the value of CONFIG_HZ in the current kernel config (/boot/config-your-kernel-version); if it is less than 1000, this might be your problem as well. If none of those help, I am out of ideas; hopefully someone else can help you.
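
Something along these lines should cover both checks (eth2 assumed; your NIC may not support every offload, in which case ethtool will say so):

# disable all offloading on the interface
ethtool -K eth2 rx off tx off sg off tso off ufo off gso off
# verify what is currently enabled
ethtool -k eth2
# check the timer frequency of the running kernel
grep CONFIG_HZ /boot/config-$(uname -r)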


Peter





Re: [LARTC] TC (HTB) doesn't work well when network is congested?

2007-10-26 Thread William Xu

Hi Peter, thanks for looking at this.

Here is the information I got after running the tests. During the test, client1 got 7MB/s instead of 40MB/s for SEND, and 40MB/s for RECV.

Thanks,
william

# ip link show

...
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc htb qlen 1000
    link/ether 00:e0:ed:04:9f:a2 brd ff:ff:ff:ff:ff:ff
...
12: ifb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc htb qlen 32
    link/ether f2:f2:77:f9:cf:30 brd ff:ff:ff:ff:ff:ff

# tc qdisc show
qdisc pfifo_fast 0: dev eth0 root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc ingress ffff: dev eth2
qdisc htb 1: dev eth2 r2q 100 default 30 direct_packets_stat 0
qdisc pfifo_fast 0: dev eth3 root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth4 root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth5 root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth6 root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth7 root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth8 root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc htb 1: dev ifb0 r2q 100 default 30 direct_packets_stat 0

# tc -s -d class show dev ifb0
class htb 1:10 parent 1:1 prio 0 quantum 20 rate 32Kbit ceil 96Kbit burst 169000b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 0
Sent 2366125838 bytes 928639 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
lended: 925807 borrowed: 2832 giants: 0
tokens: 4224 ctokens: 4075

class htb 1:1 root rate 96Kbit ceil 96Kbit burst 489000b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 7
Sent 36927678674 bytes 6132723 pkt (dropped 0, overlimits 0 requeues 0)
rate 2672bit 1pps backlog 0b 0p requeues 0
lended: 1131873 borrowed: 0 giants: 0
tokens: 4074 ctokens: 4074

class htb 1:30 parent 1:1 prio 1 quantum 20 rate 64Kbit ceil 96Kbit burst 328960b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 0
Sent 34561552836 bytes 5204084 pkt (dropped 44, overlimits 0 requeues 0)
rate 528bit 0pps backlog 0b 0p requeues 0
lended: 4075043 borrowed: 1129041 giants: 0
tokens: 4108 ctokens: 4074

# tc -s -d class show dev eth2
class htb 1:10 parent 1:1 prio 0 quantum 20 rate 32Kbit ceil 96Kbit burst 169000b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 0
Sent 12092794712 bytes 1544210 pkt (dropped 0, overlimits 0 requeues 0)
rate 56bit 0pps backlog 0b 0p requeues 0
lended: 1543687 borrowed: 523 giants: 0
tokens: 4224 ctokens: 4075

class htb 1:1 root rate 96Kbit ceil 96Kbit burst 489000b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 7
Sent 36872760531 bytes 7346321 pkt (dropped 0, overlimits 0 requeues 0)
rate 288bit 0pps backlog 0b 0p requeues 0
lended: 40477 borrowed: 0 giants: 0
tokens: 4073 ctokens: 4073

class htb 1:30 parent 1:1 prio 1 quantum 20 rate 64Kbit ceil 96Kbit burst 328960b/64 mpu 0b overhead 0b cburst 489000b/64 mpu 0b overhead 0b level 0
Sent 24779965819 bytes 5802111 pkt (dropped 0, overlimits 0 requeues 0)
rate 176bit 0pps backlog 0b 0p requeues 0
lended: 5762157 borrowed: 39954 giants: 0
tokens: 4109 ctokens: 4073


Peter Rabbitson wrote:


William Xu wrote:

So TC works well as long as the total bandwidth is below 90MB/s, which is about 70% of the wire speed. Is it possible that I can use the full bandwidth (122MB/s) in my script?




In order to troubleshoot further, more info is needed:

1) execute your script with 120MB/s as limit
2) perform a test transfer for several minutes
3) post back the output of the following commands:
ip link show
tc qdisc show
tc -s -d class show dev ifb0
tc -s -d class show dev eth2


Peter





[LARTC] TC (HTB) doesn't work well when network is congested?

2007-10-25 Thread William Xu

Hi,

I have a server and ten clients on a Gigabit network. The server has 125MB/s of network bandwidth. I want the server to reserve 40MB/s of bandwidth for client 1 (IP 192.168.5.141), with the rest of the bandwidth going to all other clients.

My script looks like this (I use IFB for the incoming traffic; note that tc's "mbps" unit means megabytes per second, so "rate 125mbps" below is the full 125MB/s):

#!/bin/bash

export TC=/sbin/tc

$TC qdisc add dev ifb0 root handle 1: htb default 30 r2q 100

$TC class add dev ifb0 parent 1:0 classid 1:1 htb rate 125mbps mtu 9000
$TC class add dev ifb0 parent 1:1 classid 1:10 htb rate 40mbps ceil 125mbps mtu 9000 prio 0
$TC class add dev ifb0 parent 1:1 classid 1:30 htb rate 85mbps ceil 125mbps mtu 9000 prio 1

$TC filter add dev ifb0 parent 1: protocol ip prio 1 u32 match ip src 192.168.5.141/32 flowid 1:10

$TC qdisc add dev eth2 ingress
$TC filter add dev eth2 parent ffff: protocol ip prio 1 u32 \
   match u32 0 0 flowid 1:1 \
   action mirred egress redirect dev ifb0

$TC qdisc add dev eth2 root handle 1: htb default 30 r2q 100

$TC class add dev eth2 parent 1: classid 1:1 htb rate 125mbps mtu 9000
$TC class add dev eth2 parent 1:1 classid 1:10 htb rate 40mbps ceil 125mbps mtu 9000 prio 0
$TC class add dev eth2 parent 1:1 classid 1:30 htb rate 85mbps ceil 125mbps mtu 9000 prio 1

$TC filter add dev eth2 parent 1: protocol ip prio 1 u32 match ip dst 192.168.5.141/32 classid 1:10
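
(Not shown above: ifb0 has to exist and be up before the script runs. I load it beforehand with roughly the following; numifbs is the ifb module parameter for the number of devices to create:)

modprobe ifb numifbs=1
ip link set dev ifb0 up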


I ran a test in which all 10 clients send/receive packets to/from the server simultaneously, but client 1 only got 20MB/s for sending and 38MB/s for receiving. If I limit the rate of both 1:1 classes to 60mbps instead of 125mbps, client 1 gets 39MB/s for sending and 40MB/s for receiving.

I am not sure what might cause this. Is it because TC doesn't work well when the network is congested? Or is my script not right?

Thanks a lot,
william





Re: [LARTC] TC (HTB) doesn't work well when network is congested?

2007-10-25 Thread William Xu

Thanks, Peter,

So what you mean is that network congestion caused my problem. I tested the bandwidth between client1 and the server using iperf; SEND and RECV are both 122MB/s. I tried different values for the total bandwidth, and I got the following numbers:

total bandwidth 120MB/s    client1 SEND: 10MB/s   RECV: 39MB/s
total bandwidth 110MB/s    client1 SEND: 16MB/s   RECV: 40MB/s
total bandwidth 100MB/s    client1 SEND: 30MB/s   RECV: 38MB/s
total bandwidth  90MB/s    client1 SEND: 39MB/s   RECV: 40MB/s
total bandwidth  80MB/s    client1 SEND: 39MB/s   RECV: 40MB/s
total bandwidth  70MB/s    client1 SEND: 40MB/s   RECV: 40MB/s
total bandwidth  60MB/s    client1 SEND: 40MB/s   RECV: 40MB/s

So TC works well as long as the total bandwidth is below 90MB/s, which is about 70% of the wire speed. Is it possible that I can use the full bandwidth (122MB/s) in my script?


william

Peter Rabbitson wrote:


William Xu wrote:


Hi,

I have a server and ten clients on a Gigabit network. The server has 125MB/s of network bandwidth. I want the server to reserve 40MB/s of bandwidth for client 1 (IP 192.168.5.141), with the rest of the bandwidth going to all other clients.

[snip]

I ran a test in which all 10 clients send/receive packets to/from the server simultaneously, but client 1 only got 20MB/s for sending and 38MB/s for receiving. If I limit the rate of both 1:1 classes to 60mbps instead of 125mbps, client 1 gets 39MB/s for sending and 40MB/s for receiving.

I am not sure what might cause this. Is it because TC doesn't work well when the network is congested? Or is my script not right?



No network will be able to operate at its theoretical maximum. In the case of a gigabit network you will be lucky to get a consistent 120MB/s, and it heavily depends on the hardware quality and the number of switches in between. So what you are doing is oversaturating the link: the ACK packets cannot get through, and your speed drops due to delays and retransmissions. Perform a test with only two systems sending data to each other to see what actual bandwidth you can hope for, and use that number instead of 125mbps.
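
A minimal sketch of such a test with iperf, assuming 192.168.5.1 is the server (substitute your real address):

# on the server
iperf -s
# on client1: send for 60 seconds, report in MBytes/s
iperf -c 192.168.5.1 -t 60 -f M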


