On Sun, 2010-09-12 at 08:27 -0700, Tom Eastep wrote: 
> 
> Then I have no idea why you are seeing these inaccuracies.

OK.  Maybe it's a bug somewhere in the kernel.

> Which is what simple TC does. And it optionally polices ingress, if an
> IN-BANDWIDTH is given.

Ahhh.  Yes, of course.  I was conflating the two -- not realizing
that the prioritization happens on the upstream packets while
IN-BANDWIDTH optionally polices the downstream bandwidth.
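
In raw tc terms, my understanding is that Simple TC ends up with
something roughly like this (a sketch only -- the exact commands
Shorewall issues may differ, and the 15mbit rate is just my downstream
line speed used as a hypothetical IN-BANDWIDTH):

```shell
# Egress prioritization: 3-band prio qdisc with an sfq leaf per band
tc qdisc add dev eth0.1 root handle 1: prio bands 3
tc qdisc add dev eth0.1 parent 1:1 handle 11: sfq perturb 10
tc qdisc add dev eth0.1 parent 1:2 handle 12: sfq perturb 10
tc qdisc add dev eth0.1 parent 1:3 handle 13: sfq perturb 10

# fw filters map firewall marks 1-3 onto the bands
tc filter add dev eth0.1 parent 1: protocol all handle 1 fw classid 1:1
tc filter add dev eth0.1 parent 1: protocol all handle 2 fw classid 1:2
tc filter add dev eth0.1 parent 1: protocol all handle 3 fw classid 1:3

# Optional ingress policing when IN-BANDWIDTH is given (rate hypothetical)
tc qdisc add dev eth0.1 handle ffff: ingress
tc filter add dev eth0.1 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 15mbit burst 10k drop flowid :1
```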

In testing, though, it doesn't seem to be having the desired
effect.  :-(

I explicitly put ICMP into band 1 in tcpri:

#BAND           PROTO   PORT(S)         ADDRESS         INTERFACE       HELPER
1               -       -               -               -               sip
1               icmp    -               -               -

I then reloaded the firewall (to reset the counters and such) and,
while pinging the upstream router, scp'd a 3940KB file from the LAN to
a machine out on the Internet.

The classes and counters after the copy showed:

class prio 1:1 parent 1: leaf 11: 
 Sent 4814 bytes 53 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
class prio 1:2 parent 1: leaf 12: 
 Sent 26849 bytes 79 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
class prio 1:3 parent 1: leaf 13: 
 Sent 4262752 bytes 2884 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 

As you can see, the ~4MB file was sent in band 3, some unknown traffic
in band 2, and the pings in band 1.  However, the ping statistics at
the end were:

35 packets transmitted, 35 received, 0% packet loss, time 34003ms
rtt min/avg/max/mdev = 5.974/408.201/666.489/194.407 ms

The transfer ran during ping packets 4-34, which accounts for the high
average and maximum RTT.
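
As a back-of-the-envelope check, that extra ~400ms of RTT implies a
sizeable standing queue somewhere on the path (a sketch only -- the
1Mbit/s uplink rate is purely a hypothetical figure, since my actual
upstream speed isn't shown here):

```python
# Estimate the standing queue implied by the inflated RTT during the scp.
uplink_bps = 1_000_000      # hypothetical 1 Mbit/s upstream rate (assumption)
min_rtt_s = 0.006           # ~6 ms baseline RTT from the ping stats above
avg_rtt_s = 0.408           # ~408 ms average RTT during the transfer

queue_delay_s = avg_rtt_s - min_rtt_s
queued_bytes = uplink_bps / 8 * queue_delay_s
print(f"Implied standing queue: ~{queued_bytes / 1024:.0f} KiB")
```

With those numbers the implied queue is on the order of 50KB -- far
more than the 0b backlog the prio qdisc ever reported, which suggests
the queuing is happening somewhere other than in the prio qdisc.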

For posterity, the rest of the TC configuration:

# tc -s qdisc ls dev eth0.1
qdisc prio 1: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 4367741 bytes 3325 pkt (dropped 0, overlimits 0 requeues 0) 
rate 0bit 0pps backlog 0b 0p requeues 0 
qdisc sfq 11: parent 1:1 limit 127p quantum 1875b perturb 10sec 
Sent 8578 bytes 101 pkt (dropped 0, overlimits 0 requeues 0) 
rate 0bit 0pps backlog 0b 0p requeues 0 
qdisc sfq 12: parent 1:2 limit 127p quantum 1875b perturb 10sec 
Sent 95699 bytes 338 pkt (dropped 0, overlimits 0 requeues 0) 
rate 0bit 0pps backlog 0b 0p requeues 0 
qdisc sfq 13: parent 1:3 limit 127p quantum 1875b perturb 10sec 
Sent 4262894 bytes 2885 pkt (dropped 0, overlimits 0 requeues 0) 
rate 0bit 0pps backlog 0b 0p requeues 0 

# tc -s filter ls dev eth0.1
filter parent 1: protocol all pref 49150 fw 
filter parent 1: protocol all pref 49150 fw handle 0x3 classid 1:3 
filter parent 1: protocol all pref 49151 fw 
filter parent 1: protocol all pref 49151 fw handle 0x2 classid 1:2 
filter parent 1: protocol all pref 49152 fw 
filter parent 1: protocol all pref 49152 fw handle 0x1 classid 1:1 

And my mangle table:

Chain PREROUTING (policy ACCEPT 5627 packets, 4556K bytes)
 pkts bytes target     prot opt in     out     source               destination 
        
 5133 4436K CONNMARK   all  --  *      *       0.0.0.0/0            0.0.0.0/0   
        connmark match !0x0/0xff00 CONNMARK restore mask 0xff00 
   84 23805 routemark  all  --  eth0.1 *       0.0.0.0/0            0.0.0.0/0   
        mark match 0x0/0xff00 
   83  5256 routemark  all  --  ppp0   *       0.0.0.0/0            0.0.0.0/0   
        mark match 0x0/0xff00 
 2008  192K tcpre      all  --  eth0.1 *       0.0.0.0/0            0.0.0.0/0   
        
  157 11680 tcpre      all  --  ppp0   *       0.0.0.0/0            0.0.0.0/0   
        
  327 90334 tcpre      all  --  *      *       0.0.0.0/0            0.0.0.0/0   
        mark match 0x0/0xff00 

Chain INPUT (policy ACCEPT 350 packets, 37188 bytes)
 pkts bytes target     prot opt in     out     source               destination 
        

Chain FORWARD (policy ACCEPT 5249 packets, 4508K bytes)
 pkts bytes target     prot opt in     out     source               destination 
        
 5249 4508K MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0   
        MARK and 0x0 
 5249 4508K tcfor      all  --  *      *       0.0.0.0/0            0.0.0.0/0   
        

Chain OUTPUT (policy ACCEPT 249 packets, 18487 bytes)
 pkts bytes target     prot opt in     out     source               destination 
        
  136 10015 CONNMARK   all  --  *      *       0.0.0.0/0            0.0.0.0/0   
        connmark match !0x0/0xff00 CONNMARK restore mask 0xff00 
  113  8472 tcout      all  --  *      *       0.0.0.0/0            0.0.0.0/0   
        mark match 0x0/0xff00 

Chain POSTROUTING (policy ACCEPT 5498 packets, 4526K bytes)
 pkts bytes target     prot opt in     out     source               destination 
        
 5498 4526K tcpost     all  --  *      *       0.0.0.0/0            0.0.0.0/0   
        

Chain routemark (2 references)
 pkts bytes target     prot opt in     out     source               destination 
        
   84 23805 MARK       all  --  eth0.1 *       0.0.0.0/0            0.0.0.0/0   
        MARK xset 0x100/0xffffffff 
   83  5256 MARK       all  --  ppp0   *       0.0.0.0/0            0.0.0.0/0   
        MARK xset 0x200/0xffffffff 
  167 29061 CONNMARK   all  --  *      *       0.0.0.0/0            0.0.0.0/0   
        mark match !0x0/0xff00 CONNMARK save mask 0xff00 

Chain tcfor (1 references)
 pkts bytes target     prot opt in     out     source               destination 
        

Chain tcout (1 references)
 pkts bytes target     prot opt in     out     source               destination 
        

Chain tcpost (1 references)
 pkts bytes target     prot opt in     out     source               destination 
        
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0   
        helper match "sip" MARK xset 0x1/0xffffffff 
  230 16120 MARK       icmp --  *      *       0.0.0.0/0            0.0.0.0/0   
        MARK xset 0x1/0xffffffff 

Chain tcpre (3 references)
 pkts bytes target     prot opt in     out     source               destination 
        

Are my expectations (that the pings should be low latency) not in line
with what Simple TC does?
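
One thing I can try is watching the qdisc backlog during another
transfer; if the prio qdisc's backlog stays at 0b while the RTT climbs,
the queue must be building below it (e.g. in the modem), where
prioritization on eth0.1 can't reach it (a hypothetical one-liner):

```shell
# Sample the prio qdisc's counters once a second during a transfer
watch -n 1 'tc -s qdisc show dev eth0.1'
```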

FWIW, there was practically no download bandwidth usage (nowhere near
the 15Mbps the connection is capable of) to account for the high
latency either.

b.


_______________________________________________
Shorewall-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/shorewall-users
