I have a router with 3 network interfaces, as in the ASCII diagram
below. All interfaces are 100 Mbit/s. TCP traffic is being sent from
net1 to net3 and from net2 to net3, and the TCP connections consume as
much bandwidth as possible. There is a pfifo queue on the egress
interface eth0 of the core router with a limit of 10 packets.


net1 --> (eth1) router (eth0) --> net3
                (eth2)
                  ^
                  |
                net2

I police traffic at the edge of net1 to 48.4375 Mbit and shape the
traffic on exit from net2 to 48.4375 Mbit. There are no packets in the
queue of the egress interface eth0 of the router at any stage; every
packet is enqueued by pfifo_enqueue() onto an empty queue. (I have
confirmed this by adding a counter in sch_fifo.c that is incremented
every time there is already a packet in the queue when a new packet is
enqueued.) The delay is at most 2 ms.
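For reference, the relevant tc configuration looks roughly like this. This is a sketch, not my actual scripts: the device names on the edge boxes and the burst/latency values are assumptions, and 48437kbit is the nearest whole-kbit approximation of 48.4375 Mbit.

```shell
# Core router: 10-packet pfifo on the egress interface towards net3
tc qdisc add dev eth0 root pfifo limit 10

# net1 edge box (assumed device eth1): police to ~48.4375 Mbit on ingress
tc qdisc add dev eth1 handle ffff: ingress
tc filter add dev eth1 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 48437kbit burst 10k drop flowid :1

# net2 edge box (assumed device eth0): shape to ~48.4375 Mbit with tbf
tc qdisc add dev eth0 root tbf rate 48437kbit burst 10k latency 50ms
```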

When I increase the policing and shaping rates to 48.4687 Mbit, an
increase of only 31.2 kbit per source, some packets are queued for a
short period and some are dropped, which clears the queue. The maximum
number of packets dropped was 20 per second, but the delay goes up to
30 ms.

Check out the graphs at
http://frink.nuigalway.ie/~jlynch/queue/


I can't seem to explain this. Even if the queue were full all the time
and each packet were of maximum size, the delay imposed by queueing
should be at most 10 * 1500 * 8 / 100,000,000 s, which equals 1.2 ms.
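Spelling that worst-case calculation out as a quick awk one-liner:

```shell
# Worst-case pfifo queueing delay in ms:
# queue limit (packets) * max packet size (bytes) * 8 bits/byte,
# divided by the 100 Mbit/s link rate, converted to milliseconds
awk 'BEGIN { limit = 10; mtu = 1500; rate = 100000000;
             printf "%.1f ms\n", limit * mtu * 8 / rate * 1000 }'
# prints "1.2 ms"
```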

How can so much delay be added by such a small increase in the
throughput coming from net1 and net2?

I would appreciate it if someone could explain this to me.

By the way, I'm using a stratum 1 NTP server on the same LAN to ensure
measurement accuracy.


Jonathan

_______________________________________________
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
