Re: [LARTC] HTB/SFQ dequeueing in pairs

2004-01-27 Thread Andy Furniss
I got this reply from Don and would rather answer on-list so more people
have a chance to correct any of my misconceptions :-)

[this message off list - feel free to forward it, but leave out my address]

  I wanted to see where from a slot the packets got dropped when the queue
  was full. (e)sfq drops from the longest slot to make space for an
  incoming packet, so it's not tail drop as such, but the results show me
  it does drop from the tail of the slot - which, if you are trying to
  shape inbound, is a PITA as TCP "slow" start grows exponentially and
What's PITA ?
Pain in the arse.

  overflows into my ISP/telco's buffer, causing a latency bump. I think it
  would be a lot nicer if it head dropped to make the sender go into
  congestion control quicker.
The fact that the queue grows means that the packets are delayed, and
that's supposed to influence the speed of tcp.
Yes, but as I understand it, during slow start the sender's cwnd doubles 
per RTT and doesn't stop until it's sent enough to fill my advertised 
window (which Linux grows to 32k quite quickly) or a packet is lost and 
three dup acks are received, at which time it goes into congestion 
control and shrinks its cwnd.
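A rough back-of-the-envelope of that growth (a hedged toy model; real stacks vary in initial window, delayed acks and so on):

```python
# Toy model of TCP slow start: cwnd doubles each RTT until it
# reaches the advertised window. All numbers are illustrative.
MSS = 1460              # bytes, a typical Ethernet-sized segment
ADV_WINDOW = 32 * 1024  # the 32k advertised window mentioned above

def rtts_to_fill_window(initial_cwnd=2 * MSS):
    """Count RTTs until cwnd reaches the advertised window."""
    cwnd, rtts = initial_cwnd, 0
    while cwnd < ADV_WINDOW:
        cwnd *= 2   # slow start: exponential growth per RTT
        rtts += 1
    return rtts

print(rtts_to_fill_window())  # only a handful of RTTs to fill 32k
```

So the sender ramps up to the full advertised window within a few round trips unless something tells it to back off sooner.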

Head drop seems absurd, since most of the packets behind the dropped
packet will be wasted - the tcp on the other side will only keep a few
packets past the one that's missing.
I think the opposite is the case: the fact the packet is tail dropped 
means I don't start sending dups for the time it takes the gap to reach 
the head of the queue. The sender meanwhile is transmitting a lot of packets, 
most of which I drop after they have already used up some of my bandwidth.
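A hedged way to see the timing argument: with tail drop the gap the receiver will notice sits behind a full queue, so dup acks can only start once the queue has drained past it; with head drop the very next delivered packet reveals the gap. A toy calculation using the test figures from my other post (illustrative, not measured):

```python
# How long until the receiver can see a sequence gap after a drop,
# for tail drop vs head drop. Figures taken from the test setup:
QUEUE_PACKETS = 10        # esfq "limit 10"
PKT_BYTES = 1478          # large packets, one per MTU
LINK_BITS_PER_S = 51_000  # the 51kbit/s test rate

per_pkt_s = PKT_BYTES * 8 / LINK_BITS_PER_S  # serialisation time

# Tail drop: the missing packet was behind a full queue, so the
# receiver only notices the gap after the whole queue drains.
tail_drop_signal_s = QUEUE_PACKETS * per_pkt_s

# Head drop: the gap is at the front; the next delivered packet
# reveals it, so dup acks can start one packet-time later.
head_drop_signal_s = 1 * per_pkt_s

print(round(tail_drop_signal_s, 2), round(head_drop_signal_s, 2))
```

At these rates that is roughly two seconds of extra feedback delay for tail drop, during which the sender keeps ramping up.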

  I noticed that the packets were being released in pairs, which probably
  doesn't help either.
I don't see that it should hurt.
The sender during slow start is increasing exponentially per ack 
received; it would be nicer to space them out.

How big are the packets?  Are there other packets in other buckets or
in other queues?  Also how are the packets being generated?
I'd expect for something like ftp where you generate a steady stream
of large packets, they would be released one at a time, since your
quantum is approx the size of one large packet.
On the other hand if you generate two small packets at a time then
maybe the queue is not the bottleneck.
It could also be something in the device driver.
You can probably solve this problem by adding printk's to tell you
when various things happen.
This was a test - the packets are big and there is no other traffic. I 
am in the early days of experimenting. In real use I would be using 
something based on Alexander Clouter's jdg-script with his RED settings - 
but even if I throttle to 65% downstream, with my "low" bandwidth, running 
a bittorrent - or just browsing heavy jpg sites - will baulk my latency 
too much to play Half-Life. Though most users may be quite happy with 
the results. Whatever queue I use for downstream has to live 
behind a fifo whose bandwidth isn't that much more than what I would 
like to shape to, so it may not behave as the textbook says. If I had 2M 
down, I would not have a problem - what is now a 300ms bump would only be 
50ms, and I could live with that.
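The bump shrinking with line rate is just the ISP's buffer draining faster; a hedged sketch of the arithmetic (the buffer size below is an assumption picked to reproduce the ~300ms figure, not a measured value from my line):

```python
# Queueing delay in the telco kit = queued bytes / link rate.
def bump_ms(buffer_bytes, link_bits_per_s):
    return buffer_bytes * 8 / link_bits_per_s * 1000

BUFFER = 19200  # ~19 KB assumed in the upstream buffer (hypothetical)
print(round(bump_ms(BUFFER, 512_000)))    # ~300 ms at 512 kbit/s
print(round(bump_ms(BUFFER, 2_048_000)))  # ~75 ms at 2 Mbit/s
```

The same buffer drains four times faster on a 2M line, so the latency spike scales down accordingly; the exact figure depends on how big the telco's buffer really is.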

Andy.

___
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/


[LARTC] HTB/SFQ dequeueing in pairs

2004-01-26 Thread Andy Furniss
I set up a little test to see what the behaviour of (e)sfq was - because 
I couldn't work it out from the source :-).

I wanted to see where from a slot the packets got dropped when the queue 
was full. (e)sfq drops from the longest slot to make space for an 
incoming packet, so it's not tail drop as such, but the results show me 
it does drop from the tail of the slot - which, if you are trying to 
shape inbound, is a PITA as TCP "slow" start grows exponentially and 
overflows into my ISP/telco's buffer, causing a latency bump. I think it 
would be a lot nicer if it head dropped to make the sender go into 
congestion control quicker.

However this is not the reason for this post. I tested by capturing with 
tcpdump before and after the queue.
I noticed that the packets were being released in pairs, which probably 
doesn't help either.
I assume it is htb that calls esfq to dequeue a packet - but I don't know.

For the test my DWIFLIMIT bandwidth was set at 51kbit/s which is 10% of 
my bandwidth.
My MTU is set at 1478 as it's slightly more efficient for ADSL using 
PPPoA/VC-mux in the UK.
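The efficiency point comes from ATM cell alignment; a hedged sketch of the sums (assuming the usual 10 bytes of per-packet overhead for PPPoA/VC-mux - 2 for PPP plus an 8-byte AAL5 trailer; other encapsulations add more):

```python
import math

# ATM carries payload in 48-byte cells (53 bytes on the wire).
# PPPoA/VC-mux adds roughly 10 bytes per IP packet -- an
# assumption here; check your own encapsulation's overhead.
OVERHEAD = 10

def cells(mtu):
    """ATM cells needed to carry one full-MTU packet."""
    return math.ceil((mtu + OVERHEAD) / 48)

print(cells(1478))  # 31 cells: 1488 bytes fills cell boundaries exactly
print(cells(1500))  # 32 cells: the last cell is mostly padding
```

1478 + 10 = 1488 = 31 x 48, so a full-sized packet wastes no padding, whereas MTU 1500 burns most of a 32nd cell.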

I used -

$TC class add dev $DWIF parent 1:2 classid 1:21 htb \
    rate $[$DWIFLIMIT/2]kbit ceil ${DWIFLIMIT}kbit \
    burst 0b cburst 0b mtu 1478 quantum 1478 prio 1

$TC qdisc add dev $DWIF parent 1:21 handle 21: esfq perturb 0 hash classic limit 10

This is part of tc -s -d class show dev imq1

class htb 1:21 parent 1:2 leaf 21: prio 1 quantum 1478 rate 25Kbit ceil 51Kbit burst 1507b/8 mpu 0b cburst 1540b/8 mpu 0b level 0

Is there anything obvious here that would cause the packets to dequeue 
in pairs?
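For what it's worth, one way to picture how quantum interacts with packet size is a generic deficit-round-robin sketch (my toy code, not HTB's actual internals): with quantum equal to one big packet each round releases one packet, while a quantum covering two packets would release pairs.

```python
from collections import deque

# Generic deficit round robin for a single queue: each round the
# queue earns `quantum` bytes of credit and sends whole packets
# while credit lasts. Illustrative only -- not the HTB source.
def drr_round(queue, deficit, quantum):
    deficit += quantum
    sent = []
    while queue and queue[0] <= deficit:
        pkt = queue.popleft()
        deficit -= pkt
        sent.append(pkt)
    return sent, deficit

q = deque([1478] * 6)          # back-to-back full-MTU packets
sent, deficit = drr_round(q, 0, 1478)
print(len(sent))               # 1 packet when quantum == packet size
sent, deficit = drr_round(q, deficit, 2956)
print(len(sent))               # 2 packets when quantum covers two
```

If something effectively hands the leaf a double quantum per dequeue opportunity, pairs would be the expected result - but whether that is what HTB is doing here I can't say.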

TIA

Andy.
