On 07/11/2012 08:54 AM, Eric Dumazet wrote:
> On Wed, 2012-07-11 at 08:43 -0700, Ben Greear wrote:
>> On 07/11/2012 08:25 AM, Eric Dumazet wrote:
>>> On Wed, 2012-07-11 at 08:16 -0700, Ben Greear wrote:
>>>> I haven't read your patch in detail, but I was wondering if this feature
>>>> would cause trouble for applications that are servicing many sockets at once
>>>> and so might take several ms between handling each individual socket.
>>>
>>> Well, this patch has no impact on such applications. In fact, their
>>> send()/write() will return to userland faster than before (for very
>>> large send()).
>>
>> Maybe I'm just confused. Is your patch just mucking with
>> the queues below the TCP xmit queues? From the patch description
>> I was thinking you were somehow directly limiting the TCP xmit
>> queues...
>
> I don't limit TCP xmit queues. I might avoid excessive autotuning.
>
>> If you are just draining the TCP xmit queues on a new/faster
>> trigger, then I see no problem with that, and no need for
>> a per-socket control.
>
> That's the plan: limiting the number of bytes in the Qdisc, not the number
> of bytes in the socket write queue.
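(For anyone following along: if I'm reading the series right, that Qdisc-side byte limit is exposed as a sysctl, so it can be inspected and tuned at runtime. The knob name below is taken from the patch posting; treat it as an assumption until the series actually lands.)

```shell
# Assumed knob from the patch series: per-socket cap on bytes sitting
# in the Qdisc/device layers below the TCP stack. Name may change
# before merge.
sysctl net.ipv4.tcp_limit_output_bytes

# Lowering it is a runtime experiment (root required), no rebuild needed:
sysctl -w net.ipv4.tcp_limit_output_bytes=65536
```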
Thanks for the explanation.
Out of curiosity, have you tried running multiple TCP streams
with different processes driving each stream, where each is trying
to drive, say, 700Mbps bi-directional traffic over a 1Gbps link?
Perhaps with 50ms of latency generated by a network emulator.
This used to cause some extremely high latency
due to excessive TCP xmit queues (from what I could tell),
but maybe this new patch will cure that.
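In case it helps anyone reproduce that scenario, the emulator side can be sketched roughly like this with netem and netperf (interface name, peer address, and durations are illustrative, not what I actually run):

```shell
# On the emulator box: add 50 ms of one-way delay on the forwarding
# interface (100 ms RTT if applied to both directions).
tc qdisc add dev eth0 root netem delay 50ms

# On a load generator: one bi-directional pair per process; repeat
# per stream. TCP_MAERTS is netperf's receive-direction counterpart
# to TCP_STREAM.
netperf -H 192.168.1.2 -t TCP_STREAM -l 60 &
netperf -H 192.168.1.2 -t TCP_MAERTS -l 60 &

# Watch queue buildup while the test runs:
tc -s qdisc show dev eth0
ss -t -i
```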
I'll re-run my tests with your patch eventually, but I'm too bogged
down to do so soon.
Thanks,
Ben
--
Ben Greear <[email protected]>
Candela Technologies Inc http://www.candelatech.com
_______________________________________________
Codel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/codel