On Thu, Jul 12, 2012 at 12:51 AM, Eric Dumazet <[email protected]> wrote:
> On Thu, 2012-07-12 at 00:37 -0700, David Miller wrote:
>> From: Eric Dumazet <[email protected]>
>> Date: Thu, 12 Jul 2012 09:34:19 +0200
>>
>> > On Thu, 2012-07-12 at 01:49 +0200, Eric Dumazet wrote:
>> >
>> >> The 10Gb receiver is a net-next kernel, but the 1Gb receiver is a 2.6.38
>> >> ubuntu kernel. They probably have very different TCP behavior.
>> >
>> >
>> > I tested TSQ on bnx2x and 10Gb links.
>> >
>> > I get full rate even using 65536 bytes for
>> > the /proc/sys/net/ipv4/tcp_limit_output_bytes tunable
>>
>> Great work Eric.
>
> Thanks !
>

This is indeed great work! A couple of comments...
Do you know if there are any qdiscs that function less efficiently when we
are restricting the number of packets? For instance, will HTB work as
expected in various configurations?

One extension to this work would be to make the limit dynamic and mostly
eliminate the tunable. I'm thinking we might be able to correlate the limit
to the BQL limit of the egress queue for the flow, if there is one.
Assuming all work-conserving qdiscs, the minimal amount of outstanding host
data for a queue could be associated with the BQL limit of the egress NIC
queue. We want to minimize the outstanding data so that:

  sum(data_of_tcp_flows_share_same_queue) > bql_limit_for_queue

So this could imply a per-flow limit of:

  tcp_limit = max(bql_limit - bql_inflight, one_packet)

For a single active connection on a queue, the tcp_limit is equal to the
BQL limit. Once the BQL limit is hit in the NIC, we only need one packet
outstanding per flow to maintain flow control. For fairness, we might need
"one_packet" to actually be max GSO data. Also, this disregards any latency
of scheduling and running the tasklet, which might need to be taken into
account as well.

Tom
_______________________________________________
Codel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/codel
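[Editor's note: as a concrete illustration of the per-flow limit proposed in
the message above, tcp_limit = max(bql_limit - bql_inflight, one_packet),
here is a minimal user-space C sketch. The function compute_tcp_limit and
the example values are hypothetical stand-ins, not kernel code; in a real
implementation the inputs would come from the egress queue's BQL state and
the device's GSO settings.]

#include <stdio.h>

static unsigned int compute_tcp_limit(unsigned int bql_limit,
                                      unsigned int bql_inflight,
                                      unsigned int one_packet)
{
        /* Room left in the egress queue before BQL starts throttling. */
        unsigned int avail = bql_limit > bql_inflight ?
                             bql_limit - bql_inflight : 0;

        /* Never allow less than one packet (or one max-GSO burst, for
         * fairness) of outstanding data, so the flow keeps making progress.
         */
        return avail > one_packet ? avail : one_packet;
}

int main(void)
{
        unsigned int bql_limit  = 131072;  /* example BQL limit: 128 KB */
        unsigned int one_packet = 65536;   /* example max GSO burst: 64 KB */

        /* Single active flow, empty queue: the limit equals the BQL limit. */
        printf("idle queue: %u bytes\n",
               compute_tcp_limit(bql_limit, 0, one_packet));

        /* Queue already at its BQL limit: fall back to one packet per flow. */
        printf("full queue: %u bytes\n",
               compute_tcp_limit(bql_limit, bql_limit, one_packet));

        return 0;
}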
