bits/sec 49.881 ms (5%)
> 10=10:0:0:0:0:0:0:0 0
> [ 1] 6.0002-6.0511 sec 40.0 KBytes 6.44 Mbits/sec 50.895 ms (5.1%)
> 10=10:0:0:0:0:0:0:0 0
> [ 1] 7.0002-7.0501 sec 40.0 KBytes 6.57 Mbits/sec 49.889 ms (5%)
> 10=10:0:0:0:0:0:0:0 0
> [ 1] 8.0002-8.0481 sec
,obl/obu=0/0) (6.396 ms/1635289683.794338)
>> [ 1] 8.00-9.00 sec 40.0 KBytes 328 Kbits/sec 10/0 0
>> 14K/5329 us 8
>> [ 1] 8.00-9.00 sec S8-PDF:
>> bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1
>> (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0
On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast
> wrote:
>
>> Hi All,
>>
>> Sorry for the spam. I'm trying to support a meaningful TCP message latency
>> w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I
On 5/18/21 1:00 AM, Stephen Hemminger wrote:
>
> The Azure network driver (netvsc) also does not have BQL. Several years ago
> I tried adding it but it benchmarked worse and there is the added complexity
> of handling the accelerated networking VF path.
>
Note that NICs with many TX queues ma
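For readers who have not looked at BQL before: wiring it into a driver
mostly means bracketing the transmit and completion paths with the
byte-queue-limit accounting helpers. A rough sketch of where those calls
go; netdev_tx_sent_queue()/netdev_tx_completed_queue() are the real
in-tree helpers, while my_xmit(), my_tx_complete() and their arguments
are hypothetical stand-ins, not the actual netvsc code.

/* Sketch only: where BQL accounting hooks into a typical NIC driver. */
#include <linux/netdevice.h>

static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *ndev)
{
    struct netdev_queue *txq =
        netdev_get_tx_queue(ndev, skb_get_queue_mapping(skb));

    /* ... post the skb to the hardware (or VF) ring here ... */

    /* Tell BQL how many bytes are now in flight on this queue. */
    netdev_tx_sent_queue(txq, skb->len);
    return NETDEV_TX_OK;
}

static void my_tx_complete(struct net_device *ndev, unsigned int queue,
                           unsigned int pkts, unsigned int bytes)
{
    struct netdev_queue *txq = netdev_get_tx_queue(ndev, queue);

    /* Completion path: BQL adjusts the in-flight byte limit from this. */
    netdev_tx_completed_queue(txq, pkts, bytes);
}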
On 8/27/19 10:04 PM, Stephen Hemminger wrote:
> On Mon, 26 Aug 2019 09:35:14 +0200
> Toke Høiland-Jørgensen wrote:
>
>> Turns out that with the "earliest departure time" support in sched_fq,
>> it is now possible to write a shaper in eBPF, thus avoiding the global
>> qdisc lock in sched_htb. Th
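For context, the EDT approach works by having a tc eBPF program stamp
each skb with an earliest departure time and letting sch_fq enforce the
schedule, so no qdisc-wide lock is needed. A rough sketch written against
modern libbpf conventions; this is not Toke's actual code, and the map
name, section name and 100 Mbit/s rate constant are illustrative
assumptions.

// SPDX-License-Identifier: GPL-2.0
/* Sketch of an EDT shaper in tc eBPF: stamp every packet with an earliest
 * departure time and let sch_fq pace to it. Single queue, not atomic. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define RATE_BYTES_PER_SEC 12500000ULL   /* assumed target: 100 Mbit/s */
#define NSEC_PER_SEC       1000000000ULL

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);                /* next allowed departure, ns */
} next_tstamp SEC(".maps");

SEC("tc")
int edt_shaper(struct __sk_buff *skb)
{
    __u32 key = 0;
    __u64 now = bpf_ktime_get_ns();
    __u64 *next = bpf_map_lookup_elem(&next_tstamp, &key);

    if (!next)
        return TC_ACT_OK;

    /* Time this packet occupies the link at the target rate. */
    __u64 delay = (__u64)skb->len * NSEC_PER_SEC / RATE_BYTES_PER_SEC;
    __u64 t = *next > now ? *next : now;

    skb->tstamp = t;        /* sch_fq releases the packet no earlier */
    *next = t + delay;
    return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";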
On 06/21/2018 02:22 AM, Toke Høiland-Jørgensen wrote:
> Dave Taht writes:
>
>> Nice war story. I'm glad this last problem with the fq_codel wifi code
>> is solved
>
> This wasn't specific to the fq_codel wifi code, but hit all WiFi devices
> that were running TCP on the local stack. Which woul
7;),
this is essential for a correct MSG_ZEROCOPY implementation,
because userspace cannot call close(fd) before receiving
zerocopy signals even when the connection is aborted.
Fixes: f214f915e7db ("tcp: enable MSG_ZEROCOPY")
Signed-off-by: Soheil Hassas Yeganeh
Signed-of
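To make the constraint concrete, a MSG_ZEROCOPY sender has to keep the
socket open until it has drained the completion notifications from the
error queue, even if the connection aborts. A trimmed sketch along the
lines of the kernel's msg_zerocopy documentation; the function name is
made up and real error handling is omitted.

/* Sketch, not production code: one zerocopy send plus reaping the
 * completion notification from the error queue before any close(). */
#include <poll.h>
#include <stddef.h>
#include <sys/socket.h>
#include <linux/errqueue.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60                 /* asm-generic/socket.h */
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000         /* linux/socket.h */
#endif

static void send_zerocopy(int fd, const char *buf, size_t len)
{
    int one = 1;

    setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));

    if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
        return;

    /* The pages stay pinned until the notification arrives, so the
     * socket must stay open until the error queue has been drained. */
    struct pollfd pfd = { .fd = fd, .events = 0 };  /* POLLERR is implicit */
    poll(&pfd, 1, -1);

    char ctrl[128];
    struct msghdr msg = { .msg_control = ctrl, .msg_controllen = sizeof(ctrl) };
    if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
        return;

    for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
        struct sock_extended_err *serr = (void *)CMSG_DATA(cm);

        if (serr->ee_origin == SO_EE_ORIGIN_ZEROCOPY) {
            /* serr->ee_info..serr->ee_data: completed send range. */
        }
    }
}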
On Thu, 2017-11-30 at 09:51 -0500, Neal Cardwell wrote:
> On Thu, Nov 30, 2017 at 5:24 AM, Eric Dumazet > wrote:
> > I agree that TCP itself should generate ACK smarter, on receivers
> > that
> > are lacking GRO. (TCP sends at most one ACK per GRO packets, that
> >
On Thu, 2017-11-30 at 14:04 +0100, Mikael Abrahamsson wrote:
> On Thu, 30 Nov 2017, Eric Dumazet wrote:
>
> > I agree that TCP itself should generate ACK smarter, on receivers
> > that
> > are lacking GRO. (TCP sends at most one ACK per GRO packets, that
> > is why
I agree that TCP itself should generate ACKs smarter, on receivers that
are lacking GRO. (TCP sends at most one ACK per GRO packet, that is
why we did not feel an urgent need for better ACK generation)
It is actually a difficult task, because it might need an additional
timer, and we were reluctant
On Wed, 2017-11-29 at 15:59 -0800, Stephen Hemminger wrote:
> On Wed, 29 Nov 2017 10:41:41 -0800
> Dave Taht wrote:
>
> > On Wed, Nov 29, 2017 at 10:21 AM, Juliusz Chroboczek
> > wrote:
> > > > The better solution would of course be to have the TCP peeps
> > > > change the
> > > > way TCP works
I had the honor to attend Kathleen presentation at Google ;)
I then worked on making sure TCP TS TSval would use 1ms units,
regardless of CONFIG_HZ option in the kernel, since apparently some
distros/devices use HZ=250 or even HZ=100
This should be in linux-4.13 when released.
https://git.kerne
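A quick back-of-envelope of why the HZ dependence mattered, assuming
TSval previously advanced with jiffies: the timestamp tick, and hence the
granularity of RTT estimates taken from TCP timestamps, was 1000/HZ ms.

/* Timestamp tick vs. CONFIG_HZ (assumes TSval used to advance with
 * jiffies, i.e. one tick every 1000/HZ ms). */
#include <stdio.h>

int main(void)
{
    int hz[] = { 100, 250, 300, 1000 };

    for (int i = 0; i < 4; i++)
        printf("HZ=%4d -> TSval tick %6.2f ms (vs. fixed 1 ms)\n",
               hz[i], 1000.0 / hz[i]);
    return 0;
}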
On Thu, 2017-04-13 at 20:12 -0700, Aaron Wood wrote:
> When I was testing with my iPerf changes, I realized that the sch_fq
> pacing (which in iperf is set via setsockopt()), is pacing at a
> bandwidth that's set at a pretty low level in the stack (which makes
> sense). This is different from the
On Thu, 2017-03-16 at 22:14 +, xnor wrote:
> Hello,
>
> I have a tl-wdr3600 router with openwrt (linux 3.18, config at [1]) and
> want to shape ingress traffic.
>
> The WAN port, interface eth0.2, is connected to a cable modem.
> I use the following script:
>
> tc qdisc add dev eth0.2 handl
On Thu, 2017-03-16 at 11:52 -0400, Michael Richardson wrote:
> Dave Taht wrote:
> > Is it faster to execute 17 bpf vm instructions on (nearly) every
> > packet, or to use all that old stuff?
>
> > bpf example for the babel protocol:
>
> I have no data for you. Andrew McGregor might
> Best Regards
> Sebastian
>
>
>
> > On Jan 27, 2017, at 15:40, Eric Dumazet wrote:
> >
> > On Thu, 2017-01-26 at 23:55 -0800, Dave Täht wrote:
> >>
> >> On 1/26/17 11:21 PM, Hans-Kristian Bakke wrote:
> >>> Hi
> >>>
>
On Thu, 2017-01-26 at 23:55 -0800, Dave Täht wrote:
>
> On 1/26/17 11:21 PM, Hans-Kristian Bakke wrote:
> > Hi
> >
> > After having had some issues with inconsistent tso/gso configuration
> > causing performance issues for sch_fq with pacing in one of my systems,
> > I wonder if it is still recom
upported, but we would generally install FQ on the
bonding device, and leave TSO enabled on the bonding.
This is because setting timers is expensive, and our design choices for
pacing tried hard to avoid setting timers for every 2 packets sent (as
in 1-MSS packets) ;)
( https://lwn.net/Articles/564978/ )
; fcoe-mtu: off [fixed]
> tx-nocache-copy: off
> loopback: off [fixed]
> rx-fcs: off [fixed]
> rx-all: off [fixed]
> tx-vlan-stag-hw-insert: off [fixed]
> rx-vlan-stag-hw-parse: off [fixed]
> rx-vlan-stag-filter: off [fixed]
> l2-fwd-offload: off [fixed]
> busy-poll: off [f
set
> # CONFIG_NO_HZ is not set
> # CONFIG_HZ_100 is not set
> CONFIG_HZ_250=y
> # CONFIG_HZ_300 is not set
> # CONFIG_HZ_1000 is not set
> CONFIG_HZ=250
> CONFIG_MACHZ_WDT=m
>
>
>
> On 26 January 2017 at 21:41, Eric Dumazet
> wrote:
>
> Can
Is there any CPU bottleneck?
>
> pacing causing this sort of problem makes me think that the
> CPU either can't keep up or that something (Hz setting type of
> thing) is delaying when the CPU can get used.
>
> It's not clear fro
pacing causing this sort of problem makes me think that the
> CPU either can't keep up or that something (Hz setting type of
> thing) is delaying when the CPU can get used.
>
> It's not clear from the posts if the probl
Nothing jumps out at me.
We use FQ on links varying from 1Gbit to 100Gbit, and we have no such
issues.
You could probably check on the server the various TCP infos given by the ss
command
ss -temoi dst
pacing rate is shown. You might have some issues, but it is hard to say.
On Thu, 2017-01-26
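If it is easier to check from inside the application than with ss, the
same pacing figure is exposed through TCP_INFO. A small sketch, assuming
a kernel recent enough to fill tcpi_pacing_rate; the kernel header is
used because the libc copy of struct tcp_info may lack that field.

/* Sketch: read the kernel's current pacing rate and RTT for a connected
 * TCP socket, roughly the numbers `ss -temoi` prints for it. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>      /* IPPROTO_TCP */
#include <linux/tcp.h>       /* kernel struct tcp_info (has tcpi_pacing_rate) */

static void dump_pacing(int fd)
{
    struct tcp_info ti;
    socklen_t len = sizeof(ti);

    memset(&ti, 0, sizeof(ti));
    if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) < 0)
        return;

    printf("pacing rate %llu B/s, rtt %u us, cwnd %u\n",
           (unsigned long long)ti.tcpi_pacing_rate,
           ti.tcpi_rtt, ti.tcpi_snd_cwnd);
}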
relying on this), so adding FQ/pacing probably helps,
even with Cubic.
>
> On 26 January 2017 at 00:53, Eric Dumazet
> wrote:
> On Thu, 2017-01-26 at 00:47 +0100, Hans-Kristian Bakke wrote:
> >
> >
> > I did record the qdisc sett
requeues 0)
> backlog 101616b 3p requeues 0
> 16 flows (15 inactive, 1 throttled), next packet delay 351937 ns
> 0 gc, 0 highprio, 58377 throttled, 12761 ns latency
>
>
Looks good, although latency seems a bit high, thanks !
>
>
> On 26 January 2017 at 00:
t's simplicity and can be
> found here: https://github.com/hkbakke/tc-gen
>
>
> The VPN connection is terminated in netherlands on a gigabit VPN
> server with around 35 ms RTT.
>
> On 26 January 2017 at 00:33, Eric Dumazet
> wrote:
>
> On Thu,
On Thu, 2017-01-26 at 00:04 +0100, Hans-Kristian Bakke wrote:
> I can do that. I guess I should do the capture from tun1 as that is
> the place where the tcp traffic is visible? My non-virtual nic is only
> seeing OpenVPN-encapsulated UDP traffic.
>
But is FQ installed at the point TCP sockets ar
On Wed, 2017-01-25 at 23:23 +0100, Steinar H. Gunderson wrote:
> On Wed, Jan 25, 2017 at 02:12:05PM -0800, Eric Dumazet wrote:
> > Well, pacing is optional in sch_fq.
> >
> > Only the FQ part is not optional.
>
> Yeah, but who cares about the FQ part for an end host
On Wed, 2017-01-25 at 23:06 +0100, Steinar H. Gunderson wrote:
> On Wed, Jan 25, 2017 at 05:01:04PM -0500, Neal Cardwell wrote:
> > Nope, BBR needs pacing to work correctly, and currently fq is the only
> > Linux qdisc that implements pacing.
>
> I really wish sch_fq was renamed sch_pacing :-) And
I do not know of any particular issues with FQ in a VM.
If you have a recent tc binary (iproute2 package) you can get some
info, as mentioned in this commit changelog:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=fefa569a9d4bc4b7758c0fddd75bb0382c95da77
$ tc -s qd sh d
On Sun, 2016-12-04 at 09:44 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
> > Wait a minute. If you use fq on the receiver, then maybe your old debian
> > kernel did not backport :
> >
> > https://git.kernel.org/
On Sat, 2016-12-03 at 15:15 -0800, Eric Dumazet wrote:
> On Sun, 2016-12-04 at 00:03 +0100, Steinar H. Gunderson wrote:
> > On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
> > > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> > &
On Sun, 2016-12-04 at 00:03 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
> > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> > would help this precise workload
> >
> > https://git.kernel.org/
On Sat, 2016-12-03 at 15:02 -0800, Eric Dumazet wrote:
> On Sat, 2016-12-03 at 14:55 -0800, Eric Dumazet wrote:
>
> > Perfect.
> >
> > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> > would help this precise workload
>
> Also
On Sat, 2016-12-03 at 14:55 -0800, Eric Dumazet wrote:
> Perfect.
>
> Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> would help this precise workload
Also it appears the sender uses a lot of relatively small segments (8220
bytes at a time), with PSH, so GRO
On Sat, 2016-12-03 at 23:13 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 01:50:37PM -0800, Eric Dumazet wrote:
> >> Note, the tcpdump is done at the receiver. I don't know if this changes the
> >> analysis.
> > If you have access to the receive
On Sat, 2016-12-03 at 22:34 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 01:07:40PM -0800, Eric Dumazet wrote:
> > What I meant was that we receive ACKS in bursts, with huge gaps between
> > them.
>
> Note, the tcpdump is done at the receiver. I don'
On Sat, 2016-12-03 at 22:26 +0200, Jonathan Morton wrote:
> > On 3 Dec, 2016, at 22:20, Eric Dumazet wrote:
> >
> > Huge ACK decimation it seems.
>
> That extract does not show ACK decimation. It shows either jumbo
> frames or offload aggregation in a send burst,
On Sat, 2016-12-03 at 20:13 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 08:03:50AM -0500, Neal Cardwell wrote:
> > Thanks for the report, Steinar. This is the first report we've had
> > like this, but it would be interesting to find out what's going on.
> >
> > Even if you don't h
On Fri, 2016-12-02 at 23:40 +0100, Steinar H. Gunderson wrote:
> On Fri, Dec 02, 2016 at 05:22:23PM -0500, Neal Cardwell wrote:
> > Of course, if we find important use cases that don't work with BBR, we will
> > see what we can do to make BBR work well with them.
>
> I have one thing that I _wonde
On Thu, 2016-10-27 at 10:33 -0700, Dave Taht wrote:
> At the moment my biggest beef with BBR is that it ignores ECN entirely
> (and yet negotiates it).
Note that switching from cubic to any other CC like BBR is allowed at any
time, long after ECN was negotiated.
So BBR cannot solve the issue you men
On Thu, 2016-10-27 at 19:04 +0200, Steinar H. Gunderson wrote:
> On Fri, Oct 21, 2016 at 10:47:26AM +0200, Steinar H. Gunderson wrote:
> > As a random data point, I tried a single flow from my main server in .no
> > to my backup server in .nl and compared CUBIC (with sch_fq) to BBR
> > (naturally
On Fri, 2016-10-21 at 13:03 +0200, Steinar H. Gunderson wrote:
> On Fri, Oct 21, 2016 at 03:52:24AM -0700, Eric Dumazet wrote:
> > Ok, you could try 12 MB instead of 8MB to eventually reach line rate.
> >
> > net/ipv4/tcp_rmem = 4096 873814 1200
> > net/ipv4/tcp_
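The 8 MB vs 12 MB suggestion is just the bandwidth-delay product plus
headroom: the receive buffer has to cover at least rate x RTT, and only
part of tcp_rmem is advertised as window. A back-of-envelope sketch; the
1 Gbit/s and 35 ms figures are example inputs, not measurements from this
thread.

/* Back-of-envelope bandwidth-delay product (example numbers only). */
#include <stdio.h>

int main(void)
{
    double rate_bps = 1e9;     /* assumed bottleneck: 1 Gbit/s */
    double rtt_s    = 0.035;   /* assumed RTT: ~35 ms */
    double bdp      = rate_bps / 8.0 * rtt_s;

    /* ~4.4 MB in flight; tcp_rmem needs to sit well above this because
     * only a fraction of the buffer is advertised as receive window
     * (see tcp_adv_win_scale), hence 8 MB can fall short of line rate
     * while 12 MB leaves headroom. */
    printf("BDP = %.1f MB\n", bdp / 1e6);
    return 0;
}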
On Fri, 2016-10-21 at 12:47 +0200, Steinar H. Gunderson wrote:
> On Fri, Oct 21, 2016 at 12:42:17PM +0200, Steinar H. Gunderson wrote:
> > Perhaps it's time to kill them, but I don't know what the defaults are :-)
>
> If it matters (for autotuning), the host has 64 GB of physical RAM.
I am workin
On Fri, 2016-10-21 at 12:42 +0200, Steinar H. Gunderson wrote:
> On Fri, Oct 21, 2016 at 03:28:01AM -0700, Eric Dumazet wrote:
> > Could you provide /proc/sys/net/ipv4/tcp_rmem
> > and /proc/sys/net/ipv4/tcp_wmem values on your server ?
>
> I see I have old sysctl.conf setti
On Fri, 2016-10-21 at 10:47 +0200, Steinar H. Gunderson wrote:
> On Fri, Sep 16, 2016 at 02:04:37PM -0700, Dave Taht wrote:
> > I'm really looking forward to trying them out and reading the upcoming
> > paper.
> >
> > https://patchwork.ozlabs.org/patch/671069/
>
> As a random data point, I tried
Yeah, the team is excited!
But really I am a small contributor to this work (mostly reviewing patches).
Neal, Van, Yuchung and Soheil really worked hard on it.
On Fri, Sep 16, 2016 at 2:11 PM, Steinar H. Gunderson
wrote:
> On Fri, Sep 16, 2016 at 02:04:37PM -0700, Dave Taht wrote:
>> I'm really lo
On Tue, 2015-07-28 at 16:49 +0200, Eric Dumazet wrote:
> On Tue, 2015-07-28 at 07:31 -0700, Simon Barber wrote:
> > The issue is that Codel tries to keep the delay low, and will start
> > dropping when sojourn time grows above 5ms for 100ms. For longer RTT links
> > more
On Tue, 2015-07-28 at 16:44 +0200, Dave Taht wrote:
>
> 4) setting the default qdisc to be fq_codel in the sysctl works on
> most dynamically created devices, but does not work on devices that
> come up at boot time before the sysctls have a chance to run and the
> modprobes to complete. This is
On Tue, 2015-07-28 at 07:31 -0700, Simon Barber wrote:
> The issue is that Codel tries to keep the delay low, and will start
> dropping when sojourn time grows above 5ms for 100ms. For longer RTT links
> more delay is necessary to avoid underutilizing the link. This is due to
> the multiplicativ
On Tue, 2015-07-28 at 07:11 -0700, Simon Barber wrote:
> The main danger is the negative effects on performance of using Codel. You
> may experience low throughput on high RTT links.
Really ? I've never seen this, unless you mess with codel qdisc
attributes maybe.
(Some guys seem to not really u
On Tue, 2015-07-28 at 15:50 +0200, Juliusz Chroboczek wrote:
> > What distro are you on?
>
> Debian. (One machine stable, one testing.)
>
> > There's a sysctl to set the default qdisc: net.core.default_qdisc.
>
> $ sysctl -a | grep default_qdisc
> net.core.default_qdisc = pfifo_fast
>
On Sun, 2015-06-21 at 17:04 +0200, Juliusz Chroboczek wrote:
> Do you have any concrete examples where this has been found to yield
> better results than what the kernel can do on its own? I'm probably
> missing something, but my intuition is that conversational protocols
> should be naturall
On Fri, 2015-06-19 at 07:07 +0300, Jonathan Morton wrote:
> > On 19 Jun, 2015, at 05:47, Juliusz Chroboczek
> wrote:
> >
> >> I am curious if anyone has tried this new socket option in
> appropriate apps,
> >
> > I'm probably confused, but I don't see how this is different from
> setting SO_SNDB
On Wed, 2015-06-17 at 08:49 +0300, Jonathan Morton wrote:
> 2) The active flow counter is now an atomic-access variable. This is really
> just an abundance of caution.
Certainly not needed.
Qdisc enqueue() & dequeue() are done under qdisc spinlock protection.
On Tue, 2015-06-16 at 10:53 -0700, Dave Taht wrote:
> But guidelines on how to configure it in applications are missing.
Well, you know the optimal rate -> set it. It is that simple.
If not, leave this to the TCP stack.
> As
> are when where and how to implement it in DCs, handheld clients,
> inte
On Tue, 2015-06-16 at 10:22 -0700, Dave Taht wrote:
> Take samba as another potential example. I commonly see this
> increasing the SO_SNDBUF to a given value, but I am not sure if this
> is the right thing anymore. As samba is commonly used for filesharing
> (and things that take locks and do dat
On Thu, 2015-05-07 at 16:09 -0700, Dave Taht wrote:
> Recently I did some tests of 450+ flows (details on the cake mailing
> list) against sch_fq which got hopelessly buried (1 packets in
> queue). cake and fq_pie did a lot better.
Seriously, comparing sch_fq against fq_pie or fq_codel or cak
Hi Bill
I am confused, because it looks like this is already there in the kernel.
Look for the SO_MAX_PACING_RATE setsockopt()?
It works for all sockets, not only TCP ;)
http://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=62748f32d501f5d3712a7c372bbb92abc7c62bc7
Instead of doin
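For completeness, using it is a one-liner. A hedged sketch; the helper
name and the 100 Mbit/s example rate are made up, and older kernels read
the option as a 32-bit value in bytes per second.

/* Sketch: cap the kernel pacing rate of one socket (enforced by sch_fq
 * when it is the installed qdisc). */
#include <stdint.h>
#include <sys/socket.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47          /* asm-generic/socket.h */
#endif

static int set_pacing_cap(int fd, uint32_t bytes_per_sec)
{
    return setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                      &bytes_per_sec, sizeof(bytes_per_sec));
}

/* e.g. set_pacing_cap(fd, 12500000);   100 Mbit/s expressed in bytes/sec */

Passing the all-ones value puts the cap back to its default, i.e. no
limit; this is essentially what the iperf pacing mentioned earlier in
these threads boils down to.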
On Fri, 2015-04-24 at 09:31 -0700, Rick Jones wrote:
> > "netperf -t TCP_STREAM" uses a default size of 16384 bytes per sendmsg.
>
> Under Linux at least, and only because that is the default initial value
> for SO_SNDBUF for a TCP socket (via tcp_wmem).
>
> More generally, the default send size
On Thu, 2015-04-23 at 21:40 -0700, Dave Taht wrote:
> and of course, after writing the previous email, I go reading the
> original commit for this option. Yea, that is a huge increase in
> context switches...
>
> https://lwn.net/Articles/560082/
>
> ... but totally worth it for many apps that can
On Thu, 2015-04-23 at 21:37 -0700, Dave Taht wrote:
> On Wed, Apr 22, 2015 at 2:05 PM, Rick Jones wrote:
> > On 04/22/2015 02:02 PM, Eric Dumazet wrote:
> >>
> >> Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
>
> I will argue that that would
On Thu, 2015-04-23 at 12:17 +0200, renaud sallantin wrote:
> Hi,
> ...
>
> We did an extensive work on the Pacing in slow start and notably
> during a large IW transmission.
>
> Benefits are really outstanding! Our last implementation is just a
> slight modification of FQ/pacing
> * Sal
Wait, this is a 15-year-old experiment using Reno and a single test
bed, using the ns simulator.
Naive TCP pacing implementations were tried, and probably failed.
Pacing individual packets is quite bad; this is the first lesson one
learns when implementing TCP pacing, especially if you try to drive a
On Wed, 2015-04-22 at 15:20 -0700, Simon Barber wrote:
> Wouldn't the LOWAT setting be much easier for applications to use if it was
> set in estimated time (ie time it will take to deliver the data) rather
> than bytes?
Sure, but you have all the info to infer one from the other.
Note also TCP
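The inference Eric alludes to is straightforward: a latency budget times
the current send rate gives the byte threshold. A purely illustrative
helper; the function name and inputs are hypothetical, and the rate could
come from the application's own pacing setting or from TCP_INFO.

/* Sketch: derive TCP_NOTSENT_LOWAT from a latency budget rather than a
 * raw byte count. */
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef TCP_NOTSENT_LOWAT
#define TCP_NOTSENT_LOWAT 25           /* linux/tcp.h */
#endif

static int set_lowat_for_delay(int fd, unsigned int target_ms,
                               uint64_t rate_bytes_per_sec)
{
    /* Unsent bytes that would drain in target_ms at the current rate. */
    unsigned int lowat =
        (unsigned int)(rate_bytes_per_sec * target_ms / 1000);

    return setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                      &lowat, sizeof(lowat));
}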
On Wed, 2015-04-22 at 14:05 -0700, Rick Jones wrote:
> On 04/22/2015 02:02 PM, Eric Dumazet wrote:
> > Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
>
> Don't go telling Dave about that, he wants me to put too much into
> netperf as it is!-)
Note
On Wed, 2015-04-22 at 23:07 +0200, Steinar H. Gunderson wrote:
> On Wed, Apr 22, 2015 at 02:02:32PM -0700, Eric Dumazet wrote:
> > Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
>
> But this is only for when your data could change underway, right?
> Like
Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
On Wed, 2015-04-22 at 12:28 -0700, Dave Taht wrote:
> SO_SNDLOWAT or something similar to it with a name I cannot recall,
> can be useful.
>
> On Wed, Apr 22, 2015 at 12:10 PM, Hal Murray wrote:
> >
> >> As I understand it (I
On Wed, 2015-04-22 at 10:53 -0700, Jim Gettys wrote:
> Actually, fq_codel's sparse flow optimization provides a pretty strong
> incentive for pacing traffic.
>
>
> If your TCP traffic is well paced, and is running at a rate below that
> of the bottleneck, then it will not build a queue.
>
>
>
On Wed, 2015-04-22 at 19:28 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> On 04/22/2015 07:16 PM, Eric Dumazet wrote:
> >
> > sch_fq adds *pacing*, which in itself has benefits, regardless of fair
> > queues : Smaller bursts, less self inflicted drops.
>
> This I understand
On Wed, 2015-04-22 at 18:35 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> FQ gives you flow isolation.
So does fq_codel.
sch_fq adds *pacing*, which in itself has benefits, regardless of fair
queues: smaller bursts, fewer self-inflicted drops.
If flows are competing, this is the role of Congestion C
On Wed, 2015-04-22 at 17:59 +0200, Steinar H. Gunderson wrote:
> On Wed, Apr 22, 2015 at 03:26:27PM +, luca.muscarie...@orange.com wrote:
> > BTW if a paced flow from Google shares a bloated buffer with a non paced
> > flow from a non Google server, doesn't this turn out to be a performance
>
On Wed, 2015-04-22 at 15:26 +, luca.muscarie...@orange.com wrote:
> Do I need to read this as all Google servers == all servers :)
>
>
Read again what I wrote. Don't play with my words.
> BTW if a paced flow from Google shares a bloated buffer with a non
> paced flow from a non Google serv
On Wed, 2015-04-22 at 08:51 +, luca.muscarie...@orange.com wrote:
> cons: large BDP in general would be negatively affected.
> A Gbps access vs a DSL access to the same server would require very
> different tuning.
>
Yep. This is what I mentioned with 'long rtt'. This was relative to BDP.
>
On Wed, 2015-04-22 at 06:04 +0200, Steinar H. Gunderson wrote:
> On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> > As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket buffers
> > for the application layer, they do not change the TCP window size either
> > send or receive.
>
> I
Hmpff.
Apparently, the mechanisms we put in place are black magic for most
people.
Burn the magicians, so that good people can live in peace.
On Wed, 2015-04-15 at 20:15 -0700, Dave Taht wrote:
> see thread at: https://lkml.org/lkml/2015/4/15/448
>
On Thu, 2014-09-11 at 19:40 +0200, Steinar H. Gunderson wrote:
> On Thu, Sep 11, 2014 at 10:05:46AM -0700, Eric Dumazet wrote:
> >> Is anyone aware of any research either pointing out how their tuning
> >> algorithm works, or of known bugs/problems with the algorithm?
&
On Mon, 2014-09-01 at 13:23 -0400, Jerry Jongerius wrote:
> I am noticing (via WireShark traces) at times that Microsoft's (Windows 7)
> receive window auto-tuning goes horribly wrong, causing significant buffer
> bloat. And at other times, the tuning appears to work just fine.
>
> For example, B
On Mon, 2014-03-24 at 16:10 -0700, Dave Taht wrote:
> One of my concerns is that sch_fq is (?) currently inadequately
> explored in the case of wifi - as best as I recall there were a
> ton of drivers that cloned and disappeared the skb deep
> in driver buffers, making fq_codel a mildly better cho
On Mon, 2014-03-24 at 10:09 -0700, Dave Taht wrote:
>
> It has long been my hope that conventional distros would start
> selecting sch_fq and sch_fq_codel up in safe scenarios.
>
> 1) Can an appropriate clocksource be detected from userspace?
>
> if [ have_good_clocksources ]
> then
> if [ i am
On Fri, 2014-03-21 at 22:13 +, Dave Taht wrote:
> Are you ready to make sch_fq the default in 3.15?
sch_fq depends on ktime_get(), so it is a no-go if you have a
clocksource using hpet. pfifo_fast doesn't have such issues.
Another issue is TCP CUBIC Hystart 'ACK TRAIN' detection that triggers
On Fri, 2014-03-21 at 18:08 +, Dave Taht wrote:
> This is likely to have brutal effects on slow uplinks and uploads
> without pacing enabled.
All I said was related to using fq/pacing, maybe it was not clear ?
On Fri, 2014-03-21 at 16:53 +0100, renaud sallantin wrote:
> For our tests, we needed to adjust the "tcp_initial_quantum" in the
> FQ,
> but as you said, it is just a FQ parameter.
>
Yep, default ones are a compromise between performance and pacing
accuracy. At 40Gbps speeds, it is a bit chall
On Fri, 2014-03-21 at 09:15 +0100, renaud sallantin wrote:
>
> FQ/pacing enables to do a lot of things,
> and as I already said, it could be used to easily implement the
> Initial Spreading.
> (we did it and it' s just a few lines to add, and a couple of
> parameters to change)
>
>
> But for t
On Thu, 2014-03-20 at 13:23 -0700, Dave Taht wrote:
> Well there is some good work in linux 3.14 and beyond, and there was also
> some interesting work on "initial spreading" presented at ietf.
>
> Hopefully patches for this will be available soon.
>
> http://tools.ietf.org/html/draft-sallantin-
On Wed, 2013-09-25 at 17:38 +0200, Luca MUSCARIELLO wrote:
> Then, I feel like FQ is a bad name to call this "newFQ".
> It's an implementation of a fair TCP pacer. Which is very useful, but FQ
> is kind of misleading, IMHO.
No problem, feel free to send a patch. I am very bad at choosing names.
On Tue, 2013-09-24 at 14:25 +0200, James Roberts wrote:
> No one responded to Luca's Sept 1 comment (on the bloat list) that the
> new code seems to do tail drop rather than longest queue drop.
>
>
> If this is so, bandwidth sharing will not be fair since FQ alone is
> not enough. This was shown
On Sat, 2013-08-31 at 13:47 -0700, Dave Taht wrote:
>
>
> Eric Dumazet just posted a pure fq scheduler (using the highly
> optimized red/black trees in the kernel)
>
> http://marc.info/?l=linux-netdev&m=137740009008261&w=2
>
>
> which "scales to mil
On Tue, 2013-07-09 at 19:38 +0200, Jaume Barcelo wrote:
> Hi,
>
> I was explaining the bufferbloat problem to some undergrad students
> showing them the "Bufferbloat: Dark Buffers in the Internet" paper. I
> asked them to find a solution for the problem and someone pointed at
> Fig. 1 and said "Th
On Thu, 2013-06-06 at 15:55 +0200, Jesper Dangaard Brouer wrote:
> Requesting comments.
>
> So, basically commit 56b765b79 (htb: improved accuracy at high rates),
> broke the "linklayer atm" handling. As it didn't update the iproute tc util.
>
> Treating this as a regression fix, this is the sma
On Tue, 2013-06-04 at 13:26 -0700, Dave Taht wrote:
>
>
> I'm not worried about it but will find out shortly in cerowrt.
>
> I take it this does not fix the DSL/ATM issue however?
>
>
Really, the DSL/ATM stuff should use the STAB thing.
I suppose you already disabled GSO/TSO anyway.
__qdi
On Tue, 2013-06-04 at 22:21 +0200, Jesper Dangaard Brouer wrote:
> But how is this 64-bit usage going to affect performance for smaller
> ARM/MIPS based home routers, where shaping at these low rates is more
> relevant? (I'm just asking because I don't know, and just test this
> on a 24-CPU machi
From: Eric Dumazet
commit 56b765b79 ("htb: improved accuracy at high rates") added another
regression for low rates, because it mixes 1ns and 64ns time units.
So the maximum delay (mbuffer) was not 60 seconds, but 937 ms.
Let's convert all time fields to 1ns as 64bit arches are becomin
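The 937 ms figure follows directly from the unit mix-up, assuming the
60-second horizon was computed in 1 ns units and then consumed as 64 ns
ticks:

#include <stdio.h>

int main(void)
{
    /* 60 s expressed in 1 ns units... */
    unsigned long long horizon_ns = 60ULL * 1000 * 1000 * 1000;
    /* ...but divided down as if each unit were a 64 ns tick. */
    unsigned long long effective_ns = horizon_ns / 64;

    printf("intended %llu ns, effective %.1f ms\n",
           horizon_ns, effective_ns / 1e6);  /* prints ~937.5 ms */
    return 0;
}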
On Tue, 2013-06-04 at 08:55 -0700, Eric Dumazet wrote:
> On Tue, 2013-06-04 at 08:18 -0700, Eric Dumazet wrote:
>
> > I have a good idea of what's going on for htb at low rates, I am testing
> > a fix, thanks for the report !
> >
>
> Yes, we need to convert w
On Tue, 2013-06-04 at 08:18 -0700, Eric Dumazet wrote:
> I have a good idea of what's going on for htb at low rates, I am testing
> a fix, thanks for the report !
>
Yes, we need to convert the whole thing to use ns units, instead
of a mix of 64ns and 1ns units.
Please test the f
On Tue, 2013-06-04 at 14:13 +0200, Jesper Dangaard Brouer wrote:
> Hi again,
>
> I found another regression by commit 56b765b79 (htb: improved accuracy
> at high rates).
>
> After the commit HTB does not honor network rate limiting below 500kbps.
>
> I have found that the bandwidth problem is re
ata[] array.
"tc ... linklayer atm " only perturbs values in the 256 slots array.
Signed-off-by: Eric Dumazet
---
net/sched/sch_api.c | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index 2b935e7..281c1
On Thu, 2013-05-30 at 09:51 +0200, Jesper Dangaard Brouer wrote:
> On Wed, 29 May 2013 08:52:04 -0700
> Eric Dumazet wrote:
> > I am not sure it will solve the ATM logic (with the 5 bytes overhead
> > per 48 bytes cell)
>
> Are you talking about, that for GSO frames w
On Wed, 2013-05-29 at 15:50 -0700, Stephen Hemminger wrote:
> On Wed, 29 May 2013 08:52:04 -0700
> Eric Dumazet wrote:
>
> > On Wed, 2013-05-29 at 15:13 +0200, Jesper Dangaard Brouer wrote:
> > > I recently discovered that the (traffic control) tc linklayer
> > &g
On Wed, 2013-05-29 at 15:13 +0200, Jesper Dangaard Brouer wrote:
> I recently discovered that the (traffic control) tc linklayer
> calculations for ATM/ADSL have been broken by:
> commit 56b765b79 (htb: improved accuracy at high rates).
>
> Thus, people shaping on ADSL links, using e.g.:
> tc cl