Wait, this is a 15-year-old experiment using Reno and a single test
bed, using the ns simulator.
Naive TCP pacing implementations were tried, and probably failed.
Pacing individual packets is quite bad; this is the first lesson one
learns when implementing TCP pacing, especially if you try to drive a
One reference with the PDF publicly available. On the website there are
various papers on this topic. Others might be more relevant but I did not
check all of them.
Understanding the Performance of TCP Pacing,
Amit Aggarwal, Stefan Savage, and Tom Anderson,
IEEE INFOCOM 2000, Tel-Aviv, Israel, March 2000.
I did a little post processing on that data set (with sch_fq hitting
its packet limit),
attached is the result and script that generated it, as well as the test script.
Based on this extremely limited analysis, I might argue that sch_fq does not,
actually, scale well to millions of flows at gigE,
Does this happen even with Sack?
Simon
On April 22, 2015 10:36:11 AM David Lang wrote:
Data that's received and not used doesn't really matter (a tree falls in the
woods type of thing).
The head of line blocking can cause a chunk of packets to be retransmitted,
even though the receiving machine got them the first time.
On Wed, 2015-04-22 at 15:20 -0700, Simon Barber wrote:
> Wouldn't the LOWAT setting be much easier for applications to use if it was
> set in estimated time (i.e. the time it will take to deliver the data) rather
> than bytes?
Sure, but you have all the info to infer one from the other.
Note also TCP
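To make the "infer one from the other" point concrete, here is a tiny sketch
(my own example, with made-up numbers) of turning a drain-time target into a
byte value for TCP_NOTSENT_LOWAT, given some estimate of the socket's send
rate (the application's own accounting, or a pacing rate exposed via TCP_INFO
on kernels that have it):

/* Hypothetical sketch: derive a TCP_NOTSENT_LOWAT byte value from a desired
 * "time to drain" target and an estimated send rate.  The rate here is a
 * made-up number; in practice it could come from the application's own
 * accounting, or from a pacing rate exposed through TCP_INFO. */
#include <stdio.h>

int main(void)
{
    double rate_bytes_per_sec = 2.5e6;   /* assumed ~20 Mbit/s send rate */
    double target_seconds     = 0.05;    /* allow at most ~50 ms of unsent data */

    unsigned int lowat_bytes =
        (unsigned int)(rate_bytes_per_sec * target_seconds);
    printf("TCP_NOTSENT_LOWAT ~= %u bytes for a %.0f ms target\n",
           lowat_bytes, target_seconds * 1000.0);
    return 0;
}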
Wouldn't the LOWAT setting be much easier for applications to use if it was
set in estimated time (i.e. the time it will take to deliver the data) rather
than bytes?
Simon
On April 22, 2015 2:47:34 PM Eric Dumazet wrote:
On Wed, 2015-04-22
On Wed, Apr 22, 2015 at 02:42:59PM -0700, Eric Dumazet wrote:
> Sorry, I do not understand you.
>
> The nice thing about TCP_NOTSENT_LOWAT is that you no longer have to
> care about choosing the 'right SO_SNDBUF'
>
> It is still CC responsibility to choose/set cwnd, but you hadn't set an
> artifi
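For reference, a minimal sketch of how an application would opt in to
TCP_NOTSENT_LOWAT on Linux >= 3.12; the 128 kB value is only an illustrative
figure, not a recommendation:

/* Minimal sketch: cap the amount of *unsent* data TCP will queue for this
 * socket, instead of guessing a "right" SO_SNDBUF.  Linux >= 3.12. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef TCP_NOTSENT_LOWAT
#define TCP_NOTSENT_LOWAT 25      /* older libc headers may not define it */
#endif

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int lowat = 128 * 1024;       /* illustrative value: 128 kB of unsent data */
    if (setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                   &lowat, sizeof(lowat)) < 0)
        perror("setsockopt(TCP_NOTSENT_LOWAT)");

    /* From here on, poll()/select() report the socket writable only when the
     * unsent backlog is below 'lowat', so the application never piles up much
     * more than that ahead of what the congestion window can actually send. */
    close(fd);
    return 0;
}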
On Wed, 2015-04-22 at 14:05 -0700, Rick Jones wrote:
> On 04/22/2015 02:02 PM, Eric Dumazet wrote:
> > Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
>
> Don't go telling Dave about that, he wants me to put too much into
> netperf as it is!-)
Note that one can also set a sysc
On Wed, Apr 22, 2015 at 2:42 PM, Eric Dumazet wrote:
> On Wed, 2015-04-22 at 23:07 +0200, Steinar H. Gunderson wrote:
>> On Wed, Apr 22, 2015 at 02:02:32PM -0700, Eric Dumazet wrote:
>> > Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
>>
>> But this is only for when your data c
On Wed, 2015-04-22 at 23:07 +0200, Steinar H. Gunderson wrote:
> On Wed, Apr 22, 2015 at 02:02:32PM -0700, Eric Dumazet wrote:
> > Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
>
> But this is only for when your data could change underway, right?
> Like, not relevant for send
On Wed, Apr 22, 2015 at 02:02:32PM -0700, Eric Dumazet wrote:
> Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
But this is only for when your data could change underway, right?
Like, not relevant for sending one big file, but might be relevant for e.g.
VNC (or someone mentione
On 04/22/2015 02:02 PM, Eric Dumazet wrote:
Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
Don't go telling Dave about that, he wants me to put too much into
netperf as it is!-)
rick
Yeah, the real nice thing is TCP_NOTSENT_LOWAT added in linux-3.12
On Wed, 2015-04-22 at 12:28 -0700, Dave Taht wrote:
> SO_SNDLOWAT or something similar to it with a name I cannot recall,
> can be useful.
>
> On Wed, Apr 22, 2015 at 12:10 PM, Hal Murray wrote:
> >
> >> As I understand it (I
SO_SNDLOWAT or something similar to it with a name I cannot recall,
can be useful.
On Wed, Apr 22, 2015 at 12:10 PM, Hal Murray wrote:
>
>> As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket buffers
>> for the application layer, they do not change the TCP window size either
>> send
So it looks like the number you feed it turns into the window size.
A few quick tests with netperf confirm that it is doing something close to
what I expect but I haven't fired up tcpdump to verify that the window size
is what I asked for. netperf does print out values that are 2x what I asked
for.
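The 2x that netperf reports matches documented Linux behaviour: the kernel
doubles the value passed to setsockopt(SO_RCVBUF)/setsockopt(SO_SNDBUF) to
allow for bookkeeping overhead (see socket(7)), and getsockopt() returns the
doubled value. A quick sketch to check it locally:

/* Quick check of the "2x" netperf reports: Linux doubles the value handed to
 * setsockopt(SO_RCVBUF) for bookkeeping overhead, and getsockopt() returns
 * the doubled value (see socket(7)). */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int asked = 65536;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &asked, sizeof(asked)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    int got = 0;
    socklen_t len = sizeof(got);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &got, &len);
    printf("asked for %d, kernel reports %d\n", asked, got);   /* typically 2x */

    close(fd);
    return 0;
}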
> As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket buffers
> for the application layer, they do not change the TCP window size either
> send or receive. Which is perhaps why they aren't used much. They don't do
> much good in iperf that's for sure! Might be wrong, but I agree with
> On 22 Apr, 2015, at 21:39, MUSCARIELLO Luca IMT/OLN
> wrote:
>
> Now, I forgot how sch_fq pacing rate is initialized to be effective from the
> very first window.
IIRC, it’s basically a measurement of the RTT during the handshake, and you
then pace to deliver the congestion window during t
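As a back-of-the-envelope version of "deliver the congestion window over the
measured RTT" (not the exact kernel code; Linux's tcp_update_pacing_rate()
additionally scales the result up, roughly 2x during slow start, and the
numbers below are made up):

/* Back-of-the-envelope sketch of "deliver cwnd over one RTT".  The real
 * computation lives in the kernel's tcp_update_pacing_rate() and scales the
 * result up (roughly 2x in slow start); these are example numbers only. */
#include <stdio.h>

int main(void)
{
    double mss_bytes = 1448.0;    /* example MSS */
    double cwnd_pkts = 10.0;      /* IW10 */
    double srtt_sec  = 0.040;     /* 40 ms RTT measured on the handshake */

    double pacing_rate = cwnd_pkts * mss_bytes / srtt_sec;   /* bytes/sec */
    printf("pace at ~%.0f B/s (~%.1f Mbit/s), one segment every %.1f ms\n",
           pacing_rate, pacing_rate * 8 / 1e6, srtt_sec * 1000.0 / cwnd_pkts);
    return 0;
}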
> On 22 Apr, 2015, at 18:59, Dave Taht wrote:
>
> * Cake totally controls things
Yay! And there really aren’t very many collisions there, so the FQ logic is
still mostly working as designed.
> * As does cake flowblind, but less so
That is surprising, actually. With shaping off, all packets
This is not clear to me in general.
I can understand that for the first shot of IW>=10, pacing is always a
winning strategy no matter what queuing system you have, because it reduces
the loss probability in that window. Still, fq_codel would reduce that
probability even more.
But for long flows
On Wed, 2015-04-22 at 10:53 -0700, Jim Gettys wrote:
> Actually, fq_codel's sparse flow optimization provides a pretty strong
> incentive for pacing traffic.
>
>
> If your TCP traffic is well paced, and is running at a rate below that
> of the bottleneck, then it will not build a queue.
>
>
>
There is some talk of improving Wikipedia's organisation and articles
on these subjects. Me, I have longed to have some time to curate what
is already out there, and perhaps getting Wikipedia straightened out
would be a better approach than anything else.
See:
https://en.wikipedia.org/wiki/Talk:Ne
On Wed, 2015-04-22 at 19:28 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> On 04/22/2015 07:16 PM, Eric Dumazet wrote:
> >
> > sch_fq adds *pacing*, which in itself has benefits, regardless of fair
> > queues : Smaller bursts, less self inflicted drops.
>
> This I understand. But it can't protect from
Jim-
Agreed.
So, I amend my comments to say that there are SEVERAL benefits to pacing – even
when using modern queue management algorithms.
Bvs
Bill Ver Steeg
DISTINGUISHED ENGINEER
vers...@cisco.com
From: gettys
Actually, fq_codel's sparse flow optimization provides a pretty strong
incentive for pacing traffic.
If your TCP traffic is well paced, and is running at a rate below that of
the bottleneck, then it will not build a queue.
It will then be recognized as a "good guy" flow, and scheduled
preferentially
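A toy sketch of the scheduling idea described here, NOT the real
sch_fq_codel.c: flows that arrive with no backlog go on a new_flows list that
is always served before old_flows, so a flow paced below the bottleneck rate
keeps re-entering that list and never waits behind a bulk flow's standing
queue.

/* Toy sketch of the "sparse flow" idea, not the real sch_fq_codel.c.  A flow
 * that arrives with no backlog is put on the new_flows list, which is always
 * served before old_flows; a flow that empties its queue goes back to idle,
 * while a flow that keeps a backlog is demoted behind the sparse ones. */
#include <stdio.h>

enum list { IDLE, NEW_FLOWS, OLD_FLOWS };

struct flow {
    const char *name;
    int backlog;        /* bytes queued for this flow */
    enum list where;    /* which scheduling list the flow sits on */
};

static void enqueue(struct flow *f, int len)
{
    if (f->where == IDLE)
        f->where = NEW_FLOWS;       /* idle flow becomes "sparse", jumps ahead */
    f->backlog += len;
}

static struct flow *dequeue(struct flow *flows[], int n)
{
    for (int pass = 0; pass < 2; pass++) {
        enum list want = (pass == 0) ? NEW_FLOWS : OLD_FLOWS;
        for (int i = 0; i < n; i++) {
            struct flow *f = flows[i];
            if (f->where != want || f->backlog == 0)
                continue;
            f->backlog -= 1500;                  /* send one packet */
            if (f->backlog <= 0) {
                f->backlog = 0;
                f->where = IDLE;                 /* drained: sparse again */
            } else if (f->where == NEW_FLOWS) {
                f->where = OLD_FLOWS;            /* built a queue: demoted */
            }
            return f;
        }
    }
    return NULL;
}

int main(void)
{
    struct flow paced = { "paced", 0, IDLE };    /* trickles in below link rate */
    struct flow bulk  = { "bulk",  0, IDLE };    /* dumps a burst, keeps backlog */
    struct flow *all[] = { &paced, &bulk };

    enqueue(&bulk, 6000);                        /* bulk arrives as a 4-pkt burst */
    for (int round = 0; round < 5; round++) {
        if (round % 2 == 0)
            enqueue(&paced, 1500);               /* one paced packet every 2 rounds */
        struct flow *f = dequeue(all, 2);
        printf("round %d: served %s\n", round, f ? f->name : "nothing");
    }
    /* the paced flow is served ahead of the bulk backlog every time it shows up */
    return 0;
}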
I remember a paper by Stefan Savage from about 15 years ago where he
substantiates this in clearer terms.
If I find the paper I'll send the reference to the list.
On 04/22/2015 07:28 PM, MUSCARIELLO Luca IMT/OLN wrote:
Exactly. Two of the same CC modules competing on the same link, one with
pacing, the oth
Data that's received and not used doesn't really matter (a tree falls in the
woods type of thing).
The head of line blocking can cause a chunk of packets to be retransmitted, even
though the receiving machine got them the first time. So looking at the received
bytes gives you a false picture o
On 04/22/2015 07:16 PM, Eric Dumazet wrote:
On Wed, 2015-04-22 at 18:35 +0200, MUSCARIELLO Luca IMT/OLN wrote:
FQ gives you flow isolation.
So does fq_codel.
yes, the FQ part of fq_codel. that's what I meant. Not the FQ part of
sch_fq.
sch_fq adds *pacing*, which in itself has benefits
On Wed, Apr 22, 2015 at 10:16:19AM -0700, Eric Dumazet wrote:
> sch_fq adds *pacing*, which in itself has benefits, regardless of fair
> queues : Smaller bursts, less self inflicted drops.
Somehow I think sch_fq should just have been named sch_pacing :-)
/* Steinar */
On Wed, 2015-04-22 at 18:35 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> FQ gives you flow isolation.
So does fq_codel.
sch_fq adds *pacing*, which in itself has benefits, regardless of fair
queues : Smaller bursts, less self inflicted drops.
If flows are competing, this is the role of Congestion C
On 04/22/2015 09:19 AM, Dave Taht wrote:
Has anyone added pacing to netperf yet? (I can do so, but would need
guidance as to what getopt option to add)
./configure --enable-intervals
recompile netperf and then you can use:
netperf ... -b -w
If you want to be able to specify an interval sho
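For anyone who wants the same burst-and-wait behaviour without netperf, a
minimal hand-rolled sketch; burst size and interval are made-up example
values, and /dev/null stands in for a connected socket:

/* Minimal sketch of application-level "burst then wait" pacing, roughly what
 * netperf's interval mode does.  Not netperf code; the numbers are examples. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    /* stand-in for a connected TCP socket so the sketch runs anywhere */
    int fd = open("/dev/null", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    char payload[1448];
    memset(payload, 'x', sizeof(payload));

    struct timespec gap = { 0, 10 * 1000 * 1000 };   /* 10 ms between bursts */
    for (int burst = 0; burst < 100; burst++) {
        for (int i = 0; i < 4; i++)                  /* 4 packets per burst */
            if (write(fd, payload, sizeof(payload)) < 0)
                perror("write");
        nanosleep(&gap, NULL);   /* ~4 * 1448 B per 10 ms, roughly 4.6 Mbit/s */
    }
    close(fd);
    return 0;
}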
Actually, pacing does provide SOME benefit even when there are AQM schemes in
place. Sure, the primary benefit of pacing is in the presence of legacy buffer
management algorithms, but there are some additional benefits when used in
conjunction with a modern queue management scheme.
Let's condu
On 04/22/2015 05:44 PM, Eric Dumazet wrote:
On Wed, 2015-04-22 at 15:26 +, luca.muscarie...@orange.com wrote:
Do I need to read this as all Google servers == all servers :)
Read again what I wrote. Don't play with my words.
let the stupid guy ask questions.
In the worst case don't answ
On Wed, Apr 22, 2015 at 8:59 AM, Steinar H. Gunderson
wrote:
> On Wed, Apr 22, 2015 at 03:26:27PM +, luca.muscarie...@orange.com wrote:
>> BTW if a paced flow from Google shares a bloated buffer with a non paced
>> flow from a non Google server, doesn't this turn out to be a performance
>> pe
On Wed, 2015-04-22 at 17:59 +0200, Steinar H. Gunderson wrote:
> On Wed, Apr 22, 2015 at 03:26:27PM +, luca.muscarie...@orange.com wrote:
> > BTW if a paced flow from Google shares a bloated buffer with a non paced
> > flow from a non Google server, doesn't this turn out to be a performance
>
I wanted to exercise cake's 8 way set associative hash, so I tried to
get to where I had 450 flows going at gigE and could see collisions.
Then out of perverse curiosity, I went and looked at some other qdiscs
like pie, sch_fq, codel, ns2_codel etc. I will not claim to have been
terribly scientific
On Wed, Apr 22, 2015 at 03:26:27PM +, luca.muscarie...@orange.com wrote:
> BTW if a paced flow from Google shares a bloated buffer with a non paced
> flow from a non Google server, doesn't this turn out to be a performance
> penalty for the paced flow?
Nope. The paced flow puts less strain on
On Wed, 2015-04-22 at 15:26 +, luca.muscarie...@orange.com wrote:
> Do I need to read this as all Google servers == all servers :)
>
>
Read again what I wrote. Don't play with my words.
> BTW if a paced flow from Google shares a bloated buffer with a non
> paced flow from a non Google serv
Do I need to read this as all Google servers == all servers :)
BTW if a paced flow from Google shares a bloated buffer with a non paced flow
from a non Google server, doesn't this turn out to be a performance penalty
for the paced flow?
fq_codel gives incentives to do pacing but if it's not de
The bumps are due to packet loss causing head of line blocking. Until the
lost packet is retransmitted, the receiver can't release any subsequent
received packets to the application, due to the requirement for in-order
delivery. If you counted received bytes with a packet counter rather than
look
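A toy illustration of that measurement difference, with made-up sequence
numbers: counting every byte seen on the wire (including retransmits of data
the receiver already holds) versus counting only bytes deliverable in order
to the application.

/* Toy illustration: total bytes seen on the wire (a per-packet byte counter)
 * versus bytes actually deliverable in order to the application, when the
 * segment at offset 1500 is lost and later retransmitted along with data the
 * receiver already had.  All numbers are made up for the example. */
#include <stdio.h>

int main(void)
{
    struct { long seq; long len; } seg[] = {
        { 0, 1500 }, { 3000, 1500 }, { 4500, 1500 },     /* 1500..2999 was lost */
        { 1500, 1500 }, { 3000, 1500 }, { 4500, 1500 },  /* retransmitted chunk */
    };

    long wire_bytes = 0;        /* what a packet/byte counter at the NIC shows */
    long in_order   = 0;        /* highest contiguous byte released to the app */

    for (unsigned i = 0; i < sizeof(seg) / sizeof(seg[0]); i++) {
        wire_bytes += seg[i].len;
        if (seg[i].seq == in_order)     /* simplification: advance only when the
                                           head-of-line hole is filled in order */
            in_order = seg[i].seq + seg[i].len;
    }
    printf("bytes on the wire: %ld, delivered in order: %ld\n",
           wire_bytes, in_order);       /* 9000 vs 6000 for this example */
    return 0;
}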
Yes - the classic one is TCP Vegas.
Simon
On April 22, 2015 5:03:26 AM jb wrote:
So I find a page that explains SO_RCVBUF is allegedly the most poorly
implemented on Linux, vs Windows or OSX, mainly because the one you start
with is th
On Wed, Apr 22, 2015 at 06:50:57AM -0700, Eric Dumazet wrote:
> You know, 'usual servers' used to run pfifo_fast, they now run sch_fq.
>
> (All Google fleet at least)
I think Google is a bit ahead of the curve here :-) Does any distribution
ship sch_fq by default yet?
/* Steinar */
DMCA takedown request?
On April 22, 2015 4:42:38 AM Rich Brown wrote:
Google Alerts informed me that site bufferbloat dot fashionsnowboots dot
com is talking about "Bufferbloat". Sure enough, it appears to have cloned
the content from
On Wed, 2015-04-22 at 08:51 +, luca.muscarie...@orange.com wrote:
> cons: large BDP in general would be negatively affected.
> A Gbps access vs a DSL access to the same server would require very
> different tuning.
>
Yep. This is what I mentioned with 'long rtt'. This was relative to BDP.
>
> On 22 Apr, 2015, at 15:02, jb wrote:
>
> ...data is needed to shrink the window to a new setting, instead of slamming
> it shut by setsockopt
I believe that is RFC-compliant behaviour; one is not supposed to renege on an
advertised TCP receive window. So Linux holds the rwin pointer in place
So I find a page that explains SO_RCVBUF is allegedly the most poorly
implemented on Linux, vs Windows or OSX, mainly because the value you start
with is the cap: you can go lower, but not higher, and data is needed to
shrink the window to a new setting, instead of slamming it shut by
setsockopt.
Nev
Google Alerts informed me that site bufferbloat dot fashionsnowboots dot com is
talking about "Bufferbloat". Sure enough, it appears to have cloned the content
from our home page with the following additions:
- It offers to install the Java plugin 12.3
- All the links on the page encourage you t
cons: large BDP in general would be negatively affected.
A Gbps access vs a DSL access to the same server would require very different
tuning.
sch_fq would probably make the whole thing less of a problem.
But running it in a VM does not sound like a good idea and would not reflect
usual server settings.
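For a rough sense of the scale difference behind that tuning point (example
figures, not measurements): the bandwidth-delay product, rate times RTT, is
what the sender must keep in flight to fill the pipe.

/* Bandwidth-delay product, rate * RTT: how much data has to be in flight
 * (and buffered by the sender) to fill the pipe.  Link speeds and RTTs are
 * example figures, not measurements. */
#include <stdio.h>

int main(void)
{
    struct { const char *name; double bits_per_sec; double rtt_sec; } link[] = {
        { "1 Gbit/s access, 100 ms RTT", 1e9, 0.100 },
        { "8 Mbit/s DSL, 50 ms RTT",     8e6, 0.050 },
    };

    for (int i = 0; i < 2; i++) {
        double bdp_bytes = link[i].bits_per_sec / 8.0 * link[i].rtt_sec;
        printf("%s: BDP ~= %.0f kB\n", link[i].name, bdp_bytes / 1024.0);
    }
    /* ~12 MB vs ~49 kB: the same fixed socket buffer cannot suit both. */
    return 0;
}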