That makes sense. Ok.
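To put rough numbers on Simon's point below, here is a back-of-envelope
sketch. The link speed, RTT, buffer size and window excess are all
illustrative assumptions, not measurements from the test servers:

    # How many connections does it take to overflow the bottleneck buffer?
    # Assumed (hypothetical) numbers: 10 Mbit/s uplink, 40 ms RTT,
    # 256 KB SOHO modem buffer, receive window set 16 KB above the BDP.

    link_rate_bps = 10e6        # uplink speed: 10 Mbit/s
    rtt_s         = 0.040       # round-trip time: 40 ms
    buffer_bytes  = 256 * 1024  # modem buffer: 256 KB
    excess_bytes  = 16 * 1024   # rwin minus BDP, per connection: 16 KB

    bdp_bytes = link_rate_bps / 8 * rtt_s          # BDP = rate * RTT, ~49 KB
    # Each connection can park at most (rwin - BDP) bytes in the bottleneck
    # queue, so filling the buffer takes roughly:
    conns_to_overflow = buffer_bytes / excess_bytes  # = 16 connections

    print("BDP: %.0f KB" % (bdp_bytes / 1024))
    print("Connections to fill buffer: %.0f" % conns_to_overflow)

With the window only 16 KB above the ~49 KB BDP, it takes about 16
concurrent connections to fill a 256 KB buffer; with an unclamped
multi-megabyte window, a single connection can do it.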
On Wed, Apr 22, 2015 at 12:14 PM, Simon Barber <si...@superduper.net> wrote:

> If you set the window only a little bit larger than the actual BDP of
> the link then there will only be a little bit of data to fill the
> buffer, so given large buffers it will take many connections to
> overflow the buffer.
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
> On April 21, 2015 4:18:10 PM jb <jus...@dslr.net> wrote:
>
>> Regarding the low TCP RWIN max setting, and smoothness.
>>
>> One remark up-thread still bothers me. It was pointed out (and it makes
>> sense to me) that if you set a low TCP max rwin it is per stream, but if
>> you do multiple streams you are still going to rush the soho buffer.
>>
>> However my observation with a low server rwin max was that the smooth
>> upload graph was the same whether I did 1 upload stream or 6 upload
>> streams, or apparently any number.
>> I would have thought that with 6 streams, the PC is going to try to
>> flood 6x as much data as 1 stream, and this would put you back to
>> square one. However this was not what happened. It was puzzling that
>> no matter what, one setting server side got rid of the chop.
>> Anyone got any plausible explanations for this?
>>
>> If not, I'll run some more tests with 1, 6 and 12, to a low rwin
>> server, and post the graphs to the list. I might also have to start to
>> graph the interface traffic on a sub-second level, rather than the
>> browser traffic, to make sure the browser isn't lying about the stalls
>> and chop.
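(On graphing the interface at sub-second resolution: a minimal sketch,
assuming a Linux host. It polls the cumulative TX byte counter in
/proc/net/dev every 100 ms and prints the rate; "eth0" is a placeholder
for the actual upload-facing interface.)

    import time

    IFACE = "eth0"  # hypothetical; substitute the real interface name

    def tx_bytes(iface):
        # /proc/net/dev holds per-interface counters; the 9th field
        # after the colon is cumulative TX bytes.
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    return int(line.split(":", 1)[1].split()[8])
        raise ValueError("interface %s not found" % iface)

    prev = tx_bytes(IFACE)
    while True:
        time.sleep(0.1)
        cur = tx_bytes(IFACE)
        # bytes over 0.1 s -> Mbit/s; stalls show up as near-zero samples
        print("%.2f Mbit/s" % ((cur - prev) * 8 / 0.1 / 1e6))
        prev = cur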
>> This 7800N has settings for priority of traffic, and utilisation (as a
>> percentage). Utilisation % didn't help, but priority helped. Making web
>> low priority and SSH high priority smoothed things out a lot without
>> changing the speed. Perhaps "low" priority means it isn't so eager to
>> fill its buffers..
>>
>> thanks
>>
>> On Wed, Apr 22, 2015 at 8:13 AM, jb <jus...@dslr.net> wrote:
>>
>>> Today I've switched it back to large receive window max.
>>>
>>> The customer base is everything from GPRS to gigabit. But I know from
>>> experience that if a test doesn't flatten someone's gigabit connection
>>> they will immediately assume "oh, congested servers, insufficient
>>> capacity", and the early adopters of fiber to the home and faster
>>> cable products are the most visible in tech forums and so on.
>>>
>>> It would be interesting to set one or a few servers with a small
>>> receive window, take them from the pool, and allow an option to select
>>> those; otherwise they would not participate in any default run. Then,
>>> as you point out, the test can suggest trying those as an option for
>>> results with chaotic upload speeds and probable bloat. The person
>>> would notice the beauty of the more intimate connection between their
>>> kernel and a server, and work harder to eliminate the problematic
>>> equipment. Or they'd stop telling me the test was bugged.
>>>
>>> thanks
>>>
>>> On Wed, Apr 22, 2015 at 12:28 AM, David Lang <da...@lang.hm> wrote:
>>>
>>>> On Tue, 21 Apr 2015, David Lang wrote:
>>>>
>>>>> On Tue, 21 Apr 2015, David Lang wrote:
>>>>>
>>>>>>> I suspect you guys are going to say the server should be left
>>>>>>> with a large max receive window.. and let people complain to find
>>>>>>> out what their issue is.
>>>>>>
>>>>>> what is your customer base? how important is it to provide faster
>>>>>> service to the fiber users? Are they transferring ISO images so the
>>>>>> difference is significant to them? or are they downloading web
>>>>>> pages where it's the difference between a half second and a quarter
>>>>>> second? remember that you are seeing this on the upload side.
>>>>>>
>>>>>> in the long run, fixing the problem at the client side is the best
>>>>>> thing to do, but in the meantime you sometimes have to work around
>>>>>> broken customer stuff.
>>>>>
>>>>> for the speedtest servers, it should be set large; the purpose is to
>>>>> test the quality of the customer stuff, so you don't want to do
>>>>> anything on your end that papers over the problem, only to have the
>>>>> customer think things are good and experience problems when
>>>>> connecting to another server that doesn't implement work-arounds.
>>>>
>>>> Just after hitting send it occurred to me that it may be the right
>>>> thing to have the server that's being hit by the test play with these
>>>> settings. If the user works well at lower settings but has problems
>>>> at higher settings, the point where they start having problems may be
>>>> useful to know.
>>>>
>>>> David Lang
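For anyone wanting to stand up the small-receive-window test server jb
and David describe, here is a hedged sketch of one way to do it on
Linux. The port and the 64 KB cap are illustrative assumptions, and
this clamps per socket rather than whatever a production server would
actually use:

    import socket

    RWIN = 64 * 1024  # illustrative cap: fine for DSL, throttles gigabit

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Set before listen() so accepted sockets inherit it and the
    # advertised window is capped; on Linux this also disables receive
    # autotuning for these sockets, and the kernel doubles the requested
    # value to allow for bookkeeping overhead.
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RWIN)
    srv.bind(("0.0.0.0", 8080))  # hypothetical port
    srv.listen(16)
    while True:
        conn, addr = srv.accept()
        while conn.recv(65536):  # sink the upload data
            pass
        conn.close()

A box-wide alternative is capping receive autotuning via sysctl, e.g.
net.ipv4.tcp_rmem = "4096 65536 65536", which bounds the window for
every TCP socket on the server.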
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat