All current TCP implementations have a minimum window size of two segments.
If you have 20 open connections, then the minimum bandwidth TCP will
attempt to consume is (2 segments * 20 connections) / latency. If your
latency is very low relative to your bandwidth, this floor can exceed the
link rate, and the senders will not respond to congestion.
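To make the arithmetic concrete, here is a rough sketch of that bandwidth floor (the 1500-byte segment size and the example RTTs are my own assumptions for illustration, not figures from the thread):

```python
# Estimate the bandwidth floor imposed by TCP's minimum congestion
# window of 2 segments when many connections run in parallel.
SEGMENT_BYTES = 1500     # assumed MSS-sized segment
MIN_CWND = 2             # minimum congestion window, in segments
CONNECTIONS = 20         # parallel connections, as in the Steam case below

def min_bandwidth_bps(rtt_seconds):
    """Minimum aggregate rate TCP will try to sustain, in bits/s."""
    bytes_in_flight = MIN_CWND * SEGMENT_BYTES * CONNECTIONS
    return bytes_in_flight * 8 / rtt_seconds

# At a 10 ms RTT: 2 * 1500 * 20 * 8 / 0.010 = 48 Mbit/s,
# already more than many cable/VDSL downlinks can carry.
print(min_bandwidth_bps(0.010) / 1e6)  # → 48.0
```

At a 20 ms RTT the floor halves to 24 Mbit/s, which is why low-latency paths on modest links are hit hardest.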

On Thu, Apr 6, 2017 at 9:47 PM, cloneman <clone...@gmail.com> wrote:

> Hi,
>
> Apologies in advance if this is the wrong place to ask; I haven't been
> able to locate an official discussion board.
>
> I'm looking for any comments on Steam's game distribution download system
> - specifically how it defeats any bufferbloat mitigation system I've used.
>
> It seems to push past inbound policers, exceeding them by about 40%. That
> is to say, if you police Steam traffic to half of your line rate, just
> enough capacity remains to avoid packet loss, latency, and jitter.
> Obviously this is too much bandwidth to reserve.
>
> Without any inbound control, you can expect very heavy packet loss and
> jitter. With fq_codel or SFQ, and with the usual recommended 15% of
> bandwidth taken off the table, you get improved but still unacceptable
> performance in your small flows, ping, etc.
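For reference, the kind of inbound control being defeated here can be sketched as an IFB redirect with an HTB rate limit feeding fq_codel. The interface names and the 85 Mbit figure (15% off an assumed 100 Mbit line) are placeholders, not the exact setup from this thread:

```shell
# Sketch of ingress shaping: redirect inbound eth0 traffic to an IFB
# device, then shape it to ~85% of an assumed 100 Mbit line with
# HTB + fq_codel. Requires root; names and rates are illustrative.
ip link add ifb0 type ifb
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root handle 1: htb default 1
tc class add dev ifb0 parent 1: classid 1:1 htb rate 85mbit
tc qdisc add dev ifb0 parent 1:1 fq_codel
```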
>
> The behavior can be observed by downloading any free game on their
> platform. I'm trying to figure out how they've accomplished this and how
> to mitigate it. The downloader operates with 20 HTTP connections
> simultaneously, which is normally not an issue (multiple parallel web
> downloads perform well under fq_codel).
>
> Note: in my testing, cable and VDSL links below 100 Mbit/s were
> vulnerable to this behavior, while fiber was immune.
>
> Basically there are edge cases on the internet that like to push too many
> bytes down a line that is dropping or delaying packets. I would like to see
> more discussion on this issue.
>
> I haven't tried tweaking any of the parameters / latency targets in
> fq_codel.
>
>
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
