Pardon, but cwnd should NEVER be larger than the number of forwarding hops 
between source and destination.
Kleinrock and his students recently proved that the optimum cwnd for both 
throughput and minimized latency is achieved when there is one packet or less 
in each outbound queue from source to destination (counting cross traffic, 
i.e. other flows sharing the same outbound queue).
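 
As a rough back-of-the-envelope illustration of that operating point (a toy 
single-bottleneck fluid model with made-up link rate and base RTT, so it only 
captures the "keep the pipe full, keep the queues empty" part of the argument, 
not the per-hop accounting):
 
    # Toy model: one bottleneck, fixed base RTT, fluid approximation.
    # Once the pipe is full, extra in-flight packets only add queueing
    # delay; throughput stops improving, and "power" (throughput / delay)
    # peaks at the no-standing-queue operating point.
    MSS = 1500 * 8         # bits per packet
    RATE = 100e6           # assumed bottleneck rate: 100 Mbit/s
    BASE_RTT = 0.020       # assumed propagation RTT: 20 ms
    BDP_PKTS = RATE * BASE_RTT / MSS   # pipe capacity in packets (~167 here)

    for cwnd in (10, 50, 167, 500, 2000):
        tput = min(cwnd * MSS / BASE_RTT, RATE)               # bits/s
        queue_delay = max(0.0, (cwnd - BDP_PKTS) * MSS / RATE)
        rtt = BASE_RTT + queue_delay
        print(f"cwnd={cwnd:5d}  {tput/1e6:6.1f} Mbit/s  "
              f"RTT={rtt*1e3:7.1f} ms  power={tput/rtt/1e9:5.2f}")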
 
Now, the idea that cwnd should be in the thousands of packets is totally 
absurd, unless the source or destination buffers (at the end hosts) are being 
counted, and counting those would only be needed if the TCP source and 
destination applications might, for example, be "swapped out" and thus unable 
to actually send and acknowledge packets at the instant an ACK arrives.
 
If cwnd is sort of compensating for "swapping out" the TCP endpoint processes 
so that they take milliseconds to provide or acknowledge receipt of a packet, 
then that's fine (if you want throughput and terrible latency), but that's not 
the congestion window. That's just cramming the operating system's scheduling 
delay into the TCP stack.
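 
As a quick arithmetic illustration of what that costs (assumed numbers, just 
to show the scale):
 
    RATE = 300e6      # assumed path rate: 300 Mbit/s
    STALL = 0.010     # assumed end-host scheduling stall: 10 ms
    MSS = 1500 * 8    # bits per packet
    extra = RATE * STALL    # bits that must sit in a buffer somewhere
    print(extra / 8 / 1024, "KiB =", extra / MSS, "packets")
    # ~366 KiB, ~250 packets. If that backlog sits in network queues
    # instead of at the end hosts, it is pure added latency for every
    # flow sharing those queues.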
 
TCP is not supposed to be designed around slow OS process schedulers. Those 
buffers should never be allowed to build up in the transport network, where 
they kill latency for everyone. That's just terrible design, conflating OS 
scheduling with congestion management.
 
 
On Thursday, May 16, 2019 6:01pm, "Jonathan Foulkes" <j...@jonathanfoulkes.com> 
said:



> Thanks for sharing Dave.
> 
> A good paper, but there are a few gaps worth mentioning on this list:
> 
> Testing when there is an AQM present means the test must adapt to the
> challenge of a smaller cwnd for any one stream; it will therefore take many
> more streams to saturate a line with cwnd = 30 than if the cwnd is allowed to
> grow to >1,000.
> In general, the impact of cwnd on saturation and on delay was not visited,
> and yet it's critical. One of the reasons for spiky delays on high-speed
> lines is ginormous cwnds hogging the line with their 800 ms+ RTTs.
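> 
> As a rough illustration (illustrative numbers, not measurements): per-flow
> throughput is roughly cwnd * MSS / RTT, so
> 
>     MSS = 1448 * 8   # bits
>     RTT = 0.020      # 20 ms
>     for cwnd in (30, 1000):
>         print(cwnd, cwnd * MSS / RTT / 1e6, "Mbit/s")
>     # cwnd=30   -> ~17 Mbit/s per flow, so ~18 flows to fill 300 Mbit/s
>     # cwnd=1000 -> ~580 Mbit/s for a single flow, which on an unmanaged
>     #              link is exactly where the queue (and the RTT) blows up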
> 
> Asymmetry of provisioned upload relative to download: at some point the
> ACK stream can be held up by either lack of capacity or bloat in the uplink.
> So even though a link can deliver 300 Mbps down, a bloated 5 Mbps uplink
> might never allow that level to be reached.
> There are ISPs provisioning truly crazy asymmetric service.
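> 
> As a quick sanity check (rough, with assumed packet and ACK sizes):
> 
>     DOWN = 300e6                           # 300 Mbit/s downstream
>     data_pkts_per_s = DOWN / (1500 * 8)    # ~25,000 packets/s
>     acks_per_s = data_pkts_per_s / 2       # one ACK per two segments
>     ack_bits = 64 * 8                      # assume ~64 bytes on the wire per ACK
>     print(acks_per_s * ack_bits / 1e6, "Mbit/s of uplink just for ACKs")
>     # ~6.4 Mbit/s, already more than a 5 Mbit/s uplink can carry, before
>     # any actual upstream traffic (or bloat) is even considered.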
> 
> They do make a good point about the local network, WiFi specifically, being
> the new bottleneck, which is why we included an iperf instance that can be
> started on the IQrouter to run client-to-server tests that help spot local
> network capacity limits, typically on WiFi.
> 
> Regarding their point about 'cross traffic' impact on measurements: Cake's
> per-host / per-target fairness also complicates AQM-enabled testing from
> client devices. That is why we make the built-in speed test the arbiter of
> true line capacity, as it accounts for ALL traffic flowing through the
> router. But, as you mention, that is also a challenge from a CPU-resource
> standpoint at higher speeds.
> 
> The biggest gap in this paper is that it does not pay sufficient attention
> to latency as a critical metric, and one that is controllable by an AQM.
> Bufferbloat metrics have more impact on end-user experience than +/- 50 Mbps
> on a 100 Mbps baseline.
> I was rather miffed that they do not even mention the DSLreports.com
> speedtest or the fast.com test, as those are the two that provide a
> bufferbloat metric.
> 
> The industry as a whole MUST pay attention and socialize the relevance of
> managed latencies as critical to customer satisfaction and good application
> performance. And that starts with tests that clearly grade that critical
> aspect.
> 
> Cheers,
> 
> Jonathan
> 
> > On May 15, 2019, at 3:58 AM, Dave Taht <dave.t...@gmail.com> wrote:
> >
> > If it helps any: Nick Feamster and Jason Livingood just published
> > "Internet Speed Measurement: Current Challenges and Future
> > Recommendations" (https://arxiv.org/pdf/1905.02334.pdf) a few days
> > ago, and it outlines quite a few problems going forward at higher speeds.
> > I do wish the document had pointed out more clearly that router-based
> > measurements have problems also, with weaker CPUs unable to source
> > enough traffic for an accurate measurement, but I do hope this
> > document has impact, and it's a good read regardless.
> >
> > Still, somehow getting it right at lower speeds is always on my mind.
> > I'd long ago hoped that DSL devices would adopt BQL, and that
> > cablemodems would also, thus moving packet processing a little higher
> > on the stack so more advanced algorithms like cake could take hold.
> >
> > On Wed, May 15, 2019 at 9:32 AM Sebastian Moeller <moell...@gmx.de> wrote:
> >>
> >> Hi All,
> >>
> >>
> >> I believe the following to be relevant to this discussion:
> >> https://apenwarr.ca/log/20180808
> >> where he discusses a similar idea, including an implementation, albeit
> >> aimed at lower bandwidths and sans the automatic bandwidth tracking.
> >>
> >>
> >>> On May 15, 2019, at 01:34, David P. Reed <dpr...@deepplum.com> wrote:
> >>>
> >>> Ideally, it would need to be self-configuring, though... I.e., something
> >>> like the IQRouter auto-measuring of the upstream bandwidth to tune the
> >>> shaper.
> >>
> >> @Jonathan, from your experience, how tricky is it to get reliable
> >> speedtest endpoints, and how reliable are they in practice? And do you do
> >> any sanitization, like taking another measurement immediately if the
> >> measured rate differs from the last by more than XX%, or something like
> >> that?
> >>
> >>
> >>>
> >>> Sure, seems like this is easy to code because there are exactly two
> >>> ports to measure; they can even be labeled physically "up" and "down" to
> >>> indicate their function.
> >>
> >> IMHO the real challenge is automated measurement over the internet at
> >> Gbps speeds. It is not hard to get some test going (e.g. by tapping into
> >> Ookla's fast net of confederated measurement endpoints), but getting
> >> something where the servers can reliably saturate 1 Gbps+ seems somewhat
> >> trickier (last time I looked, one required a 1 Gbps connection to the
> >> server to participate in speedtest.net, obviously not really suited for
> >> measuring Gbps speeds).
> >> In the EU there exists a mandate for national regulators to establish
> >> and/or endorse anointed "official" speedtests, intended to keep ISP
> >> marketing honest, that come with stricter guarantees (e.g. the official
> >> German speedtest, breitbandmessung.de, will only admit tests if the
> >> servers have sufficient bandwidth reserves to actually saturate the link;
> >> the end user is required to select their speed tier, which gives the test
> >> a strong hint about the required rates, I believe).
> >> For my back-burner toy project, "per-packet-overhead estimation on
> >> arbitrary link technology", I am currently facing the same problem: I
> >> need a traffic sink and source that can reliably saturate my link so I
> >> can measure the maximum achievable goodput. So if anybody on the list has
> >> ideas, I am all ears/eyes.
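> >> 
> >> (Sketch of the arithmetic behind that estimate, not a finished method:
> >> with a fixed per-packet overhead o and a saturated link of gross rate R,
> >> goodput at payload size p is R * p / (p + o), so two saturated-goodput
> >> measurements at different payload sizes are enough to solve for o. The
> >> numbers below are made up.)
> >> 
> >>     def overhead_bytes(p1, g1, p2, g2):
> >>         # solve R*p1/(p1+o) = g1 and R*p2/(p2+o) = g2 for o
> >>         return p1 * p2 * (g2 - g1) / (g1 * p2 - g2 * p1)
> >> 
> >>     R, o = 100e6, 34                # pretend: 100 Mbit/s gross, 34 B/pkt
> >>     g = lambda p: R * p / (p + o)   # ideal saturated goodput at payload p
> >>     print(overhead_bytes(200, g(200), 1460, g(1460)))   # recovers ~34.0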
> >>
> >>>
> >>> For reference, the GL.iNet routers are tiny and nicely packaged, and run
> >>> OpenWrt; they do have one with Gbit ports[0], priced around $70. I very
> >>> much doubt it can actually push a gigabit, though, but I haven't had a
> >>> chance to test it. However, losing the WiFi, and getting a slightly
> >>> beefier SoC in there will probably be doable without the price going
> >>> over $100, no?
> >>>
> >>> I assume the WiFi silicon is probably the most costly piece of
> >>> intellectual property in the system. So yeah. Maybe with the right parts
> >>> being available, one could aim at $50 or less, without sales-channel
> >>> markup. (Raspberry Pi ARM64 boards don't have GigE, and I think that
> >>> might be because GigE interfaces are a bit pricey. However, the ARM64
> >>> SoCs available are typically Celeron-class multicore systems. I don't
> >>> know why there aren't more ARM64 systems-on-a-chip with dual GigE, but I
> >>> suspect searching for them would turn up some.)
> >>
> >> The Turris MOX (https://www.turris.cz/en/specification/) might be a
> >> decent starting point, as it comes with one gigabit Ethernet port and
> >> both SGMII and PCIe signals routed to a connector; they also have 4-port
> >> and 8-port switch modules, but for our purposes it might be possible to
> >> just create a small board with a single gigabit Ethernet port to get
> >> started.
> >>
> >> Best Regards
> >> Sebastian
> >>
> >>>
> >>> -Toke
> >>>
> >>> [0] https://www.gl-inet.com/products/gl-ar750s/
> >>> _______________________________________________
> >>> Cerowrt-devel mailing list
> >>> cerowrt-de...@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> >>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >
> >
> >
> > --
> >
> > Dave Täht
> > CTO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-831-205-9740
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> 
> 
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
