On Sat, Feb 26, 2022 at 3:40 AM Christian Huitema <[email protected]> wrote:
>
> On 2/21/2022 10:16 AM, Su, Chi-Jiun wrote:
> > Hi Sebastian,
> >
> > Thanks for the great work. Some comments/questions.
> >
> >   * Sec. IV.D.5: Interesting to know that "picoquic" does speculative
> >     retransmission. As you argue, this may not always help. Did you
> >     confirm with the author?
>
> This is indeed supported in picoquic. There are APIs that allow the
> application to turn the feature on or off:
> `picoquic_set_preemptive_repeat_policy(quic_context, do_repeat)` is used
> to set the policy per QUIC context, i.e., for all new connections;
> `picoquic_set_preemptive_repeat_per_cnx(connection_context, do_repeat)`
> is used to set the policy for a specific connection.
>
> The speculative retransmission happens at the lowest priority, i.e., it
> only happens if there is nothing else to send. It is subject to
> congestion control and rate limiting. The "nothing to send" rule means
> that it will only kick in after all data of a stream and the FIN mark
> have been sent. The selection of data for speculative retransmission is
> based on stream-level acknowledgements: the code deduces the
> acknowledged parts of a data stream from the packet acknowledgements,
> and looks at data that has been sent but not yet acknowledged.
>
> As you say, this does not always help. If the packet loss rate is low,
> most of the preemptively repeated packets will be useless. On the other
> hand, when sending a file over a lossy link, the application may wait a
> long time for retransmission of packets lost in the last RTT. If the
> loss rate and data rate are high enough, some of these packets will have
> to be repeated twice, or maybe three times. So we have a tradeoff: waste
> bandwidth, or waste time. The API allows the application to consider
> that tradeoff and decide whether the feature is useful or not.
>
> I would very much like to replace the current implementation of
> preemptive repeat with some version of FEC. FEC in general is a poor fit
> for the transmission of big files, because it is always less bandwidth
> efficient than just repeating the packets that are lost. But if the
> application knows that the file transmission is almost complete, because
> there is just one CWIN worth of data left, it could turn on FEC for the
> duration of the last RTT. It will "waste" a modest number of FEC
> packets, while drastically reducing the impact of packet losses on the
> duration of the file transfer. We could even imagine only sending the
> FEC packets after the FIN mark has been sent, making sure that FEC does
> not increase the duration of the transfer if there were no errors.

On the interaction between congestion control and coding, you may be
interested in looking at:
https://datatracker.ietf.org/doc/html/draft-irtf-nwcrg-coding-and-congestion-12

There have been other interesting contributions on the integration of FEC
in QUIC:
- https://datatracker.ietf.org/doc/html/draft-swett-nwcrg-coding-for-quic-04
- https://datatracker.ietf.org/doc/draft-roca-nwcrg-rlc-fec-scheme-for-quic/
- https://inl.info.ucl.ac.be/publications/quic-fec-bringing-benefits-forward-erasure-correction-quic.html

I think the proposed approach (sending FEC packets once the file
transmission is over) has already been experimented with, and I will
investigate whether the results can be shared.
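For anyone who wants to try the knob Christian describes above, here is a
minimal sketch of how the two calls might be used. It is not taken from
picoquic's sample code; the creation of the QUIC context and of the
connection is elided, and only the two functions named in his message are
shown:

    /* Sketch only: assumes picoquic.h declares the two setters named in
     * the message above; context/connection setup is omitted. */
    #include "picoquic.h"

    /* Turn preemptive repeat on for every new connection in this context. */
    static void enable_preemptive_repeat_for_context(picoquic_quic_t* quic_context)
    {
        int do_repeat = 1; /* 1 = enable, 0 = disable */
        picoquic_set_preemptive_repeat_policy(quic_context, do_repeat);
    }

    /* Override the policy for one specific connection, e.g. to disable the
     * feature on a low-loss path where the repeats would mostly be wasted. */
    static void disable_preemptive_repeat_for_connection(picoquic_cnx_t* connection_context)
    {
        picoquic_set_preemptive_repeat_per_cnx(connection_context, 0);
    }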
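And purely as an illustration of the "FEC only for the tail of the
transfer" idea Christian sketches, the decision rule could look roughly
like the following. None of these names exist in picoquic today; this is
hypothetical code showing only when repair packets would start to be sent:

    /* Illustrative only: start emitting FEC repair packets once the
     * remaining transfer fits in one congestion window (or once the FIN
     * has been sent), so repair overhead is confined to the last RTT. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t bytes_unacked; /* data sent but not yet acknowledged */
        uint64_t bytes_unsent;  /* data queued but not yet sent */
        uint64_t cwin;          /* current congestion window, in bytes */
        bool     fin_sent;      /* FIN mark already sent on the stream */
    } transfer_state_t;

    static bool should_send_fec_repair(const transfer_state_t* s)
    {
        uint64_t remaining = s->bytes_unacked + s->bytes_unsent;

        /* Variant 1: the whole remaining transfer fits in one CWIN. */
        if (remaining <= s->cwin) {
            return true;
        }
        /* Variant 2 (stricter): only after the FIN mark has been sent, so
         * FEC never delays the transfer when there are no losses. */
        return s->fin_sent && s->bytes_unsent == 0;
    }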
> -- Christian Huitema
>
> >   * The results show loss-based CC does not perform well compared to
> >     BBR.
> >   * Does production software perform more like picoquic, or not?
> >   * How big is the difference between production software and these
> >     implementations in the test?
> >   * Any Time-Offset graphs for the EUTELSAT case?
> >   * Research overview page: an additional column indicating emulated
> >     or real satellite link would be helpful.
> >
> > Good useful work.
> > Thanks.
> > cj
> >
> > ________________________________
> > From: QUIC <[email protected]> on behalf of Sebastian Endres
> > <[email protected]>
> > Sent: Friday, February 18, 2022 7:02 AM
> > To: [email protected] <[email protected]>; [email protected] <[email protected]>
> > Cc: [email protected] <[email protected]>
> > Subject: Re: [EToSat] Interop runner with satellite links
> >
> > Dear all,
> >
> > we've published a pre-print of our paper in which we present the
> > QUIC Interop Runner extended to include satellite scenarios, and our
> > measurement results using numerous publicly available QUIC
> > implementations:
> >
> > https://arxiv.org/abs/2202.08228
> >
> > Best regards,
> >
> > Sebastian
> >
> > On Wednesday, 29 September 2021 21:38:05 CET Sebastian Endres wrote:
> >> Dear all,
> >>
> >> for my master's thesis we ran measurements of all publicly available
> >> QUIC implementations over an emulated satellite link. The results are
> >> available online: https://interop.sedrubal.de/
> >>
> >> A click on the results also shows time-offset plots, but these are
> >> not available for every combination.
> >>
> >> In general, the performance of QUIC over high-latency links (e.g.,
> >> geostationary satellites) is rather poor, especially if there is
> >> packet loss.
> >>
> >> Would it make sense to add such tests with challenging link
> >> characteristics to the official QUIC interop runner?
> >>
> >> Best regards,
> >>
> >> Sebastian
>
> _______________________________________________
> EToSat mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/etosat
