Yes, that issue is now fixed.
https://interop.cs7.tf.fau.de/?run=logs_paper_2022-03-15
We'll update the paper accordingly, thanks for your feedback.
We'll also reference Robin Marx's "Same Standards, Different Decisions:
A Study of QUIC and HTTP/3 Implementation Diversity" paper, which is
obviously relevant in this context, especially regarding the influence
of flow control.
On 28.02.22 19:49, Su, Chi-Jiun wrote:
This problem in the test may be gone with the new fix.
"Although only a file of 10 MiB is transferred, this leads to
approximately 60 to 80 MiB of data being sent."
thanks.
cj
------------------------------------------------------------------------
*From:* EToSat <[email protected]> on behalf of Christian Huitema
<[email protected]>
*Sent:* Sunday, February 27, 2022 12:58 AM
*To:* Nicolas Kuhn <[email protected]>
*Cc:* Su, Chi-Jiun <[email protected]>;
[email protected] <[email protected]>; Sebastian Endres <[email protected]>;
[email protected] <[email protected]>; [email protected]
<[email protected]>
*Subject:* Re: [EToSat] Interop runner with satellite links
On 2/25/2022 10:24 PM, Nicolas Kuhn wrote:
On the interaction between congestion control and coding, you may be
interested in looking at :
https://datatracker.ietf.org/doc/html/draft-irtf-nwcrg-coding-and-congestion-12
There have been other interesting contributions on the integration of FEC
in QUIC :
- https://datatracker.ietf.org/doc/html/draft-swett-nwcrg-coding-for-quic-04
- https://datatracker.ietf.org/doc/draft-roca-nwcrg-rlc-fec-scheme-for-quic/
- https://inl.info.ucl.ac.be/publications/quic-fec-bringing-benefits-forward-erasure-correction-quic.html
I think the proposed approach (sending FEC packets once the file
transmission is over) has already been tried in experiments; we will
investigate whether the results can be shared.
I will look, thanks.
In any case, I think that the adverse effects found by Sebastian are now
fixed. There was an issue when the client did not increase the flow
control window quickly enough. If the value of MAX_DATA or
MAX_STREAM_DATA is too small, the sender becomes blocked. In the old
builds, when the sender was blocked by flow control, the code decided
that it could just as well repeat packets preemptively rather than doing
nothing. This is somewhat debatable. It generally does not speed up
transfer, because flow control does not affect loss recovery. The new
code only does preemptive repeat at the end of a file transfer: fewer
transmissions, probably lower energy consumption, and slightly faster
completion of the file transfer compared to the old code.
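The policy change can be sketched roughly as follows. All names are invented for illustration and do not reflect the actual internals of any QUIC implementation; this is just the decision described above in code form.

```python
def should_preemptive_repeat_old(blocked_by_flow_control: bool,
                                 unacked_packets: int) -> bool:
    """Old builds: when the peer's MAX_DATA / MAX_STREAM_DATA limit
    blocks new data, repeat unacknowledged packets rather than stay
    idle -- debatable, since flow control does not affect loss
    recovery, so this rarely speeds up the transfer."""
    return blocked_by_flow_control and unacked_packets > 0

def should_preemptive_repeat_new(all_data_sent: bool,
                                 unacked_packets: int) -> bool:
    """New code: only repeat preemptively at the tail of the transfer,
    when every byte has been sent once but some packets still lack an
    acknowledgment."""
    return all_data_sent and unacked_packets > 0

# Mid-transfer, blocked by a small flow-control window:
assert should_preemptive_repeat_old(True, 5) is True   # old: repeats
assert should_preemptive_repeat_new(False, 5) is False # new: waits
```

The new behavior avoids the amplification seen in the tests (tens of MiB sent for a 10 MiB file) because a flow-control-blocked sender simply waits for a larger MAX_DATA / MAX_STREAM_DATA instead of refilling the link with duplicates.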
I have uploaded a Docker image with the new code to the interop runner.
I don't know whether I need to do something special for the satellite
interop runner.
-- Christian Huitema