Hi Ingemar,

On 12.11.21 17:28, Ingemar Johansson S wrote:
Hi Jörg, Mathis + others

It was nice to learn about your activity to try and use SCReAM as an example algorithm to integrate with QUIC. Pages 14-25 in https://datatracker.ietf.org/meeting/112/materials/slides-112-avtcore-ietf-112-avtcore-03

Did you use the new gstreamer plugin from https://github.com/EricssonResearch/scream/tree/master/gstscream ?

Well, we use your C++ version (forked a while ago), not the plugin you
refer to.  This experiment has been ongoing for some time already.

Observations/Comments:

+ SCReAM + Reno : Strange that the throughput dropped like that, but perhaps it is an unlucky outcome of two cascaded congestion controls.

Nested control loops may not play out that well, and this seems to be
just one artifact of that.

+ Split of network congestion control and media rate control : QUIC already today has the congestion control on the connection level, it is then up to the individual streams to deliver media, subject to the individual stream priorities. SCReAM is quite similar in that respect, one difference is perhaps the implementation of the media rate control.

It is, but it attends to the specific needs of real-time media, which
cannot really be said of New Reno and many others.

I think that with QUIC one should do a full split and do the network congestion control on the QUIC connection level. The congestion control would then be some low-latency version, perhaps BBRv2 or something similar. I am not sure that the network congestion control in SCReAM is the ideal choice here, as it is quite heavily tailored to RTP media.

We would have tried BBR(v2) if it were available in quic-go; it is on
our list, but there is only so much you can do at a time :-)

The media rate control is done on the stream level and is then subject to stream priority. This should give a more clean split of functionality.

Right, we are currently exploring the basic combinations with lots of
tracing, trying to understand who impacts whom and how, and to
disentangle implementation specifics from protocol aspects.  So take
this as a first step of a more comprehensive evaluation of where we are.
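As a concrete (purely hypothetical) illustration of the split described above: the connection-level congestion controller could expose an aggregate sending rate, which per-stream media rate controllers divide according to stream priority weights. All names and numbers below are invented for illustration; this is not quic-go's or SCReAM's actual code.

```python
# Hypothetical sketch: the connection-level congestion control yields an
# aggregate sending rate, and per-stream media rate targets are derived
# from it according to stream priority weights.

def stream_rate_targets(cwnd_bytes, smoothed_rtt_s, priorities):
    """Split the connection-level rate across streams by priority weight."""
    total_rate = cwnd_bytes / smoothed_rtt_s       # bytes/s the connection CC allows
    weight_sum = sum(priorities.values())
    return {sid: total_rate * w / weight_sum
            for sid, w in priorities.items()}

# e.g. a 125 kB cwnd at 50 ms RTT, video weighted 3:1 over audio
targets = stream_rate_targets(125_000, 0.05, {"video": 3, "audio": 1})
```

The point of the sketch is only the direction of information flow: the streams never touch the network congestion state directly, they only consume a rate budget derived from it.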

A next step is understanding how to make the two work together so that
QUIC still congestion controls everything it sends; this will then lead
into more of an integration and API discussion.

My SCReAM experience is that one needs to leak some of the congestion signals from the connection-level congestion control up to the stream rate control to make the whole thing responsive enough. In the SCReAM code one can see that the exotic variable queueDelayTrend, as well as ECN marks and loss events, are used for this purpose. I believe that something like that is needed for RTP (or any low-latency) media over QUIC: it is necessary to leak congestion information from the connection level up to the stream level, especially to be able to exploit L4S fully, even though it is a bit of a protocol layer violation.
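Such signal leaking might look roughly like the following sketch, in the spirit of SCReAM's use of queueDelayTrend, ECN marks, and loss events. The thresholds and back-off factors are invented for illustration and are not SCReAM's actual constants.

```python
# Hedged sketch of leaking congestion signals from the connection-level
# congestion control up to a stream-level media rate control. All
# constants here are made up, not taken from SCReAM.

def adjust_media_rate(rate_bps, queue_delay_trend, ecn_ce_marked, loss_event):
    """React to signals the connection CC would otherwise keep to itself."""
    if loss_event:
        return rate_bps * 0.5                    # strong back-off on loss
    if ecn_ce_marked:
        return rate_bps * 0.8                    # gentler back-off on ECN-CE (L4S-style)
    if queue_delay_trend > 0.2:                  # queue is building: ease off
        return rate_bps * (1.0 - 0.5 * queue_delay_trend)
    return rate_bps * 1.05                       # no congestion: probe upward
```

Without these leaked signals the media rate control would only see its own send queue, which reacts much later than the network congestion state.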

We are all for leaking (well: sharing) useful information, and one of
the main questions we tried to address is how much RTCP signaling you
would need for CC; well, it seems we can already do pretty well with
what QUIC has built in.

This helps when running RTP with one of its congestion control
algorithms on top, but we hope it can also help us understand what you
would need to do an RTP++ (or: MOQ) on top of QUIC without all the
legacy baggage (if that turns out to be useful).

+ Stream prioritization : … is a problematic area, especially if one stream is low-latency video and another stream is a large chunk of data, e.g. for a large web page. With a simple round-robin scheduler, the stream with the large chunk of data will easily win because it is quite likely to always have data to transmit. So some WRR is needed. I have even had problems with the algorithm in SCReAM that prioritizes between two cameras/video coders, because the two cameras see different views and thus have different information content/compression needs.

This is an interesting data point I'd love to learn more about.  I am
not surprised that scheduling turns out to be one of the harder nuts to
crack.

+ Page 18 : Inferring the receive timestamp. What I suspect is that you will essentially halve the estimated queue delay (I assume here that the reverse path is uncongested). One alternative could be to compute
receive-ts = send-ts + latest_rtt - min_rtt/2
where min_rtt is the min RTT over a given time interval

That could work.  Still an area of experimentation (min_rtt may have its
own issues, but that remains to be seen).  QUIC receive timestamps may
help us further. So far, we have mostly investigated different ways of
interpolating and tried several alternatives.
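For the record, a small sketch of such a sender-side estimate (function name and values are illustrative, not from any of the codebases discussed): assuming an uncongested reverse path, the return leg takes about min_rtt/2 of pure propagation, so the one-way delay is latest_rtt - min_rtt/2 and the full forward queueing delay (latest_rtt - min_rtt) is preserved rather than halved.

```python
# Illustrative sketch: estimate the receive timestamp from sender-side
# RTT samples. Assumes the reverse path adds only ~min_rtt/2 of
# propagation delay, so all queueing is attributed to the forward path.

def estimate_receive_ts(send_ts, latest_rtt, min_rtt):
    return send_ts + latest_rtt - min_rtt / 2.0

# send at t=0 s, latest_rtt=80 ms, min_rtt=40 ms:
# ~20 ms propagation + ~40 ms forward queueing -> receive at ~60 ms
rx = estimate_receive_ts(0.0, 0.08, 0.04)
```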

Best,
Jörg
