Hi Jörg

Please see inline

/Ingemar

> -----Original Message-----
> From: Joerg Ott <[email protected]>
> Sent: den 15 november 2021 21:19
> To: Ingemar Johansson S
> <[email protected]>; [email protected]; IETF
> QUIC WG <[email protected]>
> Cc: Ingemar Johansson S <[email protected]>;
> [email protected]
> Subject: Re: RTP over QUIC experiments
> 
> Hi Ingemar,
> 
> On 12.11.21 17:28, Ingemar Johansson S wrote:
> > Hi Jörg, Mathis + others
> >
> > It was nice to learn about your activity to try and use SCReAM as
> > example algorithm to integrate with QUIC. Pages 14-25 in
> > https://datatracker.ietf.org/meeting/112/materials/slides-112-avtcore-
> > ietf-112-avtcore-03
> > <https://datatracker.ietf.org/meeting/112/materials/slides-112-avtcore
> > -ietf-112-avtcore-03>
> >
> > Did you use the new gstreamer plugin from
> > https://github.com/EricssonResearch/scream/tree/master/gstscream ?
> 
> Well, we use your C++ version (forked a while ago), not the plugin you refer to.
> This experiment has already been ongoing for some time.
> 
> > Observations/Comments:
> >
> > + SCReAM + Reno : Strange that the throughput dropped like that but
> > perhaps an unlucky outcome of two cascaded congestion controls.
> 
> Nested control loops may not play out that well, and this seems just
> one artifact of this.
> 
> > + Split of network congestion control and media rate control : QUIC
> > already has the congestion control on the connection level; it is
> > then up to the individual streams to deliver media, subject to the
> > individual stream priorities. SCReAM is quite similar in that respect;
> > one difference is perhaps the implementation of the media rate control.
> 
> It is, but it attends to the specific needs of real-time media, which
> cannot really be said of New Reno and many others.


[IJ] Yes, one definitely needs some kind of low-latency network congestion
control; I guess today the lowest-hanging fruit is BBRv1 or v2

> 
> > I think that with QUIC one should do a full split and do the network
> > congestion control on the QUIC connection level. The congestion control
> > would then be some low latency version, perhaps BBRv2? or something
> > similar, I am not sure that the network congestion control in SCReAM is
> > the idea choice here as it is quite a lot tailored for RTP media.
> 
> We would have tried BBR(v2) if it were available with quic-go; it is on our
> list, but there is only so much you can do at a time :-)
[IJ] OK, understand the problem 😊

> 
> > The media rate control is done on the stream level and is then subject
> > to stream priority. This should give a cleaner split of functionality.
> 
> Right, we are currently exploring the basic combinations with lots of
> tracing, trying to understand who impacts whom and what, and trying to
> disentangle implementation specifics from protocol aspects.  So take this
> as a first step of a more comprehensive evaluation of where we are.
[IJ] OK
> 
> A next step is understanding how you make the two work together so that
> you can preserve the fact that QUIC congestion controls everything it
> sends; this will then go more into a bit of integration and API
> discussion.
> 
> > My SCReAM experience is that one needs to leak some of the congestion
> > signals from the connection level congestion control up to the stream
> > rate control, to make the whole thing responsive enough. In the SCReAM
> > code one can see that the exotic variable queueDelayTrend as well as ECN
> > marks and loss events are used for this purpose. I believe that
> > something like that is needed for an RTP (or whatever low latency) media
> > over QUIC. I believe that it is necessary to leak congestion information
> > from the connection level up to the stream level, especially to be able
> > to exploit L4S fully, even though it is a bit of a protocol layer violation.
> 
> We are all in for leaking (well: sharing) useful information, and one of
> the main questions we tried to address is how much RTCP signaling for CC
> you would need; and it seems we can already do pretty well with
> what QUIC has built in.
> 
> This helps using RTP with one of its congestion control algorithms on
> top, but we hope it could also help understand what you'd need to build
> an RTP++ (or: MOQ) on top of QUIC without all the legacy baggage (if
> that turns out to be useful).
[IJ] OK, sounds good.

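[Editor's note] The signal-leaking idea discussed above can be sketched in C++. The
signal fields (queue delay trend, ECN marks, loss events) follow the mail's
description and SCReAM's queueDelayTrend variable, but the class names, thresholds,
and update rules below are illustrative assumptions, not SCReAM's or quic-go's
actual API:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical snapshot of connection-level congestion signals that a QUIC
// stack could expose ("leak") to per-stream media rate controllers. The
// queueDelayTrend field mirrors SCReAM's variable of the same name; the
// rest of this struct is a sketch, not an existing API.
struct CongestionSignals {
    float queueDelayTrend; // 0.0 (empty queue) .. 1.0 (steadily growing queue)
    float ecnMarkFraction; // fraction of packets CE-marked in the last RTT
    bool  lossEvent;       // at least one loss event in the last RTT
};

// Toy stream-level media rate control: back off multiplicatively on explicit
// congestion signals, otherwise probe additively toward the stream's max rate.
// All constants here are illustrative.
class StreamRateController {
public:
    explicit StreamRateController(float maxRateKbps)
        : targetKbps_(maxRateKbps / 2), maxKbps_(maxRateKbps) {}

    float update(const CongestionSignals& s) {
        if (s.lossEvent) {
            targetKbps_ *= 0.7f;                       // strong backoff on loss
        } else if (s.ecnMarkFraction > 0.0f) {
            // L4S-style reduction, scaled by the fraction of marked packets
            targetKbps_ *= (1.0f - 0.5f * s.ecnMarkFraction);
        } else if (s.queueDelayTrend > 0.2f) {
            targetKbps_ *= (1.0f - 0.1f * s.queueDelayTrend);
        } else {
            targetKbps_ += 50.0f;                      // gentle additive probe
        }
        targetKbps_ = std::min(targetKbps_, maxKbps_);
        return targetKbps_;
    }

private:
    float targetKbps_;
    float maxKbps_;
};
```

The point of the sketch is the direction of information flow: the connection-level
controller owns the signals, and each stream's rate controller only reads them.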
> 
> > + Stream prioritization : … is a problematic area, especially if one
> > stream is low latency video and another stream is a large chunk of data
> > for e.g. a large web page. With a simple round robin scheduler, the
> > stream with the large chunk of data will easily win because it is quite
> > likely to always have data to transmit. So some WRR is needed. I have
> > even had problems with the algorithm in SCReAM that prioritizes between
> > two cameras/video coders, because the two cameras see different
> > views and thus provide differing information content/compression needs.
> 
> This is an interesting data point I'd love to learn more about.  Not
> surprised that scheduling would turn out to be one of the harder nuts to
> crack.
[IJ] I believe one can find some evidence of the problem in the SCReAM code.
The stream prioritization is implemented as a WRR scheduler and works so-so
(https://github.com/EricssonResearch/scream/blob/master/code/ScreamTx.cpp#L1682).
Still, more code is needed to ensure the given priority
(https://github.com/EricssonResearch/scream/blob/master/code/ScreamTx.cpp#L1651);
the latter code runs every 1 s when the link is congested. Without this extra
adjustment I had the problem that the rear camera in our 5G toy car dropped
down to really low bitrates. It is possible that one could do this smarter.

> 
> > + Page 18 : Inferring the receive timestamp. What I suspect is that you
> > will essentially halve the estimated queue delay (I assume here that the
> > reverse path is uncongested). One alternative could be to compute
> > receive-ts = send-ts + latest_rtt + min_rtt
> > where min_rtt is the min RTT over a given time interval
> 
> That could work.  Still an area of experimentation (min_rtt may have its
> own issues but that would remain to be seen).  QUIC rx timestamps may
> help us further. So far, we have mostly investigated different ways of
> interpolating and tried different alternatives.
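[Editor's note] For concreteness, here is a small C++ sketch of two receive-timestamp
estimators under the stated assumption of an uncongested reverse path: the
"halve the RTT" approach the slide appears to use, and an alternative that
attributes all queuing delay (latest_rtt - min_rtt) to the forward path. The
function names and the exact combination of latest_rtt and min_rtt in the second
estimator are this sketch's interpretation, not a quote of either proposal:

```cpp
#include <cassert>

// Slide-18 style: split the whole measured RTT evenly between the two
// directions, so any queuing delay is implicitly halved.
double rxTsHalved(double sendTs, double latestRtt) {
    return sendTs + latestRtt / 2.0;
}

// Alternative: treat min_rtt/2 as the forward propagation delay and charge
// the entire queuing delay (latest_rtt - min_rtt) to the forward path,
// which matches the assumption that the reverse path is uncongested.
double rxTsForwardQueue(double sendTs, double latestRtt, double minRtt) {
    return sendTs + minRtt / 2.0 + (latestRtt - minRtt);
}
```

With sendTs = 0, latest_rtt = 100 ms and min_rtt = 40 ms, the halved estimate is
50 ms while the forward-queue estimate is 80 ms, i.e. the halved variant
understates the forward queue by half of (latest_rtt - min_rtt).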
> 
> Best,
> Jörg
