Hi,
What I am trying to understand is: what is the motivation behind running
a real-time congestion control algorithm like SCReAM over QUIC's
congestion control?
The results (as Ingemar also mentioned) are not encouraging.
We should clarify that this wasn't a design choice but rather part of
systematically looking at the four combinations you get and then seeing
what happens to each of them (we didn't expect this one to fare
particularly well, but we were initially a bit surprised how poorly it
performed).
If we want to use a single QUIC connection for media (audio/video using
RTP) and other reliable streams, then would it be better to not use QUIC
CC for the media streams and only use it for the reliable streams?
Obviously this would violate the current spec, which applies congestion
control at the connection level. But maybe this use case can be
specialized.
This would indeed be one option in the (more desirable?) design space:
the question is whether you should allow libraries out there without
congestion control just because something claims it is real-time media
and does its own.
Somebody may have mentioned the circuit breaker last week in some
context (too many slots, sorry).
Indeed, one idea could be having QUIC "enforce" a limit that it
considers acceptable for the current connection and provide the
RTP CC with the necessary parameters to come to a meaningful rate
itself; as long as the offered load from the RTP CC fits the
envelope computed by the QUIC CC, the DATAGRAMs could just flow;
above that rate, queuing or dropping (or local ECN-style signals)
could follow.
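To make that a bit more concrete, here is a minimal sketch of such an
enforcement point, assuming a simple token-bucket envelope; the class and
callback names (EnvelopeGate, on_local_congestion) are made up for
illustration and are not part of any existing QUIC API.

    // Hypothetical sketch: a token-bucket "envelope" gate for RTP-CC datagrams.
    // EnvelopeGate and on_local_congestion are illustrative names only.
    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <functional>

    class EnvelopeGate {
    public:
        EnvelopeGate(double envelope_bps, std::function<void()> on_local_congestion)
            : envelope_bps_(envelope_bps),
              on_local_congestion_(std::move(on_local_congestion)) {}

        // Called by the QUIC connection CC whenever its rate estimate changes.
        void update_envelope(double envelope_bps) { envelope_bps_ = envelope_bps; }

        // Returns true if the DATAGRAM fits the envelope and can be sent now;
        // otherwise raises a local ECN-style signal so the RTP CC can back off.
        bool try_send(std::size_t datagram_bytes) {
            refill();
            const double bits = 8.0 * static_cast<double>(datagram_bytes);
            if (tokens_ >= bits) {
                tokens_ -= bits;
                return true;            // within the envelope: just let it flow
            }
            on_local_congestion_();     // above the rate: signal, then queue or drop
            return false;
        }

    private:
        void refill() {
            const auto now = std::chrono::steady_clock::now();
            const double dt = std::chrono::duration<double>(now - last_).count();
            last_ = now;
            // Cap the bucket at roughly 100 ms worth of the envelope.
            tokens_ = std::min(tokens_ + envelope_bps_ * dt, envelope_bps_ * 0.1);
        }

        double envelope_bps_;
        std::function<void()> on_local_congestion_;
        double tokens_ = 0.0;
        std::chrono::steady_clock::time_point last_ = std::chrono::steady_clock::now();
    };

The point of the cap is only that a short burst after an idle period is
admitted, while a sustained overload keeps triggering the local signal.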
It remains to be seen whether shared congestion control between real-time
flows, other datagrams, and regular QUIC streams can be done in a
sensible manner, with acceptable complexity and little brittleness, and
whether there is a strong use case for it. This was discussed quite a
bit in the MOQ side meeting.
+ Split of network congestion control and media rate control : QUIC
already today has the congestion control on the connection level; it
is then up to the individual streams to deliver media, subject to the
individual stream priorities. SCReAM is quite similar in that respect;
one difference is perhaps the implementation of the media rate control.
I think that with QUIC one should do a full split and do the network
congestion control on the QUIC connection level. The congestion
control would then be some low-latency version, perhaps BBRv2 or
something similar; I am not sure that the network congestion control
in SCReAM is the ideal choice here as it is quite heavily tailored for
RTP media.
The impact of cascading two congestion controllers (with different input
and output parameters) has not been studied extensively yet. And is a
real-time CC like SCReAM by itself not enough to control the congestion
in the network? In other words, does it need another congestion
controller to make sure that the real-time data doesn't cause more
congestion in the network?
Right. The main question is: should a QUIC connection trust an
arbitrary datagram source, or should it check? Given that all sources
(datagrams and otherwise) would often be part of the same application,
there is probably not much point in the application trying to cheat on
itself. Maybe some sanity checks would make sense, paired with mild
adjustments of the share given to the reliable streams, as the codec
source traffic won't be able to follow a given target data rate exactly.
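Purely as a sketch of what that could look like, assuming the
connection-level CC hands down a single target rate and the per-stream
rate control just divides it by priority (as in the split described
above); all names and the adjustment policy are invented for
illustration.

    // Hypothetical sketch: the connection-level CC owns one target rate,
    // streams get priority-weighted shares, the media (DATAGRAM) share is
    // capped at what the source actually offers (the sanity check), and the
    // leftover is handed to the reliable streams.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct StreamInfo {
        double priority;     // relative weight
        double offered_bps;  // what the source claims it wants to send
        bool   is_media;     // media (RTP over DATAGRAM) vs. reliable stream
    };

    std::vector<double> split_targets(double connection_target_bps,
                                      const std::vector<StreamInfo>& streams) {
        double total_prio = 0.0;
        std::size_t n_reliable = 0;
        for (const auto& s : streams) {
            total_prio += s.priority;
            if (!s.is_media) ++n_reliable;
        }
        std::vector<double> targets(streams.size(), 0.0);
        if (total_prio <= 0.0) return targets;

        double unused = 0.0;
        for (std::size_t i = 0; i < streams.size(); ++i) {
            double share = connection_target_bps * streams[i].priority / total_prio;
            if (streams[i].is_media) {
                // Sanity check: never grant more than the media source offers.
                targets[i] = std::min(share, streams[i].offered_bps);
                unused += share - targets[i];
            } else {
                targets[i] = share;
            }
        }
        // Mild adjustment: give the unused media share to the reliable streams,
        // since codec sources rarely hit a given target rate exactly.
        if (n_reliable > 0)
            for (std::size_t i = 0; i < streams.size(); ++i)
                if (!streams[i].is_media) targets[i] += unused / n_reliable;
        return targets;
    }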
My SCReAM experience is that one needs to leak some of the congestion
signals from the connection-level congestion control up to the stream
rate control to make the whole thing responsive enough. In the SCReAM
code one can see that the exotic variable queueDelayTrend as well as
ECN marks and loss events are used for this purpose. I believe that
something like that is needed for RTP (or whatever low-latency) media
over QUIC; it is necessary to leak congestion information from the
connection level up to the stream level, especially to be able to
exploit L4S fully, even though it is a bit of a protocol layer
violation.
We absolutely should allow sharing of network events like RTT, ECN, and
packet loss from QUIC to RTP. I am not sure how that is a protocol layer
violation.
Yes.
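For illustration only, here is a minimal sketch of what such leaked
signals could look like at the API level; the struct fields mirror the
signals named above (RTT, ECN marks, loss events, a queueDelayTrend-like
value), but the interface and the back-off factors are assumptions, not
SCReAM's or any QUIC stack's actual code.

    // Hypothetical sketch: the QUIC connection pushes congestion signals up
    // to a per-stream media rate controller once per RTT (or per ACK).
    #include <algorithm>
    #include <cstdint>

    struct CongestionSignals {
        double   latest_rtt_s;      // most recent RTT sample from QUIC
        double   queue_delay_trend; // 0.0 .. 1.0, like SCReAM's queueDelayTrend
        uint64_t ecn_ce_count;      // cumulative CE marks reported by the peer
        bool     loss_event;        // true if a loss event was declared this RTT
    };

    class MediaRateController {
    public:
        explicit MediaRateController(double target_bps) : target_bps_(target_bps) {}

        // Illustrative reaction rules; real controllers are more elaborate.
        void on_congestion_signals(const CongestionSignals& sig) {
            if (sig.loss_event) {
                target_bps_ *= 0.7;                           // strong back-off on loss
            } else if (sig.ecn_ce_count > last_ce_count_) {
                target_bps_ *= 0.9;                           // gentler back-off on CE (L4S)
            } else if (sig.queue_delay_trend > 0.2) {
                target_bps_ *= 1.0 - 0.5 * sig.queue_delay_trend; // delay building up
            } else {
                target_bps_ *= 1.05;                          // probe upwards slowly
            }
            last_ce_count_ = sig.ecn_ce_count;
            target_bps_ = std::clamp(target_bps_, min_bps_, max_bps_);
        }

        double target_bitrate_bps() const { return target_bps_; }

    private:
        double   target_bps_;
        double   min_bps_ = 150e3;
        double   max_bps_ = 50e6;
        uint64_t last_ce_count_ = 0;
    };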
+ Page 18 : Inferring the receive timestamp. What is suspect is that
you will essentially halve the estimated queue delay (I assume here
that the reverse path is uncongested). One alternative could be to
compute
receive-ts = send-ts + latest_rtt - min_rtt/2
where min_rtt is the min RTT over a given time interval
You are right that using latest_rtt / 2 may not be accurate if the
reverse path is not congested, but there is no guarantee of that. So a
more accurate way to compute receive-ts would be to use one-way
timestamps, which could be added to QUIC as new frames.
As long as the metric is taken for what it actually is, a rough
approximation, we can probably work with a number of ways to measure.
More to experiment with.
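For what it is worth, a tiny worked example with made-up numbers (40 ms
base RTT split evenly between the two directions, 30 ms of queuing on
the forward path, uncongested reverse path) showing how latest_rtt/2
halves the estimated queue delay while latest_rtt - min_rtt/2 recovers
it:

    // Worked example (made-up numbers) comparing two ways of inferring the
    // receive timestamp, assuming all queuing is on the forward path.
    #include <cstdio>

    int main() {
        const double min_rtt    = 40.0;  // ms, base RTT, split 20/20 between paths
        const double fwd_queue  = 30.0;  // ms of queuing on the forward path
        const double latest_rtt = min_rtt + fwd_queue;         // 70 ms
        const double true_owd   = min_rtt / 2.0 + fwd_queue;   // 50 ms one-way delay

        const double est_half   = latest_rtt / 2.0;            // 35 ms
        const double est_minrtt = latest_rtt - min_rtt / 2.0;  // 50 ms

        // Queue delay estimate = estimated one-way delay minus base one-way delay.
        std::printf("true queue delay      : %.0f ms\n", true_owd - min_rtt / 2.0);
        std::printf("latest_rtt/2          : %.0f ms (halved)\n", est_half - min_rtt / 2.0);
        std::printf("latest_rtt - min_rtt/2: %.0f ms\n", est_minrtt - min_rtt / 2.0);
        return 0;
    }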
Best,
Jörg
On Nov 12, 2021, at 8:28 AM, Ingemar Johansson S
<[email protected]> wrote:
Hi Jörg, Mathis + others
It was nice to learn about your activity to try and use SCReAM as an
example algorithm to integrate with QUIC. Pages 14-25 in
https://datatracker.ietf.org/meeting/112/materials/slides-112-avtcore-ietf-112-avtcore-03
Did you use the new gstreamer plugin from
https://github.com/EricssonResearch/scream/tree/master/gstscream ?
Observations/Comments:
+ SCReAM + Reno : Strange that the throughput dropped like that, but
perhaps it is an unlucky outcome of two cascaded congestion controllers.
+ Split of network congestion control and media rate control : QUIC
already today has the congestion control on the connection level; it
is then up to the individual streams to deliver media, subject to the
individual stream priorities. SCReAM is quite similar in that respect;
one difference is perhaps the implementation of the media rate control.
I think that with QUIC one should do a full split and do the network
congestion control on the QUIC connection level. The congestion
control would then be some low-latency version, perhaps BBRv2 or
something similar; I am not sure that the network congestion control
in SCReAM is the ideal choice here as it is quite heavily tailored for
RTP media.
The media rate control is done on the stream level and is then subject
to stream priority. This should give a cleaner split of functionality.
My SCReAM experience is that one needs to leak some of the congestion
signals from the connection-level congestion control up to the stream
rate control to make the whole thing responsive enough. In the SCReAM
code one can see that the exotic variable queueDelayTrend as well as
ECN marks and loss events are used for this purpose. I believe that
something like that is needed for RTP (or whatever low-latency) media
over QUIC; it is necessary to leak congestion information from the
connection level up to the stream level, especially to be able to
exploit L4S fully, even though it is a bit of a protocol layer
violation.
+ Stream prioritization : … is a problematic area, especially if one
stream is low-latency video and another stream is a large chunk of
data for e.g. a large web page. With a simple round-robin scheduler,
the stream with the large chunk of data will easily win because it is
quite likely to always have data to transmit. So some WRR (weighted
round robin) is needed. I have even had problems with the algorithm in
SCReAM that prioritizes between two cameras/video coders, because the
two cameras see different views and thus have differing information
content / compression needs.
+ Page 18 : Inferring the receive timestamp. What is suspect is that
you will essentially halve the estimated queue delay (I assume here
that the reverse path is uncongested). One alternative could be to compute
receive-ts = send-ts + latest_rtt - min_rtt/2
where min_rtt is the min RTT over a given time interval
Regards
/Ingemar
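Regarding the "some WRR is needed" remark under stream prioritization
above, a minimal deficit-style weighted round-robin sketch follows; it
is illustrative only (not taken from SCReAM or any QUIC stack) and shows
how a bulk stream that always has data can be kept from starving a
low-latency video stream.

    // Hypothetical sketch: weighted (deficit) round robin over streams. Each
    // stream earns a per-round byte budget proportional to its weight, so a
    // bulk stream cannot monopolize the connection.
    #include <cstddef>
    #include <deque>
    #include <vector>

    struct WrrStream {
        double                   weight;        // e.g. 3.0 for video, 1.0 for bulk data
        std::deque<std::size_t>  queue;         // sizes of packets waiting to be sent
        double                   deficit = 0.0; // unused budget carried within a round
    };

    // One scheduling round over all streams; returns the indices of streams
    // that got to send, in order. quantum_bytes is the budget per unit weight.
    std::vector<std::size_t> wrr_round(std::vector<WrrStream>& streams,
                                       double quantum_bytes) {
        std::vector<std::size_t> sent_from;
        for (std::size_t i = 0; i < streams.size(); ++i) {
            WrrStream& s = streams[i];
            if (s.queue.empty()) { s.deficit = 0.0; continue; }  // no credit hoarding
            s.deficit += s.weight * quantum_bytes;
            while (!s.queue.empty() && s.deficit >= s.queue.front()) {
                s.deficit -= s.queue.front();
                s.queue.pop_front();             // "send" the packet
                sent_from.push_back(i);
            }
        }
        return sent_from;
    }

Resetting the deficit when a queue runs empty keeps an idle stream from
accumulating credit and then bursting, which matters for the low-latency
streams discussed above.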