Hey all,

I want to propose download rate-control not just as a way to avoid bufferbloat 
(https://bugzilla.mozilla.org/show_bug.cgi?id=733010), but also as a way to 
deal with competing DASH flows, to help detect link over/under-use, and to 
help optimize TCP throughput.

It seems rate control can be achieved by managing the size of SO_RCVBUF and 
rate-limiting calls to recv(). I've read through various online discussions 
looking for some underlying OS implementation issue that would rule this out, 
and I haven't found one - do any of you know of one?
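
To make that concrete, here's roughly the kind of thing I have in mind 
(POSIX-ish C++ sketch; the constants and names like PacedDownload are purely 
illustrative, nothing Necko-specific):

  // A minimal sketch of receiver-side pacing on a connected TCP socket.
  // Clamping SO_RCVBUF caps what the kernel will buffer (and hence the
  // receive window it advertises); sleeping between recv() calls caps the
  // application-level drain rate.
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <chrono>
  #include <thread>

  static const int    kRcvBufBytes       = 64 * 1024;  // receive-window cap
  static const double kTargetBytesPerSec = 500e3;      // pacing target

  void PacedDownload(int fd) {
    // Cap the kernel receive buffer. (Linux doubles the requested value,
    // and for the window-scale factor to match, this may need to be set
    // before connect().)
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &kRcvBufBytes, sizeof(kRcvBufBytes));

    char chunk[16 * 1024];
    for (;;) {
      ssize_t n = recv(fd, chunk, sizeof(chunk), 0);
      if (n <= 0) break;  // EOF or error
      // ...hand |chunk| to the media pipeline...
      // Sleep so the average drain rate is ~kTargetBytesPerSec.
      std::this_thread::sleep_for(
          std::chrono::duration<double>(n / kTargetBytesPerSec));
    }
  }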


RATE-LIMITING TO MITIGATE BUFFERBLOAT
This was previously discussed as a good idea here:
https://bugzilla.mozilla.org/show_bug.cgi?id=733010

RATE-LIMITING TO AVOID OSCILLATIONS IN COMPETING FLOWS
When multiple adaptive players share a link, the result is often oscillating, 
unfair bandwidth usage [1]. These oscillations appear to stem from the ON-OFF 
download pattern produced by periodic best-effort HTTP requests. Rate-limiting 
the downloads, and minimizing idle connection time with back-to-back requests, 
could mitigate these oscillations and allow a fair convergence.
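
Something like this (building on the pacing sketch above; NextSegmentUrl() and 
SendHttpGet() are hypothetical helpers, and I'm assuming for simplicity that 
PacedDownload() knows where each response ends):

  #include <string>

  std::string NextSegmentUrl();                      // hypothetical: walks the manifest
  void SendHttpGet(int fd, const std::string& url);  // hypothetical request writer
  void PacedDownload(int fd);                        // the paced reader sketched earlier

  void SteadyStateFetchLoop(int fd) {
    for (;;) {
      std::string url = NextSegmentUrl();  // next segment in the manifest
      if (url.empty()) break;              // end of presentation
      SendHttpGet(fd, url);                // request the next segment immediately
      PacedDownload(fd);                   // drain the response at the target rate
      // No idle gap here: the pacing inside PacedDownload() spreads each
      // response out, so the connection stays continuously utilized.
    }
  }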

CONTROLLED RATE VARIATION TO DETECT LINK OVER-USE AND UNDER-USE
Increase the download rate gradually, and only shift up to a higher encoded 
bit-rate (identified in the DASH manifest) once the increasing rate actually 
reaches that value. Randell previously suggested we use something like 
http://tools.ietf.org/html/draft-alvestrand-rtcweb-congestion-02. In very 
abstract terms, the receiver increases the download rate until it detects 
over-use of the link, at which point it reduces the bandwidth estimate and 
then the download rate. We *could* do a similar thing through rate-limiting.
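
As a very rough sketch of that probe-and-shift loop (rates in bytes/sec for 
consistency with the pacer; the constants and the OveruseDetected() hook are 
illustrative, not from the draft):

  #include <cstddef>
  #include <vector>

  bool OveruseDetected();  // hypothetical hook, e.g. the delay-based test below

  double ProbeStep(double targetRate,
                   const std::vector<double>& repRates,  // bit-rates, ascending
                   size_t* currentRep) {
    if (OveruseDetected()) {
      targetRate *= 0.85;  // multiplicative back-off
      while (*currentRep > 0 && repRates[*currentRep] > targetRate)
        --*currentRep;     // shift down if the current bit-rate no longer fits
    } else {
      targetRate += 8 * 1024;  // gentle additive probe
      if (*currentRep + 1 < repRates.size() &&
          targetRate >= repRates[*currentRep + 1])
        ++*currentRep;         // only shift up once we've reached that rate
    }
    return targetRate;  // feed back into the pacer
  }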

(Btw, the bandwidth estimation in this algorithm is delay-based - it works on 
the assumption that increasing delay implies over-use of the link, and vice 
versa. I think we can use other methods of bandwidth estimation if need be.)
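
For illustration, a crude stand-in for that kind of detector - an EWMA of 
queuing-delay samples compared against a threshold - might look like this (not 
a faithful implementation of the draft's filter):

  class DelayTrend {
   public:
    // deltaMs: (observed arrival gap - expected gap) for the latest chunk.
    // A sustained positive trend means queues are building, i.e. over-use.
    bool Update(double deltaMs) {
      smoothed_ = 0.9 * smoothed_ + 0.1 * deltaMs;  // EWMA of queuing delay
      return smoothed_ > kOveruseThresholdMs;
    }
   private:
    static constexpr double kOveruseThresholdMs = 5.0;  // illustrative
    double smoothed_ = 0.0;
  };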

RATE LIMITING TO OPTIMIZE TCP THROUGHPUT
Other reading suggests that adapting this way at the application layer should 
interact well with TCP slow-start. Non-rate-controlled downloads can leave the 
connection idle, and after an idle period the sender may re-enter slow-start 
(the "slow-start restart" behaviour); a continual stream of HTTP responses at 
a controlled rate *could* avoid this.
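
One way to sanity-check this empirically on Linux would be to sample TCP_INFO 
around an idle gap (note the congestion window lives at the sender, so for a 
download you'd instrument the server side or a test peer):

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>
  #include <cstdio>

  // Sample the kernel's view of the connection; tcpi_snd_cwnd shrinking
  // back after an idle period would confirm the slow-start restart effect.
  void PrintCwnd(int fd) {
    struct tcp_info info;
    socklen_t len = sizeof(info);
    if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0)
      std::printf("cwnd=%u segs, rtt=%u us\n",
                  info.tcpi_snd_cwnd, info.tcpi_rtt);
  }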


So, can we do rate-limiting as above without worrying about hidden OS 
implementation issues?


Issues that are not resolved completely by this theoretical musing:
-- Would TCP really converge on a fair bandwidth share for competing flows, or 
would new oscillations or unfairness appear?
-- What about competition with non-rate-controlled streams? Or with other 
protocols, e.g. DASH vs. WebRTC?
-- Bandwidth estimation is not discussed here but is, of course, very 
important. It will impact how well rate limiting works.
-- Media buffer underflow/overflow could be managed by adjusting the download 
rate as well; maintaining a consistent buffer level seems desirable to smooth 
out short-lived variations in available bandwidth (rough sketch after this 
list).
-- *Could* this be implemented effectively in Necko/NSPR...
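
For the buffer point above, something as simple as a proportional controller 
on buffer occupancy might do (set-point and gain purely illustrative):

  double SteerRate(double targetRate, double bufferedSec) {
    static const double kSetpointSec = 20.0;  // desired buffered playback time
    static const double kGain        = 0.02;  // rate fraction per second of error
    double error = kSetpointSec - bufferedSec;  // positive => buffer too low
    return targetRate * (1.0 + kGain * error);  // speed up / slow down the pacer
  }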

Let me know your thoughts.

Steve.

[1] Akhshabi et al., "What Happens When HTTP Adaptive Streaming Players 
Compete for Bandwidth?"