Hi all,

I was at a workshop on Adaptive Bit-Rate (ABR) transport last week, hosted by 
Cisco and attended by multiple players in the DASH/ABR space.  Here is the 
distilled version of my notes, which is still pretty long.


ON MEASURING QUALITY OF EXPERIENCE (QoE) (… TO DECIDE HOW TO ADAPT):

Determining Quality of Experience is very difficult, because it is so 
subjective. Nonetheless, anecdotal and experimental evidence suggests that 
buffering ratio seems to have the most impact, i.e. the percentage of time 
spent in the buffering state, along with the rate of buffering events (how 
often playback enters the buffering state). A 1% increase in buffering ratio 
reduces play time by 60%, according to Conviva. Additionally, viewers return 
more often when the video is not interrupted by buffering. Encoding bitrate 
seems to be equally impactful only for live broadcasts. 'Time to start' 
affects the overall viewing time inversely - i.e. the quicker the start, the 
longer people watch.

In summary, to achieve high quality:
- prevent/avoid startup failures and start the video quickly (includes seeking)
- play smoothly (shift rates smoothly, gracefully), without interruptions 
(minimize buffering), at the highest bitrate possible.
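To make the buffering metrics above concrete, here is a minimal sketch of how a 
client might compute them per session (function names and the per-minute 
normalization are my own illustration, not anything presented at the workshop):

```python
def buffering_ratio(buffering_seconds, session_seconds):
    """Fraction of total session time spent in the buffering state."""
    if session_seconds <= 0:
        return 0.0
    return buffering_seconds / session_seconds

def rebuffer_rate_per_minute(buffering_events, session_seconds):
    """How often playback enters the buffering state, per minute."""
    if session_seconds <= 0:
        return 0.0
    return buffering_events / (session_seconds / 60.0)

# Example: 3 s of buffering and 2 stall events in a 5-minute session.
print(buffering_ratio(3, 300))             # 0.01, i.e. a 1% buffering ratio
print(rebuffer_rate_per_minute(2, 300))    # 0.4 stalls per minute
```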



ON BUFFERBLOAT:

There was, of course, widespread consensus that the cause is tail-drop queue 
management of large, best-effort flows. Consequently, there was consensus that 
the best solution is deployment of AQM, although many client/application 
developers agreed that clients have to live with bufferbloat in the meantime, 
so it's not surprising that they would try to do something about it. One 
mitigation at the transport layer was TCP-Hamilton/Swinburne Delay Gradient, 
which reacts to variation in delay, and switches to loss-based congestion 
control if it determines that there is competing cross-traffic like TCP cubic.
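The delay-gradient idea can be sketched roughly as follows. This is a toy 
simplification of the concept, not the actual TCP-Hamilton/Swinburne algorithm; 
the back-off limit and the "futile back-off" heuristic for detecting loss-based 
cross-traffic are my own assumptions:

```python
class DelayGradientSketch:
    """Toy sketch: react to the *trend* in RTT rather than to loss, and fall
    back to loss-based control when backing off stops reducing delay."""

    def __init__(self, backoff_limit=3):
        self.prev_min_rtt = None
        self.futile_backoffs = 0      # back-offs that did not reduce delay
        self.backoff_limit = backoff_limit
        self.loss_mode = False        # True once we give up on delay signals

    def on_rtt_interval(self, rtts_ms):
        """Feed one interval's RTT samples; return the suggested action."""
        cur_min = min(rtts_ms)
        gradient = 0.0 if self.prev_min_rtt is None else cur_min - self.prev_min_rtt
        self.prev_min_rtt = cur_min
        if self.loss_mode:
            return "loss-based"
        if gradient > 0:              # delay rising: a queue is building
            self.futile_backoffs += 1
            if self.futile_backoffs >= self.backoff_limit:
                # Delay keeps rising despite backing off: likely competing
                # loss-based cross-traffic (e.g. cubic) filling the queue.
                self.loss_mode = True
                return "loss-based"
            return "back-off"
        self.futile_backoffs = 0
        return "increase"
```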

There was some disagreement on whether applications can do anything to help 
with bufferbloat, since the lack of AQM is the real problem. Some did agree 
that rate-limiting is a possibility, but there was discussion suggesting it 
works best once the highest rate has been achieved (primarily, it helps avoid 
oscillations among competing flows). Some aggressive use of the transport link 
is still needed in order to get good throughput, i.e. downloading at a rate 
just a little above the encoding bitrate.

(So, the question is: will a delay-based ABR algorithm help? And how do we 
make a rate-limited TCP flow aggressive enough during bandwidth estimation?)
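The "rate-limited but just above the encoding bitrate" pacing can be sketched 
as below. The 20% headroom factor is an assumed tuning knob, not a figure from 
the workshop:

```python
def target_download_rate(encoding_bitrate_bps, headroom=1.2):
    """Rate-limit target: a little above the encoding bitrate."""
    return encoding_bitrate_bps * headroom

def inter_segment_gap(segment_bytes, encoding_bitrate_bps, headroom=1.2):
    """Seconds between segment-request starts so that the long-run average
    download rate comes out near target_download_rate()."""
    return segment_bytes * 8 / target_download_rate(encoding_bitrate_bps, headroom)

# Example: 1 MB segments of a 4 Mbit/s stream, paced at 4.8 Mbit/s.
gap = inter_segment_gap(1_000_000, 4_000_000)
print(round(gap, 3))   # ~1.667 s between request starts
```

This keeps the connection busy enough to measure real throughput while capping 
the average rate, which is the anti-oscillation argument from the discussion.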



ON CHUNK DOWNLOADS:
Having clients request subsegments (via byte-range requests) from segments 
(media files of between 2 s and 10 s) stored in a CDN is a proposed best 
practice, trying to balance web cache infrastructure against client-side 
switching optimization.
Media encoders do not naturally align with DASH's fixed segment durations: 
this should also be taken into consideration when choosing chunk sizes.

(So, we should be ready to process byte-range requests for DASH, at least at 
the granularity of DASH subsegments.)
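For illustration, building such a request is just a Range header over byte 
offsets. The fixed-size-subsegment assumption below is a simplification: a 
real DASH client would take the offsets from the segment index ('sidx') box 
rather than computing them:

```python
def subsegment_range_header(index, subsegment_bytes):
    """HTTP Range header for the index-th subsegment of a segment,
    assuming (for illustration only) fixed-size subsegments."""
    start = index * subsegment_bytes
    end = start + subsegment_bytes - 1
    return {"Range": f"bytes={start}-{end}"}

print(subsegment_range_header(2, 1000))   # {'Range': 'bytes=2000-2999'}
```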



ON COMPETING FLOWS AND OSCILLATIONS:
Oscillations observed in competing flows are caused by cycles of bandwidth 
over-estimation and under-estimation driven by bursty, on-off traffic. 
Proposed solutions included network-based traffic engineering, avoiding idle 
connections (possibly via some form of aggressive rate-limiting so that 
throughput is detected correctly), and clients not being conservative in 
their choice of bitrate stream. These came from two different pieces of 
research, and I need to spend more time reconciling the ideas of bandwidth 
over-estimation and conservative bitrate choices. However, aggressive flows 
seem to be at the core of any solution that achieves stable fairness. (See 
"On Bufferbloat" for aggressive rate-limiting.)
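One way to avoid the over-estimation half of the cycle is to only feed the 
bandwidth estimator with intervals in which the connection was actually 
transferring, since samples taken across idle (off) periods are what skew the 
estimate. A minimal sketch, with an assumed EWMA smoothing constant:

```python
class ActiveOnlyThroughputEstimator:
    """EWMA throughput estimate that skips idle intervals, so bursty on-off
    download patterns do not inflate or deflate the estimate."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha            # assumed smoothing constant
        self.estimate_bps = None

    def sample(self, bytes_received, seconds, active):
        """Feed one measurement interval; returns the current estimate."""
        if not active or seconds <= 0:
            return self.estimate_bps  # ignore idle intervals entirely
        bps = bytes_received * 8 / seconds
        if self.estimate_bps is None:
            self.estimate_bps = bps
        else:
            self.estimate_bps = self.alpha * bps + (1 - self.alpha) * self.estimate_bps
        return self.estimate_bps
```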



ON RELATED NETWORK ISSUES THAT WE MIGHT NEED TO CONSIDER:
CDN quality can affect buffering, and thus QoE. DASH provides an option for 
multiple BaseURLs to choose from. It's possible that Firefox could choose a 
BaseURL (CDN?) based on transport link quality measurements.
Powerboost, which is experienced at the start of transmissions, needs to be 
taken into consideration in bandwidth estimation.
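One simple way to account for Powerboost is to discard samples from an initial 
warm-up window when estimating sustained bandwidth. The 10-second window below 
is an assumption for illustration, not a measured Powerboost duration:

```python
def sustained_bandwidth_bps(samples, warmup_seconds=10.0):
    """Average bandwidth from (seconds_since_start, bps) samples, ignoring
    the initial window in which Powerboost can inflate measurements."""
    steady = [bps for t, bps in samples if t >= warmup_seconds]
    return sum(steady) / len(steady) if steady else None

# Boosted start (10 Mbit/s) followed by a sustained 4 Mbit/s.
samples = [(1, 10e6), (5, 10e6), (12, 4e6), (20, 4e6)]
print(sustained_bandwidth_bps(samples))   # 4000000.0
```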
There is movement by companies like Cisco to reserve resources and engineer 
traffic to force ABR clients to shift streams.
SPDY and MPTCP are two protocols that should be considered in ABR 
research/system design, rather than just plain HTTP and TCP.
As these systems scale, new issues will become visible.


Thanks,
Steve.
_______________________________________________
dev-tech-network mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-tech-network
