Hi

Thanks for the pointer to the RITE paper, I will read it carefully.
Some comments on HAS/DASH:
The HAS behavior when subject to different AQMs and other competing 
background traffic depends heavily on the rate control algorithm. 
Investigations we have done, as well as e.g. the Netflix papers and more 
recently the BOLA paper (link below), show that rate-based algorithms are 
more easily starved out by competing traffic than buffer-level-based 
algorithms. The buffer-based algorithms (Netflix, BOLA) are more 
opportunistic, and TCP is allowed to work more like a large file download 
when the links are fully utilized. Rate-based algorithms, on the other hand, 
can more easily end up in a vicious circle: a low download rate is detected, 
so the next segment is requested at a reduced bitrate; other traffic then 
grabs a larger share of the link; and this repeats until the lowest bitrate 
is reached.
This is elaborated upon in the paper "A Buffer-Based Approach to Rate 
Adaptation: Evidence from a Large Video Streaming Service" by Te-Yuan Huang 
et al.
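
To make the difference concrete, here is a minimal sketch of the two decision 
rules in Python. The bitrate ladder, safety factor, and buffer thresholds are 
made-up values for illustration, not taken from any of the papers:

    # Sketch of the two ABR decision rules discussed above.
    # All numbers are hypothetical.
    BITRATES_KBPS = [235, 375, 560, 750, 1050, 1750, 2350, 3000]

    def rate_based_select(measured_throughput_kbps, safety_factor=0.8):
        # Pick the highest bitrate below a fraction of measured throughput.
        # When competing traffic depresses the estimate, the next segment
        # is requested at a lower bitrate, TCP then sends less, the
        # estimate drops further, and the client spirals toward the
        # lowest rung.
        budget = measured_throughput_kbps * safety_factor
        candidates = [r for r in BITRATES_KBPS if r <= budget]
        return candidates[-1] if candidates else BITRATES_KBPS[0]

    def buffer_based_select(buffer_level_s, min_buf_s=5.0, max_buf_s=30.0):
        # Map the buffer level onto the ladder, ignoring throughput
        # estimates entirely. As long as the buffer holds up, the client
        # keeps requesting high bitrates and lets TCP fight for its
        # share like a bulk file download.
        if buffer_level_s <= min_buf_s:
            return BITRATES_KBPS[0]
        if buffer_level_s >= max_buf_s:
            return BITRATES_KBPS[-1]
        frac = (buffer_level_s - min_buf_s) / (max_buf_s - min_buf_s)
        return BITRATES_KBPS[int(frac * (len(BITRATES_KBPS) - 1))]

Note how only the rate-based rule feeds the throughput estimate back into the 
next request, which is exactly the feedback loop that lets competing traffic 
starve it.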
I don't have full insight into how MS Silverlight operates, so I cannot say 
whether it is rate-based or buffer-based.

BOLA: http://arxiv.org/pdf/1601.06748.pdf
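
For reference, my reading of the BOLA decision rule from the paper, as a 
rough sketch. V and gamma_p are tuning parameters, and the segment sizes 
below are invented for the example:

    import math

    SEGMENT_SIZES_MBIT = [0.7, 1.1, 1.7, 2.2, 3.2, 5.3]  # one segment per bitrate

    # Utility per bitrate: log of segment size relative to the smallest
    # rung, so the lowest bitrate has utility 0.
    UTILITIES = [math.log(s / SEGMENT_SIZES_MBIT[0]) for s in SEGMENT_SIZES_MBIT]

    def bola_select(buffer_level_segments, V=0.93, gamma_p=5.0):
        # Choose the index m maximizing (V*utility_m + V*gamma_p - Q) / size_m;
        # if no option scores positive, BOLA waits instead of downloading.
        Q = buffer_level_segments
        best_idx, best_score = None, 0.0
        for m, (size, util) in enumerate(zip(SEGMENT_SIZES_MBIT, UTILITIES)):
            score = (V * util + V * gamma_p - Q) / size
            if score > best_score:
                best_idx, best_score = m, score
        return best_idx

The point to notice is that measured throughput does not appear anywhere; 
the decision depends only on the buffer level Q, which is what makes the 
algorithm opportunistic in the sense above.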

/Ingemar

> -----Original Message-----
> From: Fred Baker (fred) [mailto:f...@cisco.com]
> Sent: 2 March 2016 19:09
> To: Dave Täht
> Cc: aqm@ietf.org; bl...@lists.bufferbloat.net
> Subject: Re: [aqm] [Bloat] review: Deployment of RITE mechanisms, in
> use-case trial testbeds report part 1
> 
> 
> > On Feb 27, 2016, at 11:04 AM, Dave Täht <d...@taht.net> wrote:
> >
> > https://reproducingnetworkresearch.wordpress.com/2014/06/03/cs244-14-confused-timid-and-unstable-picking-a-video-streaming-rate-is-hard/
> >
> >>   o the results are very poor with a particular popular AQM
> >
> > Define "very poor"?
> 
> Presuming this is Adaptive Bitrate Video, as in Video-in-TCP, we (as in
> Cisco engineers, not me personally; you have met them) have observed this
> as well. Our belief is that this is at least in part a self-inflicted
> wound; when the codec starts transmission on any four-second segment
> except the first, there is no slow-start phase because the TCP session is
> still open (and in the case of some services, there are several TCP
> sessions open and the application chooses the one with the highest cwnd
> value). You can now think of the behavior of the line as repeating a
> four-phase sequence: nobody is talking, then one is talking, then both
> are, and then the other is talking. When only one is talking, whichever
> it is, its cwnd value is slowly increasing - especially if cwnd*mss/rtt <
> bottleneck line rate, minimizing RTT. At the start of the "both are
> talking" phase, the one already talking has generally found a cwnd value
> that fills the line and its RTT is slowly increasing. The one starting
> sends a burst of cwnd packets, creating an instant queue and often
> causing one or both to drop a packet - reducing their respective cwnd
> values. Depending on the TCP implementation in question at the sender, if
> the induced drop isn't a single packet but is two or three, that can make
> the affected session pause for as many RTO timeouts (Reno), RTTs (New
> Reno), or at least retransmit the lost packets in the subsequent RTT and
> then reduce cwnd by at least that amount (cubic) and maybe half (SACK).
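
To make the cwnd*mss/rtt condition above concrete, a quick back-of-the-envelope 
check in Python. All numbers are invented for the example:

    # Check the condition cwnd*mss/rtt < bottleneck rate with toy numbers.
    cwnd_segments = 40        # congestion window, in segments
    mss_bytes = 1460          # maximum segment size
    rtt_s = 0.050             # round-trip time, 50 ms
    bottleneck_bps = 20e6     # 20 Mbit/s bottleneck link

    sending_rate_bps = cwnd_segments * mss_bytes * 8 / rtt_s
    print(f"sending rate: {sending_rate_bps / 1e6:.1f} Mbit/s")  # ~9.3 Mbit/s

    if sending_rate_bps < bottleneck_bps:
        # Queue stays empty, RTT stays at its minimum, cwnd keeps growing.
        print("below bottleneck: no standing queue, RTT at minimum")
    else:
        # Excess packets sit in the queue and RTT starts to climb.
        print("at/above bottleneck: standing queue forming, RTT rising")

At 40 segments of 1460 bytes over a 50 ms RTT the flow offers about 9.3 
Mbit/s, well under the 20 Mbit/s bottleneck, so cwnd keeps growing until the 
product crosses the line rate and the standing queue (and the burst damage 
described above) begins.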