> On 21 Mar, 2016, at 20:04, Bob Briscoe <resea...@bobbriscoe.net> wrote:
> 
> The experience that led me to understand this problem was when a bunch of 
> colleagues tried to set up a start-up (a few years ago now) to sell a range 
> of "equitable quality" video codecs (ie constant quality variable bit-rate 
> instead of constant bit-rate variable quality). Then, the first ISP they 
> tried to sell to had WFQ in its broadband remote access servers. Even though 
> this was between users, not flows, when video was the dominant traffic, this 
> overrode the benefits of their cool codecs (which would have delivered twice 
> as many videos with the same quality over the same capacity).

This result makes no sense.

You state that the new codecs “would have delivered twice as many videos with 
the same quality over the same capacity”, and video “was the dominant traffic”, 
*and* the network was the bottleneck while running the new codecs.

The logical conclusion must be one of the following: the network was severely 
under-provisioned and was *also* the bottleneck under the old codecs, just twice 
as badly; or there was insufficient buffering at the video clients to cope with 
temporary shortfalls in link bandwidth; or demand for videos doubled because the 
new codecs provided a step-change in the user experience (which feeds back into 
the network-capacity conclusion).

In short, it was not WFQ that caused the problem.

 - Jonathan Morton

_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
