On Tue, Apr 15, 2014 at 6:57 AM, Nicolas KUHN
<nicolas.k...@telecom-bretagne.eu> wrote:
> Thank you for detailing the content of the CableLabs document and where
> these 700kB come from.
> Concerning your last point:
>
>> As such I would be strongly in favour of changing the draft to actually
>> describe realistic web client behaviour, rather than just summarising it
>> as "repeated downloads of 700KB".

+100.

>
>
> I understand that it may be a drastic simplification to summarise the
> web client behaviour as only repeated downloads of 700kB. However, the
> draft may not be able to detail realistic web client behaviour: I believe
> that would be out of scope, and the draft cannot contain such a level of
> complexity for all the covered protocols/traffic.
> I propose the following changes:
>
> Was:
>         - Realistic HTTP web traffic (repeated download of 700kB);
> Changed to:
>         - Realistic HTTP web page downloads: the tester should at least
> consider repeated downloads of 700kB - for more accurate web traffic, a
> single user web page download [White] may be used;

> What do you think ?

An AQM evaluation guide MUST include evaluations against real traffic
patterns. Period. The [White] PLT model was decent; the repeated
single-flow 700kB download proposal is nuts. (I can certainly see
attempting to emulate DASH traffic as well, however.)

I have further pointed out some flaws in the [White] PLT model in
previous emails - notably the effect of not emulating DNS traffic - and
for some time I have been working towards acquiring a reasonable
distribution of DNS hit, miss, and fill numbers to plug into it. That
has required work on finding a decent web benchmark and on acquiring
statistics that make sense, and some of that work is beginning to bear
fruit: dnsmasq, for example, has sprouted the ability to collect
statistics, and we are trying to get the Chrome web page benchmarker
working again. You ignore the overhead of DNS lookups at your peril.
There are other overheads worth looking at, too...
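
To make the idea concrete, here is a toy python sketch of folding a
DNS hit/miss/fill distribution into a PLT-style page fetch estimate.
Every probability and latency in it is a made-up placeholder (that is
exactly the data we are still trying to collect), so treat it as the
shape of the calculation, not as numbers to use:

import random

# Hypothetical distribution: (outcome, probability, latency in ms).
# These values are illustrative only, not measurements.
DNS_OUTCOMES = [
    ("cache_hit",  0.60,   1.0),   # answered locally, e.g. by dnsmasq
    ("cache_miss", 0.30,  45.0),   # forwarded to the upstream resolver
    ("cache_fill", 0.10, 120.0),   # cold cache / full recursive walk
]

def dns_lookup_delay():
    """Draw one DNS lookup delay (ms) from the hypothetical distribution."""
    r = random.random()
    cumulative = 0.0
    for _name, prob, delay_ms in DNS_OUTCOMES:
        cumulative += prob
        if r <= cumulative:
            return delay_ms
    return DNS_OUTCOMES[-1][2]

def page_fetch_time_ms(object_fetch_times_ms, unique_hostnames=4):
    """Very rough PLT estimate: object transfer times plus one DNS
    lookup per distinct server hostname on the page."""
    dns_ms = sum(dns_lookup_delay() for _ in range(unique_hostnames))
    return dns_ms + sum(object_fetch_times_ms)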

Similarly, I regard testing a correct emulation of BitTorrent's
real-world behavior in an AQM'd environment as pretty critical.
[White] was not even close in this respect (but it was a good first
try!).

Overall, I suggest that we also adopt the same tests that other WGs
are proposing for their protocols. The rmcat WG had a good starter set
here:

http://www.ietf.org/proceedings/89/slides/slides-89-rmcat-2.pdf




>
> Regards,
>
> Nicolas
>
> On Apr 15, 2014, at 12:28 PM, Toke Høiland-Jørgensen <t...@toke.dk> wrote:
>
>> Nicolas KUHN <nicolas.k...@telecom-bretagne.eu> writes:
>>
>>> and realistic HTTP web traffic (repeated download of 700kB). As a reminder,
>>> please find here the comments of Shahid Akhtar regarding these values:
>>
>> The CableLabs work doesn't specify web traffic as simply "repeated
>> downloads of 700KB", though. Quoting from [0], the actual wording is:
>>
>>> "Webs" indicates the number of simultaneous web users (repeated
>>> downloads of a 700 kB page as described in Appendix A of [White]),
>>
>> Where [White] refers to [1] which states (in the Appendix):
>>
>>> The file sizes are generated via a log-normal distribution, such that
>>> the log10 of file size is drawn from a normal distribution with mean =
>>> 3.34 and standard deviation = 0.84. The file sizes (yi) are calculated
>>> from the resulting 100 draws (xi) using the following formula, in
>>> order to produce a set of 100 files whose total size =~ 600 kB (614400
>>> B):
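
For those who want to see the shape of that distribution, a quick
python sketch: the mean/sd of the log10 draws are as quoted above, but
the rescaling formula itself is not quoted here, so the simple
proportional scaling below is only a guess at it.

import random

def generate_file_sizes(n=100, target_total=614400, seed=None):
    """Draw n file sizes whose log10 is N(3.34, 0.84), then rescale so
    the set totals roughly 600 kB. The rescaling step is an assumption;
    [White] Appendix A gives the exact formula."""
    rng = random.Random(seed)
    draws = [rng.gauss(3.34, 0.84) for _ in range(n)]   # x_i = log10(size)
    sizes = [10 ** x for x in draws]                    # raw sizes y_i
    scale = target_total / sum(sizes)                   # assumed normalisation
    return [max(1, int(round(s * scale))) for s in sizes]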
>>
>> And in the main text it specifies (in section 3.2.3) the actual model
>> for the web traffic used:
>>
>>> Model single user web page download as follows:
>>>
>>> - Web page modeled as single HTML page + 100 objects spread evenly
>>> across 4 servers. Web object sizes are currently fixed at 25 kB each,
>>> whereas the initial HTML page is 100 kB. Appendix A provides an
>>> alternative page model that may be explored in future work.
>>>
>>> - Server RTTs set as follows (20 ms, 30 ms, 50 ms, 100 ms).
>>>
>>> - Initial HTTP GET to retrieve a moderately sized object (100 kB HTML
>>> page) from server 1.
>>>
>>> - Once initial HTTP GET completes, initiate 24 simultaneous HTTP GETs
>>> (via separate TCP connections), 6 connections each to 4 different
>>> server nodes
>>>
>>> - Once each individual HTTP GET completes, initiate a subsequent GET
>>> to the same server, until 25 objects have been retrieved from each
>>> server.
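
The connection pattern above, as a toy python sketch. http_get() is
just a placeholder for whatever client the testbed actually uses, so
this only shows the scheduling of the GETs, not a real traffic
generator:

import threading

# Servers and RTTs from the quoted model; the RTTs are carried along
# for reference but a real testbed would apply them as path delays.
SERVERS = [
    {"name": "server1", "rtt_ms": 20},
    {"name": "server2", "rtt_ms": 30},
    {"name": "server3", "rtt_ms": 50},
    {"name": "server4", "rtt_ms": 100},
]
HTML_SIZE = 100 * 1024
OBJECT_SIZE = 25 * 1024
OBJECTS_PER_SERVER = 25
CONNECTIONS_PER_SERVER = 6        # 4 servers * 6 = 24 simultaneous GETs

def http_get(server, size):
    """Placeholder: issue one HTTP GET of `size` bytes to `server`."""
    pass  # a real client would open/reuse a TCP connection here

def connection_worker(server, remaining, lock):
    """One persistent connection: keep fetching 25 kB objects until the
    per-server quota of 25 objects is exhausted."""
    while True:
        with lock:
            if remaining[server["name"]] == 0:
                return
            remaining[server["name"]] -= 1
        http_get(server, OBJECT_SIZE)

def download_page():
    # Initial GET: the 100 kB HTML page from server 1.
    http_get(SERVERS[0], HTML_SIZE)
    # Then 24 simultaneous GET streams, 6 per server, each issuing a new
    # GET as soon as the previous one completes.
    remaining = {s["name"]: OBJECTS_PER_SERVER for s in SERVERS}
    lock = threading.Lock()
    threads = [
        threading.Thread(target=connection_worker, args=(s, remaining, lock))
        for s in SERVERS for _ in range(CONNECTIONS_PER_SERVER)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()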
>>
>>
>> Which is a pretty far cry from just saying "repeated downloads of 700
>> KB" and, while still somewhat bigger, matches the numbers from Google
>> [2] better in terms of the distribution between page size and other
>> objects. And, more importantly, it features the kind of parallelism and
>> interactions that a real web browser exhibits; which, as Shahid
>> mentioned, is (or can be) quite important for the treatment it receives
>> from an AQM.
>>
>> As such I would be strongly in favour of changing the draft to actually
>> describe realistic web client behaviour, rather than just summarising it
>> as "repeated downloads of 700KB".
>>
>>
>> -Toke
>>
>>
>> [0] 
>> http://www.cablelabs.com/wp-content/uploads/2013/11/Active_Queue_Management_Algorithms_DOCSIS_3_0.pdf
>>
>> [1] 
>> http://www.cablelabs.com/downloads/pubs/PreliminaryStudyOfCoDelAQM_DOCSISNetwork.pdf
>>
>> [2] https://developers.google.com/speed/articles/web-metrics
>



-- 
Dave Täht

NSFW: 
https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article

_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
