Nicolas KUHN <nicolas.k...@telecom-bretagne.eu> writes:

> and realistic HTTP web traffic (repeated download of 700kB). As a reminder,
> please find here the comments of Shahid Akhtar regarding these values:

The CableLabs work doesn't specify web traffic as simply "repeated
downloads of 700 kB", though. Quoting from [0], the actual wording is:

> "Webs" indicates the number of simultaneous web users (repeated
> downloads of a 700 kB page as described in Appendix A of [White]),

Where [White] refers to [1], which states (in the Appendix):

> The file sizes are generated via a log-normal distribution, such that
> the log10 of file size is drawn from a normal distribution with mean =
> 3.34 and standard deviation = 0.84. The file sizes (yi) are calculated
> from the resulting 100 draws (xi) using the following formula, in
> order to produce a set of 100 files whose total size =~ 600 kB (614400
> B):
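
For concreteness, here's a minimal Python sketch of that generation
procedure. The quote above elides the actual formula, so the final
step (rescaling the draws so that the 100 files total 614400 B) is my
assumption based on the stated goal; treat it as illustrative, not as
[White]'s exact method:

    import numpy as np

    rng = np.random.default_rng(42)

    # log10(file size) drawn from Normal(mean=3.34, sd=0.84); 100 draws (the xi)
    x = 10 ** rng.normal(loc=3.34, scale=0.84, size=100)

    # Assumed normalization: scale the draws (xi) so the resulting
    # file sizes (yi) total ~600 kB (614400 B)
    y = x * (614400 / x.sum())
    sizes = y.round().astype(int)

    print(len(sizes), sizes.sum())  # 100 files, ~614400 bytes total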

And in the main text (section 3.2.3) it specifies the actual model
used for the web traffic:

> Model single user web page download as follows:
>
> - Web page modeled as single HTML page + 100 objects spread evenly
> across 4 servers. Web object sizes are currently fixed at 25 kB each,
> whereas the initial HTML page is 100 kB. Appendix A provides an
> alternative page model that may be explored in future work.
> 
> - Server RTTs set as follows (20 ms, 30 ms, 50 ms, 100 ms).
> 
> - Initial HTTP GET to retrieve a moderately sized object (100 kB HTML
> page) from server 1.
> 
> - Once initial HTTP GET completes, initiate 24 simultaneous HTTP GETs
> (via separate TCP connections), 6 connections each to 4 different
> server nodes
> 
> - Once each individual HTTP GET completes, initiate a subsequent GET
> to the same server, until 25 objects have been retrieved from each
> server.
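
To make that traffic pattern concrete, here is a minimal asyncio
sketch of the quoted model. It simulates only the connection
structure; the fetch() stub, the assumed 10 MB/s transfer rate and
the queue-based bookkeeping are my own illustrative choices, not part
of the CableLabs description:

    import asyncio

    SERVER_RTTS = {1: 0.020, 2: 0.030, 3: 0.050, 4: 0.100}  # seconds
    OBJECT_SIZE = 25 * 1024        # 25 kB per web object
    HTML_SIZE = 100 * 1024         # 100 kB initial HTML page
    OBJECTS_PER_SERVER = 25
    CONNS_PER_SERVER = 6
    RATE = 10e6                    # assumed transfer rate, bytes/s

    async def fetch(server: int, size: int) -> None:
        # Stand-in for an HTTP GET: one server RTT plus a crude
        # size-dependent transfer delay (assumed, for illustration).
        await asyncio.sleep(SERVER_RTTS[server] + size / RATE)

    async def connection(server: int, queue: asyncio.Queue) -> None:
        # One TCP connection: issue sequential GETs until the server's
        # object queue is drained (the "subsequent GET" rule above).
        while True:
            try:
                queue.get_nowait()
            except asyncio.QueueEmpty:
                return
            await fetch(server, OBJECT_SIZE)

    async def page_download() -> None:
        # Step 1: initial GET for the 100 kB HTML page from server 1.
        await fetch(1, HTML_SIZE)

        # Step 2: 24 simultaneous GETs -- 6 connections to each of
        # the 4 servers -- each connection fetching further objects
        # until 25 objects have been retrieved from its server.
        tasks = []
        for server in SERVER_RTTS:
            queue = asyncio.Queue()
            for n in range(OBJECTS_PER_SERVER):
                queue.put_nowait(n)
            for _ in range(CONNS_PER_SERVER):
                tasks.append(connection(server, queue))
        await asyncio.gather(*tasks)

    asyncio.run(page_download())

The point is the structure: 24 parallel connections issuing
staggered, dependent GETs, which stresses an AQM quite differently
from a single long bulk transfer.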


This is a pretty far cry from just saying "repeated downloads of 700
kB" and, while still somewhat bigger, matches the numbers from Google
[2] better in terms of the distribution between page size and other
objects. More importantly, it features the kind of parallelism and
interaction that a real web browser exhibits, which, as Shahid
mentioned, can be quite important for the treatment the traffic
receives from an AQM.

As such I would be strongly in favour of changing the draft to actually
describe realistic web client behaviour, rather than just summarising it
as "repeated downloads of 700KB".


-Toke


[0] http://www.cablelabs.com/wp-content/uploads/2013/11/Active_Queue_Management_Algorithms_DOCSIS_3_0.pdf

[1] http://www.cablelabs.com/downloads/pubs/PreliminaryStudyOfCoDelAQM_DOCSISNetwork.pdf

[2] https://developers.google.com/speed/articles/web-metrics
