Re: [aqm] Sane analysis of typical traffic

2014-07-17 Thread Akhtar, Shahid (Shahid)
Dave,

Web traffic was part of the study - you should be able to see page load times 
on slide 12

Lower levels of traffic were also tested (slides 11, 12); however, the highest 
impact of AQM was observed at high traffic levels

We included the largest components of Internet traffic (HAS, web traffic, video 
progressive download), which constitute over 75% of all traffic at peak hour 
(https://www.sandvine.com/downloads/general/global-internet-phenomena/2014/1h-2014-global-internet-phenomena-report.pdf)

The study presented in Nov was our initial work and is not the only component 
in development of any configuration guidelines

-Shahid.

-Original Message-
From: Dave Taht [mailto:dave.t...@gmail.com] 
Sent: Tuesday, July 15, 2014 4:00 PM
To: Akhtar, Shahid (Shahid)
Cc: Fred Baker (fred); John Leslie; aqm@ietf.org
Subject: Sane analysis of typical traffic

changing the title as this is not relevant to the aqm document...

... but to an attitude that is driving me absolutely crazy.

On Tue, Jul 15, 2014 at 10:46 AM, Akhtar, Shahid (Shahid) 
shahid.akh...@alcatel-lucent.com wrote:
 Dave,

 The message of the results that we presented in November is that it is 
 possible, with currently deployed access hardware, to configure RED so that 
 it consistently improves the end user experience of common network services 
 over Tail-Drop (which is most often configured), and that this improvement 
 can be achieved with a fixed set of RED configuration guidelines.

 We did not run experiments with sfq_codel because it is not deployed in 
 access networks today. We ran experiments with plain CoDel to understand the 
 difference between a well-configured RED and a more recent single-bucket AQM 
 in our target scenarios, and as reported, didn't observe significant 
 differences in application QoE.

Your application was a bunch of video streams. Not web traffic, not voip, 
not gaming, not bittorrent, not a family of four doing a combination of these 
things, nor a small business that isn't going to use HAS at all.

Please don't over-generalize your results. "RED proven suitable for a family of 
couch potatoes surfing the internet and watching 4 movies at once over the 
internet, but not 5, at 8mbit/sec" might have been a better title for this paper.

In this fictional family, just one kid under the stairs, trying to do something 
useful, interactive and/or fun, can both wreck the couch potatoes' internet 
experience and have his own wrecked as well.


 Additional inline clarifications below.

 -Shahid.

 -Original Message-
 From: Dave Taht [mailto:dave.t...@gmail.com]
 Sent: Monday, July 14, 2014 2:00 PM
 To: Akhtar, Shahid (Shahid)
 Cc: Fred Baker (fred); John Leslie; aqm@ietf.org
 Subject: Re: [aqm] Obsoleting RFC 2309

 On Mon, Jul 14, 2014 at 11:08 AM, Akhtar, Shahid (Shahid) 
 shahid.akh...@alcatel-lucent.com wrote:
 Hi Fred, All,

 Let me add an additional thought to this issue.

 Given that (W)RED has been deployed extensively in operators' networks, and 
 most vendors are still shipping equipment with (W)RED, the concern is that 
 obsoleting 2309 would discourage research on trying to find good 
 configurations to make (W)RED work.

 We had previously given a presentation at the ICCRG on why RED can still 
 provide value to operators 
 (http://www.ietf.org/proceedings/88/slides/slides-88-iccrg-0.pdf). We have a 
 paper at Globecom 2014 that explains this study much better, but I cannot 
 share a link to it until the proceedings are available.

 My problem with the above preso and no doubt the resulting study is 
 that it doesn't appear to cover the classic, most basic, bufferbloat 
 scenario, which is

 1 stream up, 1 stream down, one ping (or some form of voip-like traffic) 
 and usually on an edge network with asymmetric bandwidth.

Two additional analyses of use from the download perspective might be Arris's 
analysis of the benefits of RED and FQ over cable head ends:

http://snapon.lab.bufferbloat.net/~d/trimfat/Cloonan_Paper.pdf

and the cable labs work which focused more on the effects of traffic going 
upstream which has been discussed fairly extensively here.

 SA: We tried to cover the typical expected traffic over the Internet.

I don't know where you get your data, but my measured edge traffic looks 
nothing like yours. Sure, bandwidth-wise there's the netflix spike 3 hours out 
of the day, but the rest sure isn't HAS.


Most of the traffic is now HAS traffic (as per the Sandvine report), so if 
only a single stream is present, it is likely to be HAS.

The closest approximation of a continuous TCP stream, as you mention, would be 
a progressive download which can last long enough to look continuous. These 
were modeled together with other types of traffic.
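[Editor's note: the on/off distinction matters for AQM behavior. A steady-state HAS client fetches each chunk at line rate and then idles, which is why it looks nothing like one continuous TCP flow. A rough sketch of that burst pattern; the 2 Mbit/s video rate and 4 s chunks are assumptions drawn from later messages in this thread, not figures from the study.]

```python
def has_bursts(duration_s, chunk_s=4.0, video_bps=2e6, link_bps=8e6):
    """Return (start, end) times of link-busy bursts for one HAS
    client in steady state: each chunk_s seconds of video is fetched
    at full link rate, then the client idles until the next chunk."""
    bursts = []
    t = 0.0
    while t < duration_s:
        fetch_s = chunk_s * video_bps / link_bps  # time to pull one chunk
        bursts.append((t, t + fetch_s))
        t += chunk_s                              # next chunk is due
    return bursts
```

On an 8 Mbit/s link, a single 2 Mbit/s stream under these assumptions keeps the link busy only a quarter of the time.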

You keep saying download, download, download. I am merely saying: please ALWAYS 
try an upload at the same time you are testing downloads
- be it videoconferencing (which can easily use up that 1.6mbit link), a 
youtube upload, an rsync backup, an scp

Re: [aqm] Obsoleting RFC 2309

2014-07-15 Thread Akhtar, Shahid (Shahid)
Dave,

The message of the results that we presented in November is that it is 
possible, with currently deployed access hardware, to configure RED so that it 
consistently improves the end user experience of common network services over 
Tail-Drop (which is most often configured), and that this improvement can be 
achieved with a fixed set of RED configuration guidelines. 

We did not run experiments with sfq_codel because it is not deployed in access 
networks today. We ran experiments with plain CoDel to understand the 
difference between a well-configured RED and a more recent single-bucket AQM in 
our target scenarios, and as reported, didn't observe significant differences 
in application QoE.

Additional inline clarifications below.

-Shahid.

-Original Message-
From: Dave Taht [mailto:dave.t...@gmail.com] 
Sent: Monday, July 14, 2014 2:00 PM
To: Akhtar, Shahid (Shahid)
Cc: Fred Baker (fred); John Leslie; aqm@ietf.org
Subject: Re: [aqm] Obsoleting RFC 2309

On Mon, Jul 14, 2014 at 11:08 AM, Akhtar, Shahid (Shahid) 
shahid.akh...@alcatel-lucent.com wrote:
 Hi Fred, All,

 Let me add an additional thought to this issue.

 Given that (W)RED has been deployed extensively in operators' networks, and 
 most vendors are still shipping equipment with (W)RED, the concern is that 
 obsoleting 2309 would discourage research on trying to find good 
 configurations to make (W)RED work.

 We had previously given a presentation at the ICCRG on why RED can still 
 provide value to operators 
 (http://www.ietf.org/proceedings/88/slides/slides-88-iccrg-0.pdf). We have a 
 paper at Globecom 2014 that explains this study much better, but I cannot 
 share a link to it until the proceedings are available.

My problem with the above preso and no doubt the resulting study is that it 
doesn't appear to cover the classic, most basic, bufferbloat scenario, which is

1 stream up, 1 stream down, one ping (or some form of voip-like traffic) and 
usually on an edge network with asymmetric bandwidth.

SA: We tried to cover the typical expected traffic over the Internet. Most of 
the traffic is now HAS traffic (as per the sandvine report), so if only a 
single stream is present, it is likely to be HAS. The closest approximation of 
a continuous TCP stream, as you mention, would be a progressive download which 
can last long enough to look continuous. These were modeled together with other 
types of traffic.

It's not clear from the study that this is an 8mbit down 1mbit up DSL network 
(?), 

SA: In the study presented, it was 8M down and 1.6M up - slide 9

nor is it clear whether RED is being applied in both directions or only one 
direction?

SA: AQMs (including RED) were only applied in downstream direction - slide 9

(and the results you get from an asymmetric network are quite interesting, 
particularly in the face of any cross traffic at all)

SA: I am not sure what you mean by cross-traffic; the DSL link only goes to one 
residence (or business). The typical traffic to it was modeled.

And I do keep hoping that someone will do a study of how prevalent fq is on dsl 
devices; I keep accumulating anecdotal data that it's fairly common.

 One of the major reasons why operators chose not to deploy (W)RED was a 
 number of studies and research which gave operators conflicting messages on 
 the value of (W)RED and appropriate parameters to use. Some of these are 
 mentioned in the presentation above.

 In it we show that the previous studies which showed low value for RED used 
 web traffic which had very small file sizes (of the order of 5-10 packets), 
 which reduces the effectiveness of all AQMs which work by dropping or ECN 
 marking of flows to indicate congestion. Today's traffic is composed of 
 mostly multi-media traffic like HAS or video progressive download which has 
 much larger file sizes and can be controlled much better with AQMs and in our 
 research we show that RED can be quite effective with this traffic, with 
 little tuning needed for typical residential access flows.

I tend to disagree with this (well, not with the trendline toward big video 
flows), but my own studies are focused on retaining high-performance gaming, 
voip, and videoconferencing traffic in the face of things like HAS video 
traffic. I would be much happier about your study had you also run that sort of 
traffic while testing your video downloads and aqms... and why is it so hard to 
get folk to try sfq_codel, etc, while they try everything else?

SA: We did not model smaller flows (in terms of throughput). Small UDP flows, 
which I assume all/most of the above would fall into, would benefit significantly 
from RED - slide 13 shows a much lower average queue length with RED. Other 
studies have shown that small UDP flows mixed with lots of TCP traffic achieve 
significantly lower packet loss with RED (M. May et al., “Reasons not 
to deploy RED”). 


 Prefer John's proposal of updating 2309 rather than obsoleting, but if we can 
 have some text in Fred's

Re: [aqm] Obsoleting RFC 2309

2014-07-15 Thread Akhtar, Shahid (Shahid)
Fred,

Per your last text below, are you asking for a short informational draft in which 
we describe our experiments and summarize our configuration guidelines 
specifically for RED in access downlinks? Is this what you mean by documenting 
WRED?

-Shahid.

-Original Message-
From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Fred Baker (fred)
Sent: Monday, July 14, 2014 1:23 PM
To: Akhtar, Shahid (Shahid)
Cc: John Leslie; aqm@ietf.org
Subject: Re: [aqm] Obsoleting RFC 2309


On Jul 14, 2014, at 11:08 AM, Akhtar, Shahid (Shahid) 
shahid.akh...@alcatel-lucent.com wrote:

 Hi Fred, All,
 
 Let me add an additional thought to this issue.
 
 Given that (W)RED has been deployed extensively in operators' networks, and 
 most vendors are still shipping equipment with (W)RED, the concern is that 
 obsoleting 2309 would discourage research on trying to find good 
 configurations to make (W)RED work.

Well, note that we're not saying to pull RED out of the network; we're saying 
to not make it the default. Note that even in the networks you mention, (W)RED 
is not the default configuration; you have to give it several parameters, and 
therefore have to actively turn it on.

 We had previously given a presentation at the ICCRG on why RED can still 
 provide value to operators 
 (http://www.ietf.org/proceedings/88/slides/slides-88-iccrg-0.pdf). We have a 
 paper at Globecom 2014 that explains this study much better, but I cannot 
 share a link to it until the proceedings are available.
 
 One of the major reasons why operators chose not to deploy (W)RED was a 
 number of studies and research which gave operators conflicting messages on 
 the value of (W)RED and appropriate parameters to use. Some of these are 
 mentioned in the presentation above. 
 
 In it we show that the previous studies which showed low value for RED used 
 web traffic which had very small file sizes (of the order of 5-10 packets), 
 which reduces the effectiveness of all AQMs which work by dropping or ECN 
 marking of flows to indicate congestion. Today's traffic is composed of 
 mostly multi-media traffic like HAS or video progressive download which has 
 much larger file sizes and can be controlled much better with AQMs and in our 
 research we show that RED can be quite effective with this traffic, with 
 little tuning needed for typical residential access flows.
 
 Prefer John's proposal of updating 2309 rather than obsoleting, but if we can 
 have some text in Fred's draft acknowledging the large deployment of (W)RED 
 and the need to still find good configurations - that may work. I can 
 volunteer to provide that text.

The existing draft doesn't mention any specific AQM algorithms. It seems to me 
that the more consistent approach would be to write a short draft documenting 
WRED, that the WG could pass along as informational or experimental on the 
basis of not meeting the requirements of being self-configuring/tuning, at the 
same time as it passes along others as PS or whatever.


 -Shahid.
 
 -Original Message-
 From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Fred Baker (fred)
 Sent: Monday, July 14, 2014 2:06 AM
 To: John Leslie
 Cc: aqm@ietf.org
 Subject: Re: [aqm] Obsoleting RFC 2309
 
 
 On Jul 3, 2014, at 10:22 AM, John Leslie j...@jlc.net wrote:
 
 It would be possible for someone to argue that restating a 
 recommendation from another document weakens both statements; but I
 disagree: We should clearly state what we mean in this document, and 
 I believe this wording does so.
 
 The argument for putting it in there started from the fact that we are 
 obsoleting 2309, as stated in the charter. I would understand a document that 
 updates 2309 to be in a strange state if 2309 is itself made historic or 
 obsolete. So we carried the recommendation into this document so it wouldn't 
 get lost.
 
 ___
 aqm mailing list
 aqm@ietf.org
 https://www.ietf.org/mailman/listinfo/aqm



Re: [aqm] Obsoleting RFC 2309

2014-07-14 Thread Akhtar, Shahid (Shahid)
Hi Fred, All,

Let me add an additional thought to this issue.

Given that (W)RED has been deployed extensively in operators' networks, and 
most vendors are still shipping equipment with (W)RED, the concern is that 
obsoleting 2309 would discourage research on trying to find good configurations 
to make (W)RED work.

We had previously given a presentation at the ICCRG on why RED can still 
provide value to operators 
(http://www.ietf.org/proceedings/88/slides/slides-88-iccrg-0.pdf). We have a 
paper at Globecom 2014 that explains this study much better, but I cannot share 
a link to it until the proceedings are available.

One of the major reasons why operators chose not to deploy (W)RED was a number 
of studies and research which gave operators conflicting messages on the value 
of (W)RED and appropriate parameters to use. Some of these are mentioned in the 
presentation above. 

In it we show that the previous studies which showed low value for RED used web 
traffic which had very small file sizes (of the order of 5-10 packets), which 
reduces the effectiveness of all AQMs which work by dropping or ECN marking of 
flows to indicate congestion. Today's traffic is composed of mostly multi-media 
traffic like HAS or video progressive download which has much larger file sizes 
and can be controlled much better with AQMs and in our research we show that 
RED can be quite effective with this traffic, with little tuning needed for 
typical residential access flows.

Prefer John's proposal of updating 2309 rather than obsoleting, but if we can 
have some text in Fred's draft acknowledging the large deployment of (W)RED and 
the need to still find good configurations - that may work. I can volunteer to 
provide that text.

-Shahid.

-Original Message-
From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Fred Baker (fred)
Sent: Monday, July 14, 2014 2:06 AM
To: John Leslie
Cc: aqm@ietf.org
Subject: Re: [aqm] Obsoleting RFC 2309


On Jul 3, 2014, at 10:22 AM, John Leslie j...@jlc.net wrote:

 It would be possible for someone to argue that restating a 
 recommendation from another document weakens both statements; but I 
 disagree: We should clearly state what we mean in this document, and I 
 believe this wording does so.

The argument for putting it in there started from the fact that we are 
obsoleting 2309, as stated in the charter. I would understand a document that 
updates 2309 to be in a strange state if 2309 is itself made historic or 
obsolete. So we carried the recommendation into this document so it wouldn't 
get lost.



Re: [aqm] Obsoleting RFC 2309

2014-07-02 Thread Akhtar, Shahid (Shahid)
Hi Wes,

Can you share the update/text that John Leslie had suggested which Fred 
mentions in his comment.

Thanks,

-Shahid.

-Original Message-
From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Wesley Eddy
Sent: Tuesday, July 01, 2014 4:27 PM
To: aqm@ietf.org
Subject: [aqm] Obsoleting RFC 2309

There has been a bit of discussion last week about 
draft-ietf-aqm-recommendation and how to improve the text near the beginning, 
that leads to and sets context for the actual recommendations.

John Leslie noticed that some of the things Bob Briscoe had mentioned stem from 
trying to work from RFC 2309 as the starting point.  We have been planning to 
Obsolete and replace 2309 with this document.  John suggested instead to let it 
live on, and have this new one only Update it, and has suggested specific 
changes that could be edited in, if this were the case.

I think we need to make a conscious on-list decision about this, and decide to 
either confirm that Obsoleting 2309 is correct, or to change course.

Others can amplify or correct these, but I think the points for each would be:

Obsoleting 2309
- 2309 was an IRTF document from a closed RG, and we now can make
  a stronger statement as an IETF group with a BCP
- 2309 is a bit RED-centric, and we now think that people should
  be looking at things other than RED

Not-Obsoleting 2309 (e.g. Updating 2309)
- 2309 is a snapshot in history of the E2E RG's thinking
- 2309 is mostly oriented towards AQM as a mitigation for congestion
  collapse, whereas now we're more interested in reducing latency

Please share any thoughts you have on this, and what should be done.

--
Wes Eddy
MTI Systems



Re: [aqm] [AQM Evaluation Guidelines]

2014-04-15 Thread Akhtar, Shahid (Shahid)
Hi Nicholas,

Please see comments inline.

Best regards,

-Shahid.


From: Nicolas KUHN [mailto:nicolas.k...@telecom-bretagne.eu]
Sent: Tuesday, April 15, 2014 3:24 AM
To: Akhtar, Shahid (Shahid)
Cc: aqm@ietf.org; Richard Scheffenegger
Subject: Re: [aqm] [AQM Evaluation Guidelines]

Hi Shahid Akhtar, all,

After the discussions at IETF 89 in London, there have been lots of modifications 
to the draft organisation (as we already advertised on the list).
The authors and I just discussed your points considering the size of the files 
for the bulk transfer and the HTTP traffic.
In the current version of the draft, we consider the values mentioned in the 
cablelabs work [0].

As a result, we consider bulk TCP transfers (repeating 5MB file transmission) 
and realistic HTTP web traffic (repeated download of 700kB). As a reminder, 
please find here the comments of Shahid Akhtar regarding these values:

Bulk TCP transfer: (continuous file transmission, or repeating 5MB file 
transmission);
The largest type of traffic on the Internet that uses bulk TCP - or large files 
via TCP - is progressive download.
3Mbps and it is on average 4.5 minutes long - so the average file size is 
considerably larger than 5MB. We have used an average of 38MB assuming 480p 
video and 4.5 minutes for some of our experiments. The inter-arrival time of 
these files should have a realistic random distribution as well as size of the 
files.

SA: The Cablelabs document mentioned below has a set of tests for bulk TCP 
upload traffic; they use a single continuous TCP flow for certain test cases, 
and multiple TCP flows with 20MB or 250KB files repeatedly being sent upstream 
for other test cases. Their application is upstream traffic, which behaves 
differently from downstream traffic.
traffic 
(https://www.sandvine.com/downloads/general/global-internet-phenomena/2013/exposing-the-technical-and-commercial-factors-underlying-internet-quality-of-experience.pdf)
 shows that real-time entertainment (Netflix, YouTube and others) constitute 
68% of downstream traffic (Figure 9), of which YouTube (progressive download) is 
at 17% of total traffic. Using repeated transfers of TCP files as suggested in 
the Cablelabs work would miss the effect of the random on-off patterns that PD 
flows generate on the downstream portion. So I would still recommend the use of 
PD (YouTube) statistics such as above for modeling the Bulk TCP traffic 
downstream traffic.
Realistic HTTP web traffic (repeated download of 700kB);
According to statistics that Google collected from 2010 about web traffic, the 
average GET file size for web traffic was only 7KB. It is very important to 
have the size of these files correct since AQMs can help transfers of small 
files when large file downloads are also occurring. The effect of AQM on a 7KB 
file would be very different than on a 700KB file. Web traffic also has a 
complex inter-arrival time relationship between the GET requests as well as the 
pages that produce these GET requests. However to make the testing simpler, a 
repeated download of a 7KB file with a Pareto distributed size and Pareto 
distributed inter-arrival time may be an OK compromise.

SA: I believe Toke has already responded that the CableLabs work mentioned 
below refers to another document, which uses 25KB for responses to web page 
GETs. This is hugely better than using repeated 700KB downloads and I am OK 
with using the model in the other document. I am still concerned that the 
average size of the response may be less than 25KB. The statistics collected by 
Google (https://developers.google.com/speed/articles/web-metrics) show an 
average size of 7KB, but they are 4 years old and I cannot find a more recent 
number.

We expect to stick with the cable labs values [0]. Please let us know if you 
have any comments regarding this choice.

The Cablelabs work does not mention any HTTP adaptive streaming (HAS) traffic 
models - would that be included in the guidelines?
Kind regards,

Nicolas KUHN

[0] 
http://www.cablelabs.com/wp-content/uploads/2013/11/Active_Queue_Management_Algorithms_DOCSIS_3_0.pdf

On Feb 20, 2014, at 6:48 AM, Akhtar, Shahid (Shahid) 
shahid.akh...@alcatel-lucent.com 
wrote:

Hi Nicholas and other authors,

Thank you for producing this document. I had some (long) comments where I have 
also suggested some solutions/text, hopefully you find them helpful. When 
quoting from the document, I have used quotes and I have listed my comments as 
1-4.


1. The number of tests recommended per AQM scheme seems to be quite large, and 
many tests that might best be done together are done independently.

For example, there are different tests for fairness between flows (section 
3.2.3), level of traffic (mild, medium, heavy) (section 3.2.4), different types 
of bursty traffic (3.2.2) and different network

Re: [aqm] [AQM Evaluation Guidelines]

2014-02-19 Thread Akhtar, Shahid (Shahid)
Hi Nicholas and other authors,

Thank you for producing this document. I had some (long) comments where I have 
also suggested some solutions/text, hopefully you find them helpful. When 
quoting from the document, I have used quotes and I have listed my comments as 
1-4.


1. The number of tests recommended per AQM scheme seems to be quite large, and 
many tests that might best be done together are done independently.

For example, there are different tests for fairness between flows (section 
3.2.3), level of traffic (mild, medium, heavy) (section 3.2.4), different types 
of bursty traffic (3.2.2) and different network environments (section 3.3). In 
order to obtain a realistic estimate of AQM comparison, many of these effects 
may need to be tested together. For example to increase level of traffic (in 
section 3.2.4) only long term TCP flows of 50s are proposed. It would be better 
to test the level of congestion with realistic traffic (and with realistic 
proportions) as proposed in section 3.2.4. Another example would be to test 
fairness between flows of same type - e.g. long term TCP flows when multiple 
types of TCP flows are present at an AQM-controlled buffer, or test fairness 
between flows of different types - e.g. long term TCP flows and web flows. 

I noticed that in section 3.3 different network environments are mentioned. I 
suggest that tests with appropriate realistic traffic be done on each of these 
enviroments with various levels of congestion. One could measure fairness from 
these tests without the need for additional tests. This may have the benefit of 
reducing overall number of tests as well.


2. There are four network environments mentioned in the document (section 3.3):
Wifi - in home network.
Data-Center communications.
BB wireless/satellite.
Low and high buffers

This is a good set; however, I recommend adding:

Access networks - and further split access networks into DSL/FTTH networks and 
Cable access networks. Cable access networks may have a much larger number of 
flows than DSL/FTTH since they are shared by a large number of customers

There is another type of network environment - narrowband wireless (mobile) 
networks where the AQM could be placed on the eNodeB, but perhaps this is a 
complicated scenario, and best left for later or in another document.


3. I noticed that the modeling of different types of realistic traffic could be 
made more accurate (section 3.2.2)

Bulk TCP transfer: (continuous file transmission, or repeating 5MB file 
transmission);
The largest type of traffic on the Internet that uses bulk TCP - or large files 
via TCP is progressive download. The average speed of a Youtube video is over 
3Mbps and it is on average 4.5 minutes long - so the average file size is 
considerably larger than 5MB. We have used an average of 38MB assuming 480p 
video and 4.5 minutes for some of our experiments. The inter-arrival time of 
these files should have a realistic random distribution as well as size of the 
files. 

Realistic HTTP web traffic (repeated download of 700kB);
According to statistics that Google collected from 2010 about web traffic, the 
average GET file size for web traffic was only 7KB. It is very important to 
have the size of these files correct since AQMs can help transfers of small 
files when large file downloads are also occurring. The effect of AQM on a 7KB 
file would be very different than on a 700KB file. Web traffic also has a 
complex inter-arrival time relationship between the GET requests as well as the 
pages that produce these GET requests. However to make the testing simpler, a 
repeated download of a 7KB file with a Pareto distributed size and Pareto 
distributed inter-arrival time may be an OK compromise.
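[Editor's note: as a concrete reading of that compromise, here is a sketch of a Pareto generator scaled so the mean GET size matches the ~7KB Google figure quoted above. The shape parameter alpha=1.5 is an assumption for illustration; the thread only says "Pareto distributed".]

```python
import random

def pareto_sizes_kb(n, mean_kb=7.0, alpha=1.5, seed=None):
    """Pareto(alpha, xm) has mean alpha*xm/(alpha-1) for alpha > 1,
    so pick the minimum xm that hits the requested mean size."""
    rng = random.Random(seed)
    xm = mean_kb * (alpha - 1) / alpha
    # random.paretovariate(alpha) draws a Pareto with minimum 1;
    # scale it by xm to get the desired minimum (and hence mean)
    return [xm * rng.paretovariate(alpha) for _ in range(n)]
```

Inter-arrival times could be drawn the same way with a separate mean, per the suggestion above.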

I noticed that adaptive streaming is not modelled as a type of realistic 
traffic. It is probably the most important and largest flow on the Internet 
today - if we look at a recent Sandvine report of Internet traffic, about 75% 
of Internet traffic is real-time entertainment of which the largest component 
is adaptive streaming based video traffic. In order to simulate adaptive 
streaming traffic, one would need a client that changes video quality rates. To 
keep testing simpler, an option may be to generate realistic chunk sized files 
at regular intervals. Average netflix speeds are around 2Mbps. Typical chunk 
sizes range from 2s to 10s, but 4s may be a good average. The chunk sizes 
should be varied with some random distribution as is typically observed from 
real codecs. 
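[Editor's note: that suggestion could be sketched as follows. The ±20% uniform variation stands in for "some random distribution"; matching real codec output would need a measured distribution.]

```python
import random

def has_chunks_bytes(n, rate_bps=2e6, chunk_s=4.0, jitter=0.2, seed=None):
    """Chunk sizes for a ~2 Mbit/s stream cut into 4 s segments,
    varied uniformly by +/- jitter to mimic VBR codec output."""
    rng = random.Random(seed)
    nominal = rate_bps * chunk_s / 8.0  # bytes per chunk (1 MB at these defaults)
    return [nominal * rng.uniform(1 - jitter, 1 + jitter) for _ in range(n)]
```

Emitting one such chunk every 4 s per client would approximate the steady-state request pattern without implementing a rate-adaptation algorithm.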


4. In section 2.2.5, QoE metrics are mentioned. It may be valuable to spell out 
some standard models or expressions to derive QoE from network parameters so 
that QoE can be compared between AQMs. In the literature there is work 
that derives closed form expressions for QoE. The following are examples and 
suggestions:

Video - the work by Johan De Vriendt et al., "Model for estimating QoE of Video 
delivered using HTTP Adaptive Streaming", 

Re: [aqm] IETF88 Fri 08Nov13 - 12:30 Regency B

2013-11-08 Thread Akhtar, Shahid (Shahid)
 to their importance to the end user.

Suggestion on best buffer sizes
Research should make suggestions on how to configure buffer sizes with each 
type of AQM (e.g. 2xBDP etc) - explaining why/how such buffer sizes improve 
end-user QoE and network health.

FB: I'm not sure that buffer sizes are specific to AQM algorithms; I'd 
entertain evidence otherwise. Buffer *thresholds* (at what point do we start 
dropping/marking traffic?) may differ between algorithms. Buffer size (how 
many bytes/packets do we allow into the queue in the worst case?) is a matter 
of the characteristics of burst behavior in a given network and the 
applications it supports. If I have, say, a Map/Reduce application that 
simultaneously asks thousands of systems a question, the queues in the 
intervening switches will need to be able to briefly absorb thousands of 
response packets. The key word here is briefly. When Van or Kathy talk about 
good queue and bad queue, they are saying that burst behavior may call for 
deep queues, but we really want the steady state to achieve 100% utilization 
with a statistically empty queue if we can possibly achieve that.

SA: In access situations (with AQM), buffer sizes can determine the size of a 
burst that the buffer can absorb, and our research shows that this can directly 
impact end-user QoE.

Best way to leverage deployed AQMs
Research should be done on methods or configurations that leverage deployed 
AQMs such as RED/WRED to reduce delays and lockout for typical traffic which 
require minimal effort or tuning from the operator.

FB: Not a complete sentence, but I think I understand what you're getting at. 
You would like to have research determine how to easily configure existing 
systems using the tools at hand. I'm all for it in the near term.

SA: Glad we agree on this - I assume some update will be made around this topic.


-Original Message-
From: Fred Baker (fred) [mailto:f...@cisco.com]
Sent: Thursday, November 07, 2013 12:45 PM
To: Akhtar, Shahid (Shahid)
Cc: Richard Scheffenegger; aqm@ietf.org; Naeem Khademi (nae...@ifi.uio.no); 
Gorry Fairhurst; Wesley Eddy
Subject: Re: IETF88 Fri 08Nov13 - 12:30 Regency B


On Nov 7, 2013, at 8:59 AM, Akhtar, Shahid (Shahid) 
shahid.akh...@alcatel-lucent.com wrote:

 Hi All,

 Had some comments on Fred's document. I have added the comments as track 
 changes in a word document to easily see them. I used the 02 version.

 Thanks.


Permit me to put your comments in email, along with my own views. Also adding 
my co-author and the other working group chair on the CC line; if he is like 
me, he receives far too much email, and mail that is explicitly to or copies 
him bubbles higher in the column.

 4.  Conclusions and Recommendations
   [snip]
3.  The algorithms that the IETF recommends SHOULD NOT require
operational (especially manual) configuration or tuning.

 Some tuning may be required or implicitly assumed for virtually all AQMs - 
 please see my comment later.

FB: That's an opinion. One of the objectives of Van and Kathy's work, and 
separately of Rong Pan et al's work, is to design an algorithm that may have 
different initial conditions drawn from a table given the interface it finds 
itself on, but requires no manual tuning. The great failure of RED, recommended 
in RFC 2309, is not that it doesn't work when properly configured; it's that 
real humans don't have the time to properly tune it differently for each of the 
thousands of link endpoints in their networks. There is no point in changing 
away from RED if that is also true of the replacement.

SA: You argue that initial conditions can determine some of the parameters of 
newer AQMs (like CoDel and PIE); if so, those same initial conditions would also 
determine the key parameters for RED/WRED.

Can you explain further, with a realistic example, why real humans don't have 
the time to properly tune it differently for each of the thousands of link 
endpoints in their networks?

7.  Research, engineering, and measurement efforts are needed
regarding the design of mechanisms to deal with flows that are
unresponsive to congestion notification or are responsive, but
are more aggressive than present TCP.

    Do we want to make a suggestion on how to configure buffer sizes with 
 each type of AQM here (e.g. 2xBDP etc), or simply state that research should 
 be conducted on the best buffer sizes to use with AQM?

FB: I'm not sure that buffer sizes are specific to AQM algorithms; I'd 
entertain evidence otherwise. Buffer *thresholds* (at what point do we start 
dropping/marking traffic?) may differ between algorithms. Buffer size (how 
many bytes/packets do we allow into the queue in the worst case?) is a matter 
of the characteristics of burst behavior in a given network and the 
applications it supports. If I have, say, a Map/Reduce application that 
simultaneously asks thousands of systems a question, the queues in the 
intervening switches will need