On Jul 2, 2014, at 9:07 AM, Akhtar, Shahid (Shahid) 
<shahid.akh...@alcatel-lucent.com> wrote:

> Hi Wes,
> 
> Can you share the update/text that John Leslie had suggested which Fred 
> mentions in his comment.
> 
> Thanks,
> 
> -Shahid.

I have attached the text John sent yesterday. It is derived from -05. I *think* 
he would like to replace sections 1-3 of draft-ietf-aqm-recommendations with 
the content of that text, and then reconsider the additional text in -06 to see 
if it is still required and if so where it might go.

1.  Introduction

   RFC 2309 introduces the concept of "Active Queue Management" and
   describes one AQM algorithm. Since 1998, other AQM algorithms
   have come into use. This document updates RFC 2309 where needed,
   and gives recommendations for future AQM algorithms.

   Section 3 of RFC 2309 describes Random Early Detection ("RED").

   Similar algorithms are specified for other non-TCP transports.

   
   RFC 2309 makes recommendations for "routers." Today these same
   recommendations apply to a number of other network devices,
   including switches, tunnel endpoints, Network Address Translation
   ("NAT") devices, and other middleboxes, which pass packets,
   whether unchanged or modified, into outgoing queues.

   A number of AQM procedures are described in the literature
   with different characteristics.  This document does not
   recommend any of them in particular, but does make
   recommendations that ideally would affect the choice of
   procedure used in a given implementation.

   Methods such as congestion exposure (ConEx) [RFC6789] offer a
   framework [CONEX] that can update network devices to alleviate
   the effects of unresponsive flows.

   The discussion in this memo applies to "best-effort" traffic, which
   is to say, traffic generated by applications that accept the
   occasional loss, duplication, or reordering of traffic in flight.  It
   also applies to other traffic, such as real-time traffic that can
   adapt its sending rate to reduce loss and/or delay.  It is most
   effective when the adaptation occurs on time scales of a single Round
   Trip Time (RTT) or a small number of RTTs, for elastic traffic
   [RFC1633].

   [RFC2309] resulted from past discussions of end-to-end performance,
   Internet congestion, and Random Early Detection (RED) in the End-to-End
   Research Group of the Internet Research Task Force (IRTF).  This
   update results from experience with this and other algorithms, and
   the AQM discussion within the IETF [AQM-WG].

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

2.  The Need For Active Queue Management

   Active Queue Management (AQM) is a method that allows network devices
   to control the queue length or the mean time that a packet spends in
   a queue.  Although AQM can be applied across a range of deployment
   environments, the recommendations in this document are directed to use
   in the general Internet.  It is expected that the principles and
   guidance are also applicable to a wide range of environments, but may
   require tuning for specific types of link/network (e.g. to
   accommodate the traffic patterns found in data centres, the
   challenges of wireless infrastructure, or the higher delay
   encountered on satellite Internet links).
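
   As a rough, non-normative illustration of what controlling the time a
   packet spends in a queue can look like, the following Python sketch
   drops a packet at dequeue when its queueing delay has stayed above a
   target for longer than an interval.  The class, parameters, and
   values are assumptions made for this example and are not drawn from
   any specific published algorithm.

      # A minimal, illustrative delay-based dequeue check.  TARGET and
      # INTERVAL are assumed example values.
      import time
      from collections import deque

      TARGET = 0.005      # acceptable queueing delay, seconds (assumed)
      INTERVAL = 0.100    # grace period before dropping, seconds (assumed)

      class DelayAQMQueue:
          def __init__(self):
              self.q = deque()              # (enqueue_time, packet) pairs
              self.over_target_since = None

          def enqueue(self, packet):
              self.q.append((time.monotonic(), packet))

          def dequeue(self):
              while self.q:
                  enq_time, packet = self.q.popleft()
                  sojourn = time.monotonic() - enq_time
                  if sojourn <= TARGET:
                      self.over_target_since = None   # delay acceptable again
                      return packet
                  now = time.monotonic()
                  if self.over_target_since is None:
                      self.over_target_since = now    # start timing the excursion
                  if now - self.over_target_since < INTERVAL:
                      return packet                   # tolerate a short burst
                  # Delay stayed above TARGET for a whole INTERVAL: drop this
                  # packet, restart the timer, and try the next one.
                  self.over_target_since = now
              return None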

   Section 2 of RFC 2309 discusses "tail drop" -- dropping the
   most-recently-received packet when the queue is full -- as well as
   "drop front on full" and "random-drop-on-full".  The reader may wish
   to review that section.
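
   For readers who do not wish to return to RFC 2309, the three
   drop-on-full policies can be sketched, non-normatively, in a few
   lines of Python; the queue limit is an assumed example value.

      # Illustrative sketch of the three drop-on-full policies named above.
      import random
      from collections import deque

      LIMIT = 100                   # maximum queue length in packets (assumed)
      q = deque()

      def enqueue_tail_drop(packet):
          """Discard the newly arriving packet when the queue is full."""
          if len(q) >= LIMIT:
              return False          # arriving packet is dropped
          q.append(packet)
          return True

      def enqueue_drop_front_on_full(packet):
          """Discard the oldest queued packet to make room for the arrival."""
          if len(q) >= LIMIT:
              q.popleft()
          q.append(packet)
          return True

      def enqueue_random_drop_on_full(packet):
          """Discard a randomly chosen queued packet to make room."""
          if len(q) >= LIMIT:
              del q[random.randrange(len(q))]
          q.append(packet)
          return True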

   Congestion control, like other end-to-end mechanisms, introduces
   a control loop between hosts.  Sessions that share a common network
   bottleneck can therefore become synchronised, introducing
   periodic disruption (e.g., jitter/loss).  "Lock-out" is often also
   the result of synchronisation or other timing effects.

   AQM can be combined with a scheduling mechanism that divides
   network traffic between multiple queues (Section 2.1).

   The probability of network control loop synchronisation can be
   reduced by introducing randomness into the AQM functions that
   network devices use to trigger congestion avoidance at the sending
   host.
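
   One common way to introduce such randomness is to make the drop (or
   ECN-mark) decision probabilistic, with a probability that grows as
   the averaged queue builds.  The sketch below uses a simple linear
   ramp between two thresholds; the parameter names and values are
   assumed for illustration, and this is not a recommendation of any
   particular algorithm.

      # Illustrative sketch: probabilistic early drop as a function of the
      # average queue length.  MIN_TH, MAX_TH and MAX_P are assumed values.
      import random

      MIN_TH = 20      # packets: below this, never drop (assumed)
      MAX_TH = 80      # packets: at or above this, always drop (assumed)
      MAX_P = 0.1      # drop probability as avg_qlen approaches MAX_TH (assumed)

      def should_drop(avg_qlen):
          """Return True if an arriving packet should be dropped (or marked)."""
          if avg_qlen < MIN_TH:
              return False
          if avg_qlen >= MAX_TH:
              return True
          p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
          return random.random() < p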

2.1.  AQM and Multiple Queues

   A network device may use per-flow or per-class queuing with a
   scheduling algorithm to either prioritise certain applications or
   classes of traffic, or to provide isolation between different traffic
   flows within a common class.  For example, a router may maintain per-
   flow state to achieve general fairness by a per-flow scheduling
   algorithm such as various forms of Fair Queueing (FQ) [Dem90],
   including Weighted Fair Queuing (WFQ), Stochastic Fairness Queueing
   (SFQ) [McK90], Deficit Round Robin (DRR) [Shr96], and/or a Class-Based
   Queue scheduling algorithm such as CBQ [Floyd95].  Hierarchical
   queues may also be used, e.g., as part of a Hierarchical Token
   Bucket (HTB) or Hierarchical Fair Service Curve (HFSC) [Sto97].
   These methods are also used to realise a range of Quality of Service
   (QoS) behaviours designed to meet the needs of traffic classes (e.g.,
   using the integrated or differentiated services models).

   Combining AQM with scheduling between multiple queues has been shown
   to offer good results in experimental settings and in some types of
   operational use.
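
   A hedged sketch of how such a combination can be structured is given
   below: packets are placed into per-flow queues, and a simplified
   deficit-round-robin scheduler chooses which queue to serve next;
   each queue could additionally run its own AQM instance.  The quantum
   value and data structures are assumptions made for this example, not
   a description of any particular product or algorithm.

      # Illustrative sketch: per-flow queues served by a simplified
      # deficit-round-robin scheduler.  Per-queue AQM checks at enqueue or
      # dequeue are omitted for brevity.
      from collections import deque, defaultdict

      QUANTUM = 1500                  # byte credit added per round (assumed)

      queues = defaultdict(deque)     # flow_id -> deque of (length, packet)
      deficits = defaultdict(int)     # flow_id -> accumulated byte credit
      active = deque()                # flows that currently have packets queued

      def enqueue(flow_id, packet, length):
          if not queues[flow_id]:
              active.append(flow_id)
          queues[flow_id].append((length, packet))

      def dequeue():
          """Return the next packet to send, or None if all queues are empty."""
          while active:
              flow_id = active[0]
              length, packet = queues[flow_id][0]
              if deficits[flow_id] < length:
                  # Not enough credit: top it up and move this flow to the
                  # back of the round.
                  deficits[flow_id] += QUANTUM
                  active.rotate(-1)
                  continue
              deficits[flow_id] -= length
              queues[flow_id].popleft()
              if not queues[flow_id]:
                  deficits[flow_id] = 0   # idle flows keep no credit
                  active.popleft()        # flow re-enters on its next enqueue
              return packet
          return None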

2.2.  AQM and Explicit Congestion Notification (ECN)

   An AQM method may use Explicit Congestion Notification (ECN)
   [RFC3168] to mark packets under mild or moderate congestion instead
   of dropping them.  ECN-marking can allow a network device to signal
   congestion at a point before a transport experiences congestion loss
   or additional queuing delay [ECN-Benefit].  Section 4.2.1 describes
   some of the benefits of using ECN with AQM.
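
   A minimal sketch of the mark-instead-of-drop decision is shown
   below: when the AQM algorithm signals congestion, an ECN-capable
   packet (ECT(0) or ECT(1)) is re-marked CE rather than discarded,
   while a Not-ECT packet is still dropped.  The codepoint values
   follow [RFC3168]; the packet representation is an assumption for the
   example.

      # Illustrative sketch of marking instead of dropping for ECN-capable
      # packets.  ECN field codepoints per RFC 3168; the dict-based packet
      # representation is assumed for the example.
      NOT_ECT = 0b00
      ECT_1   = 0b01
      ECT_0   = 0b10
      CE      = 0b11

      def apply_congestion_signal(packet):
          """Return the packet (possibly CE-marked) to forward, or None to drop."""
          if packet["ecn"] in (ECT_0, ECT_1, CE):
              packet["ecn"] = CE     # ECN-capable: mark instead of dropping
              return packet
          return None                # Not-ECT: fall back to dropping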

2.3.  AQM and Buffer Size

   It is important to differentiate between the choice of buffer size
   for a queue in a switch/router or other network device, and the
   threshold(s) and other parameters that determine how and when an AQM
   algorithm operates.  On the one hand, the optimum buffer size is a
   function of operational requirements and should generally be sized to
   be sufficient to buffer the largest normal traffic burst that is
   expected.  This size depends on the number and burstiness of traffic
   arriving at the queue and the rate at which traffic leaves the queue.
   Different types of traffic and deployment scenarios will lead to
   different requirements.
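
   As a hedged, back-of-the-envelope illustration of how the link rate
   and round-trip time drive that sizing, the classic bandwidth-delay-
   product rule of thumb can be computed as follows; the figures are
   assumed example values, not recommendations from this document.

      # Back-of-the-envelope buffer sizing with the bandwidth-delay product.
      # All figures are assumed example values.
      link_rate_bps = 100e6        # 100 Mbit/s bottleneck link (assumed)
      rtt_s = 0.050                # 50 ms round-trip time (assumed)
      mean_packet_bytes = 1000     # assumed average packet size

      bdp_bytes = link_rate_bps * rtt_s / 8
      print("BDP: %.0f bytes (~%d packets)"
            % (bdp_bytes, bdp_bytes // mean_packet_bytes))
      # Prints: BDP: 625000 bytes (~625 packets).  With AQM keeping the
      # standing queue small, the physical buffer can safely be larger than
      # this in order to absorb bursts.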

   AQM frees a designer from having to limit buffer space to achieve
   acceptable performance, allowing allocation of sufficient buffering
   to satisfy the needs of the particular traffic pattern.  On the other
   hand, the choice of AQM algorithm and associated parameters is a
   function of the way in which congestion is experienced and the
   required reaction to achieve acceptable performance.  This latter
   topic is the primary topic of the following sections.

3.  Managing Aggressive Flows

   Section 4 of RFC 2309 discusses the management of aggressive flows.

   Since RFC 2309 was written in 1998, the concept of "TCP-friendly"
   has generally replaced the concept of "TCP-compatible."

   In this document a flow is known as "TCP-friendly" when it has a
   congestion response that approximates the average response expected
   of a TCP flow.  One example of a TCP-friendly scheme is the
   TCP-Friendly Rate Control (TFRC) algorithm [RFC5348].  In this
   document, the term is used more generally to describe this and other
   algorithms that meet these goals.

   A TCP-friendly flow responds to congestion notification within a
   small number of path Round Trip Times (RTT), and in steady-state
   it uses no more capacity than a conformant TCP running under
   comparable conditions (drop rate, RTT, packet size, etc.).
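
   As a non-normative illustration, the "no more capacity than a
   conformant TCP" condition is often made concrete with a simplified
   steady-state TCP throughput model; [RFC5348] uses a fuller equation
   that also accounts for retransmission timeouts.  The sketch below
   uses the simpler square-root form with assumed example inputs.

      # Illustrative sketch: a simplified TCP-friendly rate estimate
      # (square-root approximation; RFC 5348 defines a fuller equation that
      # also models timeouts).
      import math

      def tcp_friendly_rate(segment_bytes, rtt_s, loss_rate):
          """Approximate steady-state TCP throughput in bytes per second."""
          return segment_bytes / (rtt_s * math.sqrt(2.0 * loss_rate / 3.0))

      # Assumed example inputs: 1460-byte segments, 100 ms RTT, 1% loss.
      print("%.0f bytes/s" % tcp_friendly_rate(1460, 0.100, 0.01))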

   The User Datagram Protocol (UDP) [RFC0768] provides a minimal,
   best-effort transport to applications and upper-layer protocols
   (both simply called "applications" in the remainder of this
   document) and does not itself establish a degree of fairness [RFC5405].

   Some applications (e.g. current web browsers) open a large
   number of short TCP flows for a single session.  This can
   lead to each individual flow spending the majority of time in the
   exponential TCP slow start phase, rather than in TCP congestion
   avoidance.  The resulting traffic aggregate can therefore be much
   less responsive than a single standard TCP flow.

   An RTP/UDP video flow that uses an adaptive codec but responds
   incompletely to indications of congestion or responds over an
   excessively long time period may not be responsive to congestion
   signals in a timeframe comparable to a small number of end-to-end
   transmission delays.  However, over a longer timescale, perhaps
   seconds in duration, such a flow can moderate its rate, or increase
   its rate if it determines capacity to be available.

   Tunneled traffic aggregates carrying multiple (short) TCP flows
   can be more aggressive than standard bulk TCP.  Applications
   (e.g. web browsers and peer-to-peer file-sharing) have exploited
   this by opening multiple connections to the same endpoint.
