3. Managing Aggressive Flows
One of the keys to the success of the Internet has been the
congestion avoidance mechanisms of TCP. Because TCP "backs off"
during congestion, a large number of TCP connections can share a
single, congested link in such a way that link bandwidth is shared
reasonably equitably among similarly situated flows. The equitable
sharing of bandwidth among flows depends on all flows running
compatible congestion avoidance algorithms, i.e., methods conformant
with the current TCP specification [RFC5681].
In this document, a flow is known as "TCP-friendly" when it has a
congestion response that approximates the average response expected
of a TCP flow. One example of a TCP-friendly scheme is the
TCP-Friendly Rate Control (TFRC) algorithm [RFC5348]. In this
document, the term is used more generally to describe this and other
algorithms that meet this goal.
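As a rough illustration of what "approximates the average response
expected of a TCP flow" means, the Python sketch below evaluates the
simplified TCP throughput equation used by TFRC [RFC5348]. The
example values are illustrative only, and this is a sketch, not a
complete TFRC implementation.

   from math import sqrt

   def tfrc_rate(s, rtt, p, b=1):
       """Simplified TCP throughput equation from RFC 5348, Sec. 3.1.

       s   -- segment size in bytes
       rtt -- round-trip time in seconds
       p   -- loss event rate (0 < p <= 1)
       b   -- packets acknowledged per ACK (1 without delayed ACKs)

       Returns an estimate, in bytes per second, of the rate a
       conformant TCP would achieve under the same conditions.
       """
       t_rto = 4 * rtt  # usual simplification suggested by RFC 5348
       return s / (rtt * sqrt(2 * b * p / 3) +
                   t_rto * 3 * sqrt(3 * b * p / 8) * p *
                   (1 + 32 * p ** 2))

   # Illustrative values: 1460-byte segments, 100 ms RTT, 1% loss
   print(round(tfrc_rate(1460, 0.100, 0.01)))
   # ~164000 bytes/s (about 1.3 Mbit/s) for these example values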
There are a variety of types of network flow. Some convenient
classes that describe flows are: (1) TCP-friendly flows, (2)
unresponsive flows, i.e., flows that do not slow down when congestion
occurs, and (3) flows that are responsive but are less responsive to
congestion than TCP. The last two classes contain more aggressive
flows that can pose significant threats to Internet performance.
1. TCP-friendly flows
A TCP-friendly flow responds to congestion notification within a
small number of path RTTs, and in steady-state it uses no more
capacity than a conformant TCP running under comparable
conditions (drop rate, RTT, packet size, etc.). This is
described in the remainder of the document.
2. Non-responsive flows
A non-responsive flow does not adjust its rate in response to
congestion notification within a small number of path RTTs; it
can also use more capacity than a conformant TCP running under
comparable conditions. There is a growing set of applications
whose congestion avoidance algorithms are inadequate or
nonexistent (i.e., flows that do not throttle their sending rate
when they experience congestion).
The User Datagram Protocol (UDP) [RFC768] provides a minimal,
best-effort transport to applications and upper-layer protocols
(both simply called "applications" in the remainder of this
document) and does not itself provide mechanisms to prevent
congestion collapse or establish a degree of fairness [RFC5405].
Examples that use UDP include some streaming applications for
packet voice and video, and some multicast bulk data transport.
Other traffic, when aggregated, may also become unresponsive to
congestion notification. If no action is taken, such
unresponsive flows could lead to a new congestion collapse
[RFC2914]. Some applications can even increase their traffic
volume in response to congestion (e.g., by adding Forward Error
Correction when loss is experienced), with the possibility that
they contribute to congestion collapse.
In general, applications need to incorporate effective congestion
avoidance mechanisms [RFC5405]. Research continues to be needed
to identify and develop ways to accomplish congestion avoidance
for presently unresponsive applications. Network devices need to
be able to protect themselves against unresponsive flows, and
mechanisms to accomplish this must be developed and deployed.
Deployment of such mechanisms would provide an incentive for all
applications to become responsive by either using a congestion-
controlled transport (e.g., TCP, SCTP [RFC4960], and DCCP
[RFC4340]) or incorporating their own congestion control in the
application [RFC5405] [RFC6679].
3. Transport flows that are less responsive than TCP
A second threat is posed by transport protocol implementations
that are responsive to congestion, but, either deliberately or
through faulty implementation, reduce the effective window less
than a TCP flow would have done in response to congestion. This
covers a spectrum of behaviors between (1) and (2). If
applications are not sufficiently responsive to congestion
signals, they may gain an unfair share of the available network
capacity.
For example, the popularity of the Internet has caused a
proliferation in the number of TCP implementations. Some of
these may fail to implement the TCP congestion avoidance
mechanisms correctly because of poor implementation. Others may
deliberately be implemented with congestion avoidance algorithms
that are more aggressive in their use of capacity than other TCP
implementations; this would allow a vendor to claim to have a
"faster TCP". The logical consequence of such implementations
would be a spiral of increasingly aggressive TCP implementations,
leading back to the point where there is effectively no
congestion avoidance and the Internet is chronically congested.
Another example could be an RTP/UDP video flow that uses an
adaptive codec, but responds incompletely to indications of
congestion or responds over an excessively long time period.
Such flows are unlikely to be responsive to congestion signals in
a time frame comparable to a small number of end-to-end
transmission delays. However, over a longer timescale, perhaps
seconds in duration, they could moderate their speed, or increase
their speed if they determine capacity to be available.
Tunneled traffic aggregates carrying multiple (short) TCP flows
can be more aggressive than standard bulk TCP. Applications
(e.g., web browsers primarily supporting HTTP 1.1 and peer-to-
peer file-sharing) have exploited this by opening multiple
connections to the same endpoint.
Lastly, some applications (e.g., web browsers primarily
supporting HTTP 1.1) open a large number of successive short TCP
flows for a single session. This can lead to each individual
flow spending the majority of time in the exponential TCP slow
start phase, rather than in TCP congestion avoidance. The
resulting traffic aggregate can therefore be much less responsive
than a single standard TCP flow.
The projected increase in the fraction of total Internet traffic for
more aggressive flows in classes 2 and 3 could pose a threat to the
performance of the future Internet. There is therefore an urgent
need for measurements of current conditions and for further research
into the ways of managing such flows. This raises many difficult
issues in finding methods with an acceptable overhead cost that can
identify and isolate unresponsive flows or flows that are less
responsive than TCP. Finally, there is as yet little measurement or
simulation evidence available about the rate at which these threats
are likely to be realized or about the expected benefit of algorithms
for managing such flows.
Another topic requiring consideration is the appropriate granularity
of a "flow" when considering a queue management method. There are a
few "natural" answers: 1) a transport (e.g., TCP or UDP) flow (source
address/port, destination address/port, protocol); 2) Differentiated
Services Code Point, DSCP; 3) a source/destination host pair (IP
address); 4) a given source host or a given destination host, or
various combinations of the above; 5) a subscriber or site receiving
the Internet service (enterprise or residential).
The source/destination host pair gives an appropriate granularity in
many circumstances. However, different vendors/providers use
different granularities for defining a flow (as a way of
"distinguishing" themselves from one another), and different
granularities may be chosen for different places in the network. It
may be the case that the granularity is less important than the fact
that a network device needs to be able to deal with more unresponsive
flows at *some* granularity. The granularity of flows for congestion
management is, at least in part, a question of policy that needs to
be addressed in the wider IETF community.
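To illustrate, the Python sketch below maps a packet to a flow
identifier at several of the granularities listed above. The field
names are assumptions made for the purpose of illustration and are
not drawn from any particular implementation.

   def flow_key(pkt, granularity="transport"):
       """Return a flow identifier for 'pkt' at a chosen granularity.

       'pkt' is assumed to be a dict holding already-parsed header
       fields; a real device would extract these from the IP and
       transport headers.
       """
       if granularity == "transport":     # 5-tuple: per TCP/UDP flow
           return (pkt["src_ip"], pkt["src_port"],
                   pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
       if granularity == "dscp":          # per Diffserv codepoint
           return (pkt["dscp"],)
       if granularity == "host_pair":     # source/destination pair
           return (pkt["src_ip"], pkt["dst_ip"])
       if granularity == "dst_host":      # per destination host
           return (pkt["dst_ip"],)
       if granularity == "subscriber":    # per subscriber or site
           return (pkt["subscriber_id"],)
       raise ValueError("unknown granularity")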
4. Conclusions and Recommendations
The IRTF, in producing [RFC2309], and the IETF in subsequent
discussion, have developed a set of specific recommendations
regarding the implementation and operational use of AQM procedures.
The recommendations provided by this document are summarized as:
1. Network devices SHOULD implement some AQM mechanism to manage
queue lengths, reduce end-to-end latency, and avoid lock-out
phenomena within the Internet.
2. Deployed AQM algorithms SHOULD support Explicit Congestion
Notification (ECN) as well as loss to signal congestion to
endpoints.
3. AQM algorithms SHOULD NOT require tuning of initial or
configuration parameters in common use cases.
4. AQM algorithms SHOULD respond to measured congestion, not
application profiles.
5. AQM algorithms SHOULD NOT interpret specific transport protocol
behaviors.
6. Congestion control algorithms for transport protocols SHOULD
maximize their use of available capacity (when there is data to
send) without incurring undue loss or undue round-trip delay.
7. Research, engineering, and measurement efforts are needed
regarding the design of mechanisms to deal with flows that are
unresponsive to congestion notification or are responsive, but
are more aggressive than present TCP.
These recommendations are expressed using the word "SHOULD". This is
in recognition that there may be use cases that have not been
envisaged in this document in which the recommendation does not
apply. Therefore, care should be taken in concluding that one's use
case falls in that category; during the life of the Internet, such
use cases have been rarely, if ever, observed and reported. To the
contrary, available research [Choi04] says that even high-speed links
in network cores that are normally very stable in depth and behavior
experience occasional issues that need moderation. The
recommendations are detailed in the following sections.
4.1. Operational Deployments SHOULD Use AQM Procedures
AQM procedures are designed to minimize the delay and buffer
exhaustion induced in the network by queues that have filled as a
result of host behavior. Marking and loss behaviors provide a signal
that buffers within network devices are becoming unnecessarily full
and that the sender would do well to moderate its behavior.
The use of scheduling mechanisms, such as priority queueing, classful
queueing, and fair queueing, is often effective in networks to help a
network serve the needs of a range of applications. Network
operators can use these methods to manage traffic passing a choke
point. This is discussed in [RFC2474] and [RFC2475]. When
scheduling is used, AQM should be applied across the classes or flows
as well as within each class or flow:
o AQM mechanisms need to control the overall queue sizes to ensure
that arriving bursts can be accommodated without dropping packets.
o AQM mechanisms need to allow combination with other mechanisms,
such as scheduling, to allow implementation of policies for
providing fairness between different flows.
o AQM should be used to control the queue size for each individual
flow or class, so that they do not experience unnecessarily high
delay.
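The sketch below illustrates this combination: a simple two-class
priority scheduler in which an AQM decision is taken within each
class queue and the overall buffer occupancy is also bounded. It is
a simplified model under assumed class names and thresholds, not a
reference design.

   import random

   class ClassQueue:
       """One class queue with its own simple AQM threshold."""
       def __init__(self, aqm_threshold):
           self.packets = []
           self.aqm_threshold = aqm_threshold  # packets (illustrative)

       def aqm_accepts(self, pkt):
           # Per-class AQM: drop with rising probability as the
           # class queue grows beyond its threshold.
           excess = len(self.packets) - self.aqm_threshold
           if excess <= 0:
               return True
           return random.random() > min(1.0, excess / self.aqm_threshold)

   class PriorityScheduler:
       """Strict-priority scheduler with per-class AQM and an
       overall limit on queued packets (both illustrative)."""
       def __init__(self, total_limit=1000):
           self.classes = {"voice": ClassQueue(20),
                           "best-effort": ClassQueue(100)}
           self.total_limit = total_limit

       def enqueue(self, cls, pkt):
           total = sum(len(q.packets) for q in self.classes.values())
           q = self.classes[cls]
           if total >= self.total_limit or not q.aqm_accepts(pkt):
               return False               # packet is dropped
           q.packets.append(pkt)
           return True

       def dequeue(self):
           for cls in ("voice", "best-effort"):
               if self.classes[cls].packets:
                   return self.classes[cls].packets.pop(0)
           return None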
4.2. Signaling to the Transport Endpoints
There are a number of ways a network device may signal to the
endpoint that the network is becoming congested and trigger a
reduction in rate. The signaling methods include:
o Delaying transport segments (packets) in flight, such as in a
queue.
o Dropping transport segments (packets) in transit.
o Marking transport segments (packets), such as using Explicit
Congestion Notification (ECN) [RFC3168] [RFC4301] [RFC4774] [RFC6040]
[RFC6679].
Increased network latency is used as an implicit signal of
congestion. For example, in TCP, additional delay can affect ACK
clocking and has the result of reducing the rate of transmission of
new data. In the Real-time Transport Protocol (RTP), network latency
impacts the RTCP-reported RTT, and increased latency can trigger a
sender to adjust its rate. Methods such as Low Extra Delay
Background Transport (LEDBAT) [RFC6817] assume increased latency as a
primary signal of congestion. Appropriate use of delay-based methods
and the implications of AQM presently remain an area for further
research.
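As an example of a delay-based response, the sketch below loosely
follows the LEDBAT window update [RFC6817]. The TARGET and GAIN
values are illustrative rather than the values specified there, and
the surrounding machinery (delay filtering, base-delay history) is
omitted.

   TARGET = 0.100  # target queueing delay in seconds (illustrative)
   GAIN = 1.0      # window gain (illustrative)

   def ledbat_like_update(cwnd, base_delay, current_delay,
                          bytes_acked, mss):
       """Adjust a congestion window from one-way delay samples.

       base_delay    -- smallest one-way delay seen (no queueing)
       current_delay -- most recent one-way delay sample
       The window grows while the estimated queueing delay is below
       TARGET and shrinks once it exceeds TARGET, so rising latency
       itself acts as the congestion signal.
       """
       queueing_delay = current_delay - base_delay
       off_target = (TARGET - queueing_delay) / TARGET
       cwnd += GAIN * off_target * bytes_acked * mss / cwnd
       return max(cwnd, mss)   # never shrink below one segment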
It is essential that all Internet hosts respond to loss [RFC5681]
[RFC5405] [RFC4960] [RFC4340]. Packet dropping by network devices
that are under load has two effects: It protects the network, which
is the primary reason that network devices drop packets. The
detection of loss also provides a signal to a reliable transport
(e.g., TCP, SCTP) that there is incipient congestion, using a
pragmatic but ambiguous heuristic: when the network
discards a message in flight, the loss may imply the presence of
faulty equipment or media in a path, or it may imply the presence of
congestion. To be conservative, a transport must assume it may be
the latter. Applications using unreliable transports (e.g., using
UDP) need to similarly react to loss [RFC5405].
Network devices SHOULD use an AQM algorithm to measure local
congestion and to determine the packets to mark or drop so that the
congestion is managed.
In general, dropping multiple packets from the same session in the
same RTT is ineffective and can reduce throughput. Also, dropping or
marking packets from multiple sessions simultaneously can have the
effect of synchronizing them, resulting in increasing peaks and
troughs in the subsequent traffic load. Hence, AQM algorithms SHOULD
randomize dropping in time, to reduce the probability that congestion
indications are only experienced by a small proportion of the active
flows.
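A minimal sketch of such randomization is shown below; it simply
draws an independent random number per packet against the AQM's
measured congestion level. It is an illustration of the principle
rather than any particular published algorithm.

   import random

   def should_drop_or_mark(congestion_level):
       """Randomized congestion-signaling decision.

       congestion_level is assumed to be a value in [0, 1] produced
       by the AQM's congestion measurement (e.g., derived from the
       queueing delay or average queue size).  Drawing a fresh random
       number for every packet spreads indications over time and
       across flows, avoiding runs of drops from one flow and
       avoiding synchronizing many flows at once.
       """
       return random.random() < congestion_level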
Loss due to dropping also has an effect on the efficiency of a flow
and can significantly impact some classes of application. In
reliable transports, the dropped data must be subsequently
retransmitted. While other applications/transports may adapt to the
absence of lost data, this still implies inefficient use of available
capacity, and the dropped traffic can affect other flows. Hence,
congestion signaling by loss is not entirely positive; it is a
necessary evil.
4.2.1. AQM and ECN
Explicit Congestion Notification (ECN) [RFC4301] [RFC4774] [RFC6040]
[RFC6679] is a network-layer function that allows a transport to
receive network congestion information from a network device without
incurring the unintended consequences of loss. ECN includes both
transport mechanisms and functions implemented in network devices;
the latter rely upon using AQM to decide when and whether to ECN-
mark.
Congestion for ECN-capable transports is signaled by a network device
setting the "Congestion Experienced (CE)" codepoint in the IP header.
This codepoint is noted by the remote receiving endpoint and signaled
back to the sender using a transport protocol mechanism, allowing the
sender to trigger timely congestion control. The decision to set the
CE codepoint requires an AQM algorithm configured with a threshold.
Non-ECN-capable flows (the default) are dropped under congestion.
Network devices SHOULD use an AQM algorithm that marks ECN-capable
traffic when making decisions about the response to congestion.
Network devices need to implement this method by marking ECN-capable
traffic or by dropping non-ECN-capable traffic.
Safe deployment of ECN requires that network devices drop excessive
traffic, even when marked as originating from an ECN-capable
transport. This is a necessary safety precaution because:
1. A non-conformant, broken, or malicious receiver could conceal an
ECN mark and not report this to the sender;
2. A non-conformant, broken, or malicious sender could ignore a
reported ECN mark, as it could ignore a loss without using ECN;
3. A malfunctioning or non-conforming network device may "hide" an
ECN mark (or fail to correctly set the ECN codepoint at an egress
of a network tunnel).
In normal operation, such cases should be very uncommon; however,
overload protection is desirable to protect traffic from
misconfigured or malicious use of ECN (e.g., a denial-of-service
attack that generates ECN-capable traffic that is unresponsive to CE-
marking).
When ECN is added to a scheme, the ECN support MAY define a separate
set of parameters from those used for controlling packet drop. The
AQM algorithm SHOULD still auto-tune these ECN-specific parameters.
These parameters SHOULD also be manually configurable.
Network devices SHOULD use an algorithm to drop excessive traffic
(e.g., at some level above the threshold for CE-marking), even when
the packets are marked as originating from an ECN-capable transport.
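The behavior described in this section can be summarized by the
sketch below, which marks ECN-capable packets once the AQM's
congestion measure crosses a marking threshold and drops any packet,
ECN-capable or not, above a higher overload threshold. The
thresholds are illustrative assumptions, not recommended values.

   NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11  # RFC 3168

   def aqm_action(ecn_field, congestion_level,
                  mark_level=0.5, drop_level=0.9):
       """Return 'forward', 'mark', or 'drop' for one packet.

       congestion_level is the AQM's congestion measure in [0, 1];
       mark_level and drop_level are illustrative thresholds.
       """
       if congestion_level >= drop_level:
           return "drop"      # overload: drop even ECN-capable traffic
       if congestion_level >= mark_level:
           if ecn_field in (ECT_0, ECT_1):
               return "mark"  # set CE instead of dropping
           return "drop"      # non-ECN-capable traffic is dropped
       return "forward"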
4.3. AQM Algorithm Deployment SHOULD NOT Require Operational Tuning
A number of AQM algorithms have been proposed. Many require some
form of tuning or setting of parameters for initial network
conditions. This can make these algorithms difficult to use in
operational networks.
AQM algorithms need to consider both "initial conditions" and
"operational conditions". The former includes values that exist
before any experience is gathered about the use of the algorithm,
such as the configured interface speed, support for full-duplex
communication, the interface MTU, and other properties of the link.
The latter includes information observed from monitoring the size of
the queue, the queueing delay experienced, the rate of packet
discard, etc.
This document therefore specifies that AQM algorithms that are
proposed for deployment in the Internet have the following
properties:
o AQM algorithm deployment SHOULD NOT require tuning. An algorithm
MUST provide a default behavior that auto-tunes to a reasonable
performance for typical network operational conditions. This is
expected to ease deployment and operation. Initial conditions,
such as the interface rate and MTU size or other values derived
from these, MAY be required by an AQM algorithm.
o AQM algorithm deployment MAY support further manual tuning that
could improve performance in a specific deployed network.
Algorithms that lack such variables are acceptable, but, if such
variables exist, they SHOULD be externalized (made visible to the
operator). The specification should identify any cases in which
auto-tuning is unlikely to achieve acceptable performance and give
guidance on the parametric adjustments necessary. For example,
the expected response of an algorithm may need to be configured to
accommodate the largest expected Path RTT, since this value cannot
be known at initialization. This guidance is expected to enable
the algorithm to be deployed in networks that have specific
characteristics (paths with variable or larger delay, networks
where capacity is impacted by interactions with lower-layer
mechanisms, etc).
o AQM algorithm deployment MAY provide logging and alarm signals to
assist in identifying if an algorithm using manual or auto-tuning
is functioning as expected. (For example, this could be based on
an internal consistency check between input, output, and mark/drop
rates over time.) This is expected to encourage deployment by
default and allow operators to identify potential interactions
with other network functions.
Hence, self-tuning algorithms are to be preferred. Algorithms
recommended for general Internet deployment by the IETF need to be
designed so that they do not require operational (especially manual)
configuration or tuning.
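As an illustration of auto-tuning from "initial conditions" alone,
the sketch below derives byte thresholds from the interface rate and
MTU under an assumed worst-case RTT. The constants are hypothetical
examples, not recommended values.

   def auto_tune(link_rate_bps, mtu_bytes, assumed_max_rtt=0.100):
       """Derive AQM parameters from properties known at start-up.

       Only the interface rate and MTU are assumed to be available
       before operation; the constants and the 100 ms worst-case RTT
       are illustrative, not recommended values.
       """
       bytes_per_rtt = link_rate_bps / 8 * assumed_max_rtt
       return {
           # Begin signaling congestion when the standing backlog
           # exceeds roughly half an RTT of data.
           "mark_threshold_bytes": max(2 * mtu_bytes, bytes_per_rtt / 2),
           # Never buffer more than about one assumed RTT of data.
           "limit_bytes": max(4 * mtu_bytes, bytes_per_rtt),
       }

   print(auto_tune(100_000_000, 1500))  # example: 100 Mbit/s interface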
4.4. AQM Algorithms SHOULD Respond to Measured Congestion, Not
Application Profiles
Not all applications transmit packets of the same size. Although
applications may be characterized by particular profiles of packet
size, this should not be used as the basis for AQM (see Section 4.5).
Other methods exist, e.g., Differentiated Services queueing, Pre-
Congestion Notification (PCN) [RFC5559], that can be used to
differentiate and police classes of application. Network devices may
combine AQM with these traffic classification mechanisms and perform
AQM only on specific queues within a network device.
An AQM algorithm should not deliberately try to prejudice the size of
packet that performs best (i.e., preferentially drop/mark based only
on packet size). Procedures for selecting packets to drop/mark
SHOULD observe the actual or projected time that a packet is in a
queue (bytes at a rate being an analog to time). When an AQM
algorithm decides whether to drop (or mark) a packet, it is
RECOMMENDED that the size of the particular packet not be taken into
account [RFC7141].
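One way to observe the projected time a packet spends in a queue,
independently of that packet's own size, is to divide the byte
backlog by the drain rate, as in the sketch below (an illustration of
the principle with an assumed 5 ms target, not a specific published
algorithm).

   def projected_sojourn_time(backlog_bytes, drain_rate_bps):
       """Projected queueing delay for an arriving packet.

       The input is the backlog already queued and the rate at which
       the queue drains, so the result does not depend on the size of
       the individual packet being considered for drop or mark,
       consistent with [RFC7141].
       """
       return backlog_bytes * 8 / drain_rate_bps

   def over_delay_target(backlog_bytes, drain_rate_bps,
                         target_delay=0.005):
       """True when the projected delay exceeds an assumed 5 ms target."""
       return projected_sojourn_time(backlog_bytes,
                                     drain_rate_bps) > target_delay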
Applications (or transports) generally know the packet size that they
are using and can hence make their judgments about whether to use
small or large packets based on the data they wish to send and the
expected impact on the delay, throughput, or other performance
parameter. When a transport or application responds to a dropped or
marked packet, the size of the rate reduction should be proportionate
to the size of the packet that was sent [RFC7141].
An AQM-enabled system MAY instantiate different instances of an AQM
algorithm to be applied within the same traffic class. Traffic
classes may be differentiated based on an Access Control List (ACL),
the packet DSCP [RFC5559], the setting of the ECN field (i.e., any
of ECT(0), ECT(1), or CE) [RFC3168] [RFC4774], a multi-field (MF)
classifier that combines the values of a set of protocol fields
(e.g., IP address, transport, ports), or an equivalent codepoint at a
lower layer. This recommendation goes beyond what is defined in RFC
3168 by allowing that an implementation MAY use more than one
instance of an AQM algorithm to handle both ECN-capable and non-ECN-
capable packets.
4.5. AQM Algorithms SHOULD NOT Be Dependent on Specific Transport
Protocol Behaviors
In deploying AQM, network devices need to support a range of Internet
traffic and SHOULD NOT make implicit assumptions about the
characteristics desired by the set of transports/applications the
network supports. That is, AQM methods should be opaque to the
choice of transport and application.
AQM algorithms are often evaluated by considering TCP [RFC793] with a
limited number of applications. Although TCP is the predominant
transport in the Internet today, this no longer represents a
sufficient selection of traffic for verification. There is
significant use of UDP [RFC768] in voice and video services, and some
applications find utility in SCTP [RFC4960] and DCCP [RFC4340].
Hence, AQM algorithms should demonstrate operation with transports
other than TCP and need to consider a variety of applications. When
selecting AQM algorithms, the use of tunnel encapsulations that may
carry traffic aggregates needs to be considered.
AQM algorithms SHOULD NOT target or derive implicit assumptions about
the characteristics desired by specific transports/applications.
Transports and applications need to respond to the congestion signals
provided by AQM (i.e., dropping or ECN-marking) in a timely manner
(within a few RTTs at the latest).
4.6. Interactions with Congestion Control Algorithms
Applications and transports need to react to received implicit or
explicit signals that indicate the presence of congestion. This
section identifies issues that can impact the design of transport
protocols when using paths that use AQM.
Transport protocols and applications need timely signals of
congestion. The time taken to detect and respond to congestion is
increased when network devices queue packets in buffers. It can be
difficult to detect tail losses at a higher layer, and this may
sometimes require transport timers or probe packets to detect and
respond to such loss. Loss patterns may also impact timely
detection, e.g., the time may be reduced when network devices do not
drop long runs of packets from the same flow.
A common objective of an elastic transport congestion control
protocol is to allow an application to deliver the maximum rate of
data without inducing excessive delays when packets are queued in
buffers within the network. To achieve this, a transport should try
to operate at a rate below the inflection point of the load/delay curve
(the bend of what is sometimes called a "hockey stick" curve)
[Jain94]. When the congestion window allows the load to approach
this bend, the end-to-end delay starts to rise -- a result of
congestion, as packets probabilistically arrive at non-overlapping
times. On the one hand, a transport that operates above this point
can experience congestion loss and could also trigger operator
activities, such as those discussed in [RFC6057]. On the other hand,
a flow may achieve both near-maximum throughput and low latency when
it operates close to this knee point, with minimal contribution to
router congestion. Choice of an appropriate rate/congestion window
can therefore significantly impact the loss and delay experienced by
a flow and will impact other flows that share a common network queue.
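The shape of this load/delay curve can be illustrated numerically.
The sketch below uses a simple M/M/1 delay model purely as an
illustration of the knee: delay stays near the unloaded value until
the offered load approaches capacity and then rises sharply. The
1 ms service time is an arbitrary example.

   def mean_delay(utilization, service_time=0.001):
       """Mean delay of an M/M/1 queue; illustrative only."""
       assert 0 <= utilization < 1
       return service_time / (1 - utilization)

   for load in (0.5, 0.8, 0.9, 0.95, 0.99):
       print("load %.2f: mean delay %.1f ms"
             % (load, mean_delay(load) * 1000))
   # 0.50 -> 2.0 ms, 0.90 -> 10.0 ms, 0.99 -> 100.0 ms: delay rises
   # steeply as the load approaches capacity (the "knee").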
Some applications may send data at a lower rate or keep fewer segments
outstanding at any given time. Examples include multimedia codecs
that stream at some natural rate (or set of rates) or an application
that is naturally interactive (e.g., some web applications,
interactive server-based gaming, transaction-based protocols). Such
applications may have different objectives. They may not wish to
maximize throughput, but may desire a lower loss rate or bounded
delay.
The correct operation of an AQM-enabled network device MUST NOT rely
upon specific transport responses to congestion signals.
4.7. The Need for Further Research
The second recommendation of [RFC2309] called for further research
into the interaction between network queues and host applications,
and the means of signaling between them. This research has occurred,
and we as a community have learned a lot. However, we are not done.
We have learned that the problems of congestion, latency, and buffer-
sizing have not gone away and are becoming more important to many
users. A number of self-tuning AQM algorithms have been found that
offer significant advantages for deployed networks. There is also
renewed interest in deploying AQM and the potential of ECN.
Traffic patterns can depend on the network deployment scenario, and
Internet research therefore needs to consider the implications of a
diverse range of application interactions. This includes ensuring
that combinations of mechanisms, as well as combinations of traffic
patterns, do not interact and result in either significantly reduced
flow throughput or significantly increased latency.
At the time of writing (in 2015), an obvious example of further
research is the need to consider the many-to-one communication
patterns found in data centers, known as incast [Ren12], (e.g.,
produced by Map/Reduce applications). Such analysis needs to study
not only each application traffic type but also combinations of types
of traffic.
Research also needs to consider the need to extend our taxonomy of
transport sessions to include not only "mice" and "elephants", but
"lemmings". Here, "lemmings" are flash crowds of "mice" that the
network inadvertently tries to signal to as if they were "elephant"
flows, resulting in head-of-line blocking in a data center deployment
scenario.
Examples of other required research include:
o new AQM and scheduling algorithms
o appropriate use of delay-based methods and the implications of AQM
o suitable algorithms for marking ECN-capable packets that do not
require operational configuration or tuning for common use
o experience in the deployment of ECN alongside AQM
o tools for enabling AQM (and ECN) deployment and measuring the
performance
o methods for mitigating the impact of non-conformant and malicious
flows
o implications on applications of using new network and transport
methods
Hence, this document reiterates the call of RFC 2309: we need
continuing research as applications develop.
5. Security Considerations
While security is a very important issue, it is largely orthogonal to
the performance issues discussed in this memo.
This recommendation requires algorithms to be independent of specific
transport or application behaviors. Therefore, a network device does
not require visibility or access to upper-layer protocol information
to implement an AQM algorithm. This ability to operate in an
application-agnostic fashion is an example of a privacy-enhancing
feature.
Many deployed network devices use queueing methods that allow
unresponsive traffic to capture network capacity, denying access to
other traffic flows. This could potentially be used as a denial-of-
service attack. This threat could be reduced in network devices that
deploy AQM or some form of scheduling. We note, however, that a
denial-of-service attack that results in unresponsive traffic flows
may be indistinguishable from other traffic flows (e.g., tunnels
carrying aggregates of short flows, high-rate isochronous
applications). New methods therefore may remain vulnerable, and this
document recommends that ongoing research consider ways to mitigate
such attacks.
6. Privacy Considerations
This document, by itself, presents no new privacy issues.
7. References
7.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119,
DOI 10.17487/RFC2119, March 1997,
<http://www.rfc-editor.org/info/rfc2119>.
[RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
of Explicit Congestion Notification (ECN) to IP",
RFC 3168, DOI 10.17487/RFC3168, September 2001,
<http://www.rfc-editor.org/info/rfc3168>.
[RFC4301] Kent, S. and K. Seo, "Security Architecture for the
Internet Protocol", RFC 4301, DOI 10.17487/RFC4301,
December 2005, <http://www.rfc-editor.org/info/rfc4301>.
7.2. Informative References
[Dem90] Demers, A., Keshav, S., and S. Shenker, "Analysis and
Simulation of a Fair Queueing Algorithm, Internetworking:
Research and Experience", SIGCOMM Symposium proceedings on
Communications architectures and protocols, 1990.
[ECN-Benefit]
Fairhurst, G. and M. Welzl, "The Benefits of using
Explicit Congestion Notification (ECN)", Work in Progress,
draft-ietf-aqm-ecn-benefits-05, June 2015.
[Flo92] Floyd, S. and V. Jacobson, "On Traffic Phase Effects in
Packet-Switched Gateways", 1992,
<http://www.icir.org/floyd/papers/phase.pdf>.
[Flo94] Floyd, S. and V. Jacobson, "The Synchronization of
Periodic Routing Messages", 1994,
<http://ee.lbl.gov/papers/sync_94.pdf>.
[Floyd91] Floyd, S., "Connections with Multiple Congested Gateways
in Packet-Switched Networks Part 1: One-way Traffic.",
Computer Communications Review , October 1991.
[Floyd95] Floyd, S. and V. Jacobson, "Link-sharing and Resource
Management Models for Packet Networks", IEEE/ACM
Transactions on Networking, August 1995.
[Jacobson88]
Jacobson, V., "Congestion Avoidance and Control", SIGCOMM
Symposium proceedings on Communications architectures and
protocols, August 1988.
[Jain94] Jain, R., Ramakrishnan, KK., and C. Dah-Ming, "Congestion
avoidance scheme for computer networks", US Patent Office
5377327, December 1994.
[Lakshman96]
Lakshman, TV., Neidhardt, A., and T. Ott, "The Drop From
Front Strategy in TCP Over ATM and Its Interworking with
Other Control Features", IEEE Infocomm, 1996.
[Leland94] Leland, W., Taqqu, M., Willinger, W., and D. Wilson, "On
the Self-Similar Nature of Ethernet Traffic (Extended
Version)", IEEE/ACM Transactions on Networking, February
1994.
[RFC6057] Bastian, C., Klieber, T., Livingood, J., Mills, J., and R.
Woundy, "Comcast's Protocol-Agnostic Congestion Management
System", RFC 6057, DOI 10.17487/RFC6057, December 2010,
<http://www.rfc-editor.org/info/rfc6057>.
[RFC6789] Briscoe, B., Ed., Woundy, R., Ed., and A. Cooper, Ed.,
"Congestion Exposure (ConEx) Concepts and Use Cases",
RFC 6789, DOI 10.17487/RFC6789, December 2012,
<http://www.rfc-editor.org/info/rfc6789>.
[RFC6817] Shalunov, S., Hazel, G., Iyengar, J., and M. Kuehlewind,
"Low Extra Delay Background Transport (LEDBAT)", RFC 6817,
DOI 10.17487/RFC6817, December 2012,
<http://www.rfc-editor.org/info/rfc6817>.
[RFC7414] Duke, M., Braden, R., Eddy, W., Blanton, E., and A.
Zimmermann, "A Roadmap for Transmission Control Protocol
(TCP) Specification Documents", RFC 7414,
DOI 10.17487/RFC7414, February 2015,
<http://www.rfc-editor.org/info/rfc7414>.
[Shr96] Shreedhar, M. and G. Varghese, "Efficient Fair Queueing
Using Deficit Round Robin", IEEE/ACM Transactions on
Networking, Vol. 4, No. 3, July 1996.
[Sto97] Stoica, I. and H. Zhang, "A Hierarchical Fair Service
Curve algorithm for Link sharing, real-time and priority
services", ACM SIGCOMM, 1997.
[Sut99] Suter, B., "Buffer Management Schemes for Supporting TCP
in Gigabit Routers with Per-flow Queueing", IEEE Journal
on Selected Areas in Communications, Vol. 17, Issue 6, pp.
1159-1169, June 1999.
[Willinger95]
Willinger, W., Taqqu, M., Sherman, R., Wilson, D., and V.
Jacobson, "Self-Similarity Through High-Variability:
Statistical Analysis of Ethernet LAN Traffic at the Source
Level", SIGCOMM Symposium proceedings on Communications
architectures and protocols, August 1995.
[Zha90] Zhang, L. and D. Clark, "Oscillating Behavior of Network
Traffic: A Case Study Simulation", 1990,
<http://groups.csail.mit.edu/ana/Publications/Zhang-DDC-
Oscillating-Behavior-of-Network-Traffic-1990.pdf>.
Acknowledgements
The original draft of this document describing best current practice
was based on [RFC2309], an Informational RFC. It was written by the
End-to-End Research Group, which is to say Bob Braden, Dave Clark,
Jon Crowcroft, Bruce Davie, Steve Deering, Deborah Estrin, Sally
Floyd, Van Jacobson, Greg Minshall, Craig Partridge, Larry Peterson,
KK Ramakrishnan, Scott Shenker, John Wroclawski, and Lixia Zhang.
Although there are important differences, many of the key arguments
in the present document remain unchanged from those in RFC 2309.
The need for an updated document was agreed to in the TSV area
meeting at IETF 86. This document was reviewed on the aqm@ietf.org
list. Comments were received from Colin Perkins, Richard
Scheffenegger, Dave Taht, John Leslie, David Collier-Brown, and many
others.
Gorry Fairhurst was in part supported by the European Community under
its Seventh Framework Programme through the Reducing Internet
Transport Latency (RITE) project (ICT-317700).
Authors' Addresses
Fred Baker (editor)
Cisco Systems
Santa Barbara, California 93117
United States
Email: fred@cisco.com
Godred Fairhurst (editor)
University of Aberdeen
School of Engineering
Fraser Noble Building
Aberdeen, Scotland AB24 3UE
United Kingdom
Email: gorry@erg.abdn.ac.uk
URI: http://www.erg.abdn.ac.uk