Internet Engineering Task Force (IETF)                         L. Eggert
Request for Comments: 8085                                        NetApp
BCP: 145                                                    G. Fairhurst
Obsoletes: 5405                                   University of Aberdeen
Category: Best Current Practice                              G. Shepherd
ISSN: 2070-1721                                            Cisco Systems
                                                              March 2017

                          UDP Usage Guidelines

Abstract

   The User Datagram Protocol (UDP) provides a minimal message-passing
   transport that has no inherent congestion control mechanisms.  This
   document provides guidelines on the use of UDP for the designers of
   applications, tunnels, and other protocols that use UDP.  Congestion
   control guidelines are a primary focus, but the document also
   provides guidance on other topics, including message sizes,
   reliability, checksums, middlebox traversal, the use of Explicit
   Congestion Notification (ECN), Differentiated Services Code Points
   (DSCPs), and ports.

   Because congestion control is critical to the stable operation of the
   Internet, applications and other protocols that choose to use UDP as
   an Internet transport must employ mechanisms to prevent congestion
   collapse and to establish some degree of fairness with concurrent
   traffic.  They may also need to implement additional mechanisms,
   depending on how they use UDP.

   Some guidance is also applicable to the design of other protocols
   (e.g., protocols layered directly on IP or via IP-based tunnels),
   especially when these protocols do not themselves provide congestion
   control.

   This document obsoletes RFC 5405 and adds guidelines for multicast
   UDP usage.

Status of This Memo

   This memo documents an Internet Best Current Practice.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Further information on
   BCPs is available in Section 2 of RFC 7841.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   http://www.rfc-editor.org/info/rfc8085.
Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1. Introduction ....................................................3
   2. Terminology .....................................................5
   3. UDP Usage Guidelines ............................................5
      3.1. Congestion Control Guidelines ..............................6
      3.2. Message Size Guidelines ...................................19
      3.3. Reliability Guidelines ....................................21
      3.4. Checksum Guidelines .......................................22
      3.5. Middlebox Traversal Guidelines ............................25
      3.6. Limited Applicability and Controlled Environments .........27
   4. Multicast UDP Usage Guidelines .................................28
      4.1. Multicast Congestion Control Guidelines ...................30
      4.2. Message Size Guidelines for Multicast .....................32
   5. Programming Guidelines .........................................32
      5.1. Using UDP Ports ...........................................34
      5.2. ICMP Guidelines ...........................................37
   6. Security Considerations ........................................38
   7. Summary ........................................................40
   8. References .....................................................42
      8.1. Normative References ......................................42
      8.2. Informative References ....................................43
   Appendix A. Case Study of the Use of IPv6 UDP Zero-Checksum Mode ...53
   Acknowledgments ...................................................55
   Authors' Addresses ................................................55

1.  Introduction

   The User Datagram Protocol (UDP) [RFC768] provides a minimal,
   unreliable, best-effort, message-passing transport to applications
   and other protocols (such as tunnels) that wish to operate over IP.
   Both are simply called "applications" in the remainder of this
   document.

   Compared to other transport protocols, UDP and its UDP-Lite variant
   [RFC3828] are unique in that they do not establish end-to-end
   connections between communicating end systems.  UDP communication
   consequently does not incur connection establishment and teardown
   overheads, and there is minimal associated end-system state.  Because
   of these characteristics, UDP can offer a very efficient
   communication transport to some applications.

   A second unique characteristic of UDP is that it provides no inherent
   congestion control mechanisms.  On many platforms, applications can
   send UDP datagrams at the line rate of the platform's link interface,
   which is often much greater than the available end-to-end path
   capacity, and doing so contributes to congestion along the path.
   [RFC2914] describes the best current practice for congestion control
   in the Internet.  It identifies two major reasons why congestion
   control mechanisms are critical for the stable operation of the
   Internet:

   1.  The prevention of congestion collapse, i.e., a state where an
       increase in network load results in a decrease in useful work
       done by the network.

   2.  The establishment of a degree of fairness, i.e., allowing
       multiple flows to share the capacity of a path reasonably
       equitably.

   Because UDP itself provides no congestion control mechanisms, it is
   up to the applications that use UDP for Internet communication to
   employ suitable mechanisms to prevent congestion collapse and
   establish a degree of fairness.  [RFC2309] discusses the dangers of
   congestion-unresponsive flows and states that "all UDP-based
   streaming applications should incorporate effective congestion
   avoidance mechanisms."  [RFC7567] reaffirms this statement.  This is
   an important requirement, even for applications that do not use UDP
   for streaming.  In addition, congestion-controlled transmission is of
   benefit to an application itself, because it can reduce self-induced
   packet loss, minimize retransmissions, and hence reduce delays.
   Congestion control is essential even at relatively slow transmission
   rates.  For example, an application that generates five 1500-byte UDP
   datagrams in one second can already exceed the capacity of a 56 Kb/s
   path.  For applications that can operate at higher, potentially
   unbounded data rates, congestion control becomes vital to prevent
   congestion collapse and establish some degree of fairness.  Section 3
   describes a number of simple guidelines for the designers of such
   applications.

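   The arithmetic behind the 56 kb/s example is worth spelling out.  The
   short sketch below (purely illustrative, not part of this document's
   guidance) computes the offered load:

```python
# The example above: five 1500-byte UDP datagrams sent in one second,
# compared with the capacity of a 56 kb/s path.
datagrams_per_second = 5
datagram_size_bytes = 1500

offered_load_bps = datagrams_per_second * datagram_size_bytes * 8  # 60,000 b/s
path_capacity_bps = 56 * 1000                                      # 56,000 b/s

exceeds_path = offered_load_bps > path_capacity_bps  # True
```
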
   A UDP datagram is carried in a single IP packet and is hence limited
   to a maximum payload of 65,507 bytes for IPv4 and 65,527 bytes for
   IPv6.  The transmission of large IP packets usually requires IP
   fragmentation.  Fragmentation decreases communication reliability and
   efficiency and should be avoided.  IPv6 allows the option of
   transmitting large packets ("jumbograms") without fragmentation when
   all link layers along the path support this [RFC2675].  Some of the
   guidelines in Section 3 describe how applications should determine
   appropriate message sizes.  Other sections of this document provide
   guidance on reliability, checksums, middlebox traversal, and use of
   ECN, DSCPs, and ports.

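   The payload limits quoted above follow directly from the 16-bit
   length fields involved.  A sketch of the arithmetic (assuming a
   minimal IPv4 header without options, and no IPv6 extension headers):

```python
# Maximum UDP payload that fits in a single IP packet, derived from
# the 16-bit length fields (minimal IPv4 header, no IPv6 extension
# headers).
MAX_16BIT = 65535     # largest value of a 16-bit length field
IPV4_HEADER = 20      # minimum IPv4 header, in bytes
UDP_HEADER = 8        # UDP header, in bytes

max_payload_v4 = MAX_16BIT - IPV4_HEADER - UDP_HEADER  # 65,507 bytes
# The IPv6 payload length field covers the UDP header plus data:
max_payload_v6 = MAX_16BIT - UDP_HEADER                # 65,527 bytes
```
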
   This document provides guidelines and recommendations.  Although most
   UDP applications are expected to follow these guidelines, there do
   exist valid reasons why a specific application may decide not to
   follow a given guideline.  In such cases, it is RECOMMENDED that

   application designers cite the respective section(s) of this document
   in the technical specification of their application or protocol and
   explain their rationale for their design choice.

   [RFC5405] was scoped to provide guidelines for unicast applications
   only, whereas this document also provides guidelines for UDP flows
   that use IP anycast, multicast, broadcast, and applications that use
   UDP tunnels to support IP flows.

   Finally, although this document specifically refers to usage of UDP,
   the spirit of some of its guidelines also applies to other message-
   passing applications and protocols (specifically on the topics of
   congestion control, message sizes, and reliability).  Examples
   include signaling, tunnel or control applications that choose to run
   directly over IP by registering their own IP protocol number with
   IANA.  This document is expected to provide useful background reading
   to the designers of such applications and protocols.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   [RFC2119].

3.  UDP Usage Guidelines

   Internet paths can have widely varying characteristics, including
   transmission delays, available bandwidths, congestion levels,
   reordering probabilities, supported message sizes, or loss rates.
   Furthermore, the same Internet path can have very different
   conditions over time.  Consequently, applications that may be used on
   the Internet MUST NOT make assumptions about specific path
   characteristics.  They MUST instead use mechanisms that let them
   operate safely under very different path conditions.  Typically, this
   requires conservatively probing the current conditions of the
   Internet path they communicate over to establish a transmission
   behavior that they can sustain and that is reasonably fair to other
   traffic sharing the path.

   These mechanisms are difficult to implement correctly.  For most
   applications, the use of one of the existing IETF transport protocols
   is the simplest method of acquiring the required mechanisms.  Doing
   so also avoids issues that protocols using a new IP protocol number
   face when being deployed over the Internet, where middleboxes that
   only support TCP and UDP are sometimes present.  Consequently, the
   RECOMMENDED alternative to the UDP usage described in the remainder
   of this section is the use of an IETF transport protocol such as TCP
   [RFC793], Stream Control Transmission Protocol (SCTP) [RFC4960], and
   SCTP Partial Reliability Extension (SCTP-PR) [RFC3758], or Datagram
   Congestion Control Protocol (DCCP) [RFC4340] with its different
   congestion control types [RFC4341][RFC4342][RFC5622], or transport
   protocols specified by the IETF in the future.  (UDP-encapsulated
   SCTP [RFC6951] and DCCP [RFC6773] can offer support for traversing
   firewalls and other middleboxes where the native protocols are not
   supported.)

   If used correctly, these more fully featured transport protocols are
   not as "heavyweight" as often claimed.  For example, the TCP
   algorithms have been continuously improved over decades, and they
   have reached a level of efficiency and correctness that custom
   application-layer mechanisms will struggle to easily duplicate.  In
   addition, many TCP implementations allow connections to be tuned by
   an application to its purposes.  For example, TCP's "Nagle" algorithm
   [RFC1122] can be disabled, improving communication latency at the
   expense of more frequent -- but still congestion controlled -- packet
   transmissions.  Another example is the TCP SYN cookie mechanism
   [RFC4987], which is available on many platforms.  TCP with SYN
   cookies does not require a server to maintain per-connection state
   until the connection is established.  TCP also requires the end that
   closes a connection to maintain the TIME-WAIT state that prevents
   delayed segments from one connection instance from interfering with a
   later one.  Applications that are aware of and designed for this
   behavior can shift maintenance of the TIME-WAIT state to conserve
   resources by controlling which end closes a TCP connection [FABER].
   Finally, TCP's built-in capacity-probing and awareness of the maximum
   transmission unit supported by the path (PMTU) results in efficient
   data transmission that quickly compensates for the initial connection
   setup delay, in the case of transfers that exchange more than a few
   segments of data.

3.1.  Congestion Control Guidelines

   If an application or protocol chooses not to use a congestion-
   controlled transport protocol, it SHOULD control the rate at which it
   sends UDP datagrams to a destination host, in order to fulfill the
   requirements of [RFC2914].  It is important to stress that an
   application SHOULD perform congestion control over all UDP traffic it
   sends to a destination, independently from how it generates this
   traffic.  For example, an application that forks multiple worker
   processes or otherwise uses multiple sockets to generate UDP
   datagrams SHOULD perform congestion control over the aggregate
   traffic.

   Several approaches to perform congestion control are discussed in the
   remainder of this section.  This section describes generic topics
   with an intended emphasis on unicast and anycast [RFC1546] usage.
   Not all approaches discussed below are appropriate for all UDP-
   transmitting applications.  Section 3.1.2 discusses congestion
   control options for applications that perform bulk transfers over
   UDP.  Such applications can employ schemes that sample the path over
   several subsequent round-trips during which data is exchanged to
   determine a sending rate that the path at its current load can
   support.  Other applications only exchange a few UDP datagrams with a
   destination.  Section 3.1.3 discusses congestion control options for
   such "low data-volume" applications.  Because they typically do not
   transmit enough data to iteratively sample the path to determine a
   safe sending rate, they need to employ different kinds of congestion
   control mechanisms.  Section 3.1.11 discusses congestion control
   considerations when UDP is used as a tunneling protocol.  Section 4
   provides additional recommendations for broadcast and multicast
   usage.

   It is important to note that congestion control should not be viewed
   as an add-on to a finished application.  Many of the mechanisms
   discussed in the guidelines below require application support to
   operate correctly.  Application designers need to consider congestion
   control throughout the design of their application, similar to how
   they consider security aspects throughout the design process.

   In the past, the IETF has also investigated integrated congestion
   control mechanisms that act on the traffic aggregate between two
   hosts, i.e., a framework such as the Congestion Manager [RFC3124],
   where active sessions may share current congestion information in a
   way that is independent of the transport protocol.  Such mechanisms
   have currently failed to see deployment, but would otherwise simplify
   the design of congestion control mechanisms for UDP sessions, so that
   they fulfill the requirements in [RFC2914].

3.1.1.  Protocol Timer Guidelines

   Understanding the latency between communicating endpoints is usually
   a crucial part of effective congestion control implementations for
   protocols and applications.  Latency estimation can be used in a
   number of protocol functions, such as calculating a congestion-
   controlled transmission rate, triggering retransmission, and
   detecting packet loss.  Additional protocol functions, for example,
   determining an interval for probing a path, determining an interval
   between keep-alive messages, determining an interval for measuring
   the quality of experience, or determining if a remote endpoint has
   responded to a request to perform an action, typically operate over
   longer timescales than congestion control and therefore are not
   covered in this section.

   The general recommendation in this document is that applications
   SHOULD leverage existing congestion control techniques and the
   latency estimators specified therein (see next subsection).  The
   following guidelines are provided for applications that need to
   design their own latency estimation mechanisms.

   The guidelines are framed in terms of "latency" and not "round-trip
   time" because some situations require characterizing only the
   network-based latency (e.g., TCP-Friendly Rate Control (TFRC)
   [RFC5348]), while other cases necessitate inclusion of the time
   required by the remote endpoint to provide feedback (e.g., developing
   an understanding of when to retransmit a message).

   The latency between endpoints is generally a dynamic property.
   Therefore, estimates SHOULD represent some sort of averaging of
   multiple recent measurement samples to account for variance.
   Leveraging an Exponentially Weighted Moving Average (EWMA) has proven
   useful for this purpose (e.g., in TCP [RFC6298] and TFRC [RFC5348]).

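   As one concrete instance of such averaging, the estimator that TCP
   uses [RFC6298] maintains a smoothed RTT and a variance term via
   EWMAs.  The sketch below transcribes those update rules; the class
   and attribute names are illustrative, not part of any specification:

```python
class RttEstimator:
    """Latency estimator following the TCP model of RFC 6298.

    SRTT   <- (1 - alpha) * SRTT + alpha * R'
    RTTVAR <- (1 - beta) * RTTVAR + beta * |SRTT - R'|
    RTO    <- max(SRTT + 4 * RTTVAR, 1.0)
    """

    ALPHA = 1 / 8  # gain for the smoothed RTT (RFC 6298)
    BETA = 1 / 4   # gain for the RTT variance (RFC 6298)

    def __init__(self):
        self.srtt = None
        self.rttvar = None
        self.rto = 1.0  # conservative initial estimate, in seconds

    def sample(self, r):
        """Feed one unambiguous latency measurement, in seconds."""
        if self.srtt is None:
            # The first sample initializes the state (RFC 6298).
            self.srtt = r
            self.rttvar = r / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + \
                self.BETA * abs(self.srtt - r)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
        # Keep the conservative 1-second floor discussed below.
        self.rto = max(self.srtt + 4 * self.rttvar, 1.0)
```
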
   Independent latency estimates SHOULD be maintained for each
   destination with which an endpoint communicates.

   Latency samples MUST NOT be derived from ambiguous transactions.  The
   canonical example is in a protocol that retransmits data, but
   subsequently cannot determine which copy is being acknowledged.  This
   ambiguity makes correct computation of the latency problematic.  See
   the discussion of Karn's algorithm in [RFC6298].  This requirement
   ensures a sender establishes a sound estimate of the latency without
   relying on misleading measurements.

   When a latency estimate is used to arm a timer that provides loss
   detection -- with or without retransmission -- expiry of the timer
   MUST be interpreted as an indication of congestion in the network,
   causing the sending rate to be adapted to a safe conservative rate
   (e.g., TCP collapses the congestion window to one segment [RFC5681]).

   Some applications require an initial latency estimate before the
   latency between endpoints can be empirically sampled.  For instance,
   when arming a retransmission timer, an initial value is needed to
   protect the messages sent before the endpoints sample the latency.
   This initial latency estimate SHOULD generally be as conservative
   (large) as possible for the given application.  For instance, in the
   absence of any knowledge about the latency of a path, TCP requires
   the initial Retransmission Timeout (RTO) to be set to no less than 1
   second [RFC6298].  UDP applications SHOULD similarly use an initial
   latency estimate of 1 second.  Values shorter than 1 second can be
   problematic (see the data analysis in the appendix of [RFC6298]).

3.1.2.  Bulk-Transfer Applications

   Applications that perform bulk transmission of data to a peer over
   UDP, i.e., applications that exchange more than a few UDP datagrams
   per RTT, SHOULD implement TFRC [RFC5348], window-based TCP-like
   congestion control, or otherwise ensure that the application complies
   with the congestion control principles.

   TFRC has been designed to provide both congestion control and
   fairness in a way that is compatible with the IETF's other transport
   protocols.  If an application implements TFRC, it need not follow the
   remaining guidelines in Section 3.1.2, because TFRC already addresses
   them, but it SHOULD still follow the remaining guidelines in the
   subsequent subsections of Section 3.

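   For reference, TFRC's target rate is computed from the TCP throughput
   equation of [RFC5348].  The function below is an illustrative
   transcription, where s is the segment size in bytes, R the RTT in
   seconds, p the loss event rate, and b the number of packets
   acknowledged per ACK; t_RTO defaults to the 4*R simplification
   suggested by [RFC5348]:

```python
from math import sqrt

def tfrc_rate(s, R, p, b=1.0, t_rto=None):
    """TCP throughput equation from RFC 5348 (target rate, bytes/second).

    s: segment size in bytes; R: round-trip time in seconds;
    p: loss event rate (0 < p <= 1); b: packets acknowledged per ACK.
    t_rto defaults to 4 * R, as RFC 5348 suggests.
    """
    if t_rto is None:
        t_rto = 4 * R
    denom = (R * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom
```

   As expected of a congestion-controlled rate, the result falls as the
   loss event rate or the RTT grows.
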
   Bulk-transfer applications that choose not to implement TFRC or TCP-
   like windowing SHOULD implement a congestion control scheme that
   results in bandwidth (capacity) use that competes fairly with TCP
   within an order of magnitude.

   Section 2 of [RFC3551] suggests that applications SHOULD monitor the
   packet-loss rate to ensure that it is within acceptable parameters.
   Packet loss is considered acceptable if a TCP flow across the same
   network path under the same network conditions would achieve an
   average throughput, measured on a reasonable timescale, that is not
   less than that of the UDP flow.  The comparison to TCP cannot be
   specified exactly, but is intended as an "order-of-magnitude"
   comparison in timescale and throughput.  The recommendations for
   managing timers specified in Section 3.1.1 also apply.

   Finally, some bulk-transfer applications may choose not to implement
   any congestion control mechanism and instead rely on transmitting
   across reserved path capacity (see Section 3.1.9).  This might be an
   acceptable choice for a subset of restricted networking environments,
   but is by no means a safe practice for operation over the wider
   Internet.  When the UDP traffic of such applications leaks out into
   unprovisioned Internet paths, it can significantly degrade the
   performance of other traffic sharing the path and even result in
   congestion collapse.  Applications that support an uncontrolled or
   unadaptive transmission behavior SHOULD NOT do so by default and
   SHOULD instead require users to explicitly enable this mode of
   operation, and they SHOULD verify that sufficient path capacity has
   been reserved for them.

3.1.3.  Low Data-Volume Applications

   When applications that at any time exchange only a few UDP datagrams
   with a destination implement TFRC or one of the other congestion
   control schemes in Section 3.1.2, the network sees little benefit,
   because those mechanisms perform congestion control in a way that is
   only effective for longer transmissions.

   Applications that at any time exchange only a few UDP datagrams with
   a destination SHOULD still control their transmission behavior by not
   sending on average more than one UDP datagram per RTT to a
   destination.  Similar to the recommendation in [RFC1536], an
   application SHOULD maintain an estimate of the RTT for any
   destination with which it communicates using the methods specified in
   Section 3.1.1.

   Some applications cannot maintain a reliable RTT estimate for a
   destination.  These applications do not need to or are unable to use
   protocol timers to measure the RTT (Section 3.1.1).  Two cases can be
   identified:

   1.  The first case is that of applications that exchange too few UDP
       datagrams with a peer to establish a statistically accurate RTT
       estimate but that can monitor the reliability of transmission
       (Section 3.3).  Such applications MAY use a predetermined
       transmission interval that is exponentially backed off when
       packets are deemed lost.  TCP specifies an initial value of 1
       second [RFC6298], which is also RECOMMENDED as an initial value
       for UDP applications.  Some low data-volume applications, e.g.,
       SIP [RFC3261] and General Internet Signaling Transport (GIST)
       [RFC5971], use an interval of 500 ms, and shorter values are
       likely problematic in many cases.  As in the previous case, note
       that the initial timeout is not the maximum possible timeout;
       see Section 3.1.1.

   2.  The second case is that of applications that cannot maintain an
       RTT estimate for a destination, because the destination does not
       send return traffic.  Such applications SHOULD NOT send more
       than one UDP
       datagram every 3 seconds and SHOULD use an even less aggressive
       rate when possible.  Shorter values are likely problematic in
       many cases.  Note that the sending rate in this case must be more
       conservative than in the previous cases, because the lack of
       return traffic prevents the detection of packet loss, i.e.,
       congestion, and the application therefore cannot perform
       exponential back off to reduce load.

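   The two cases above can be sketched as a simple interval scheduler.
   The class below is illustrative only: it starts from the 1-second
   initial value, doubles the interval when a packet is deemed lost, and
   enforces the 3-second floor when no return traffic is available:

```python
class ProbeScheduler:
    """Transmission interval for a low data-volume UDP sender.

    Case 1: with return traffic, start at the 1-second initial value
    (RFC 6298) and back off exponentially on loss; reset on a reply.
    Case 2: with no return traffic, never send faster than one
    datagram every 3 seconds, since loss cannot be detected.
    """

    INITIAL = 1.0          # seconds (RFC 6298 initial value)
    NO_FEEDBACK_MIN = 3.0  # seconds, floor when loss is undetectable

    def __init__(self, has_return_traffic):
        self.has_return_traffic = has_return_traffic
        self.interval = (self.INITIAL if has_return_traffic
                         else self.NO_FEEDBACK_MIN)

    def on_loss(self):
        if self.has_return_traffic:
            self.interval *= 2  # exponential back-off

    def on_reply(self):
        if self.has_return_traffic:
            self.interval = self.INITIAL
```
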
3.1.4.  Applications Supporting Bidirectional Communications

   Applications that communicate bidirectionally SHOULD employ
   congestion control for both directions of the communication.  For
   example, for a client-server, request-response-style application,
   clients SHOULD congestion-control their request transmission to a
   server, and the server SHOULD congestion-control its responses to the
   clients.  Congestion in the forward and reverse directions is
   uncorrelated, and an application SHOULD either independently detect
   and respond to congestion along both directions or limit new and
   retransmitted requests based on acknowledged responses across the
   entire round-trip path.

3.1.5.  Implications of RTT and Loss Measurements on Congestion Control

   Transports such as TCP, SCTP, and DCCP provide timely detection of
   congestion that results in an immediate reduction of their maximum
   sending rate when congestion is experienced.  This reaction is
   typically completed 1-2 RTTs after loss/congestion is encountered.
   Applications using UDP SHOULD implement a congestion control scheme
   that provides a prompt reaction to signals indicating congestion
   (e.g., by reducing the rate within the next RTT following a
   congestion signal).

   The operation of a UDP congestion control algorithm can be very
   different from the way TCP operates.  This includes congestion
   controls that respond on timescales that fit applications that cannot
   usefully work within the "change rate every RTT" model of TCP.
   Applications that experience a low or varying RTT are particularly
   vulnerable to sampling errors (e.g., due to measurement noise or
   timer accuracy).  This suggests the need to average loss/congestion
   and RTT measurements over a longer interval; however, this also can
   contribute additional delay in detecting congestion.  Some
   applications may not react by reducing their sending rate immediately
   for various reasons, including the following: RTT and loss
   measurements are only made periodically (e.g., using RTCP),
   additional time is required to filter information, or the application
   is only able to change its sending rate at predetermined intervals
   (e.g., some video codecs).

   When designing a congestion control algorithm, the designer therefore
   needs to consider the total time taken to reduce the load following a
   lack of feedback or a congestion event.  When the most recent RTT
   measurement is smaller than the actual RTT, or the measured loss rate
   is smaller than the current loss rate, an application can
   overestimate the available capacity.  Such overestimation can
   result in a sending rate that creates congestion to the application
   or other flows sharing the path capacity, and can contribute to
   congestion collapse -- both of these need to be avoided.

   A congestion control designed for UDP SHOULD respond as quickly as
   possible when it experiences congestion, and it SHOULD take into
   account both the loss rate and the response time when choosing a new
   rate.  The implemented congestion control scheme SHOULD result in
   bandwidth (capacity) use that is comparable to that of TCP within an
   order of magnitude, so that it does not starve other flows sharing a
   common bottleneck.

3.1.6.  Burst Mitigation and Pacing

   UDP applications SHOULD provide mechanisms to regulate the bursts of
   transmission that the application may send to the network.  Many TCP
   and SCTP implementations provide mechanisms that prevent a sender
   from generating long bursts at line-rate, since these are known to
   induce early loss to applications sharing a common network
   bottleneck.  The use of pacing with TCP [ALLMAN] has also been shown
   to improve the coexistence of TCP flows with other flows.  The need
   to avoid excessive transmission bursts is also noted in
   specifications for applications (e.g., [RFC7143]).

   Even low data-volume UDP flows may benefit from packet pacing, e.g.,
   an application that sends three copies of a packet to improve
   robustness to loss is RECOMMENDED to pace out those three packets
   over several RTTs, to reduce the probability that all three packets
   will be lost due to the same congestion event (or other event, such
   as burst corruption).

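   A pacer can be as simple as computing per-packet send times instead
   of transmitting a burst back-to-back.  The helper below is an
   illustrative sketch that spreads a burst evenly across a chosen
   window (e.g., one or several RTTs):

```python
def pacing_schedule(num_packets, window, now=0.0):
    """Return send times that spread num_packets evenly over window
    seconds, instead of emitting them as a line-rate burst."""
    if num_packets <= 1:
        return [now]
    gap = window / num_packets
    return [now + i * gap for i in range(num_packets)]
```

   For instance, the three redundant copies mentioned above could be
   scheduled with pacing_schedule(3, several_rtts) rather than being
   sent together, reducing the chance that one congestion event loses
   all three.
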
3.1.7.  Explicit Congestion Notification

   Internet applications can use Explicit Congestion Notification (ECN)
   [RFC3168] to gain benefits for the services they support [RFC8087].

   Internet transports, such as TCP, provide a set of mechanisms that
   are needed to utilize ECN.  ECN operates by setting an ECN-capable
   codepoint (ECT(0) or ECT(1)) in the IP header of packets that are
   sent.  This indicates to ECN-capable network devices (routers and
   other devices) that they may mark (set the Congestion Experienced
   (CE) codepoint) rather than drop the IP packet as a signal of
   incipient congestion.

   UDP applications can also benefit from enabling ECN, providing that
   the API supports ECN and that they implement the required protocol
   mechanisms to support ECN.

   The set of mechanisms required for an application to use ECN over UDP
   is:

   o  A sender MUST provide a method to determine (e.g., negotiate) that
      the corresponding application is able to provide ECN feedback
      using a compatible ECN method.

   o  A receiver that enables the use of ECN for a UDP port MUST check
      the ECN field at the receiver for each UDP datagram that it
      receives on this port.

   o  The receiving application needs to provide feedback of congestion
      information to the sending application.  This MUST report the
      presence of datagrams received with a CE-mark by providing a
      mechanism to feed this congestion information back to the sending
      application.  The feedback MAY also report the presence of ECT(1)
      and ECT(0)/Not-ECT packets [RFC7560].  ([RFC3168] and [RFC7560]
      specify methods for TCP.)

   o  An application sending ECN-capable datagrams MUST provide an
      appropriate congestion reaction when it receives feedback
      indicating that congestion has been experienced.  This ought to
      result in reduction of the sending rate by the UDP congestion
      control method (see Section 3.1) that is not less than the
      reaction of TCP under equivalent conditions.

   o  A sender SHOULD detect network paths that do not support the ECN
      field correctly.  When detected, they need to either
      conservatively react to congestion or even fall back to not using
      ECN [RFC8087].  This method needs to be robust to changes within
      the network path that may occur over the lifetime of a session.

   o  A sender is encouraged to provide a mechanism to detect and react
      appropriately to misbehaving receivers that fail to report
      CE-marked packets [RFC8087].

   [RFC6679] provides guidance and an example of this support by
   describing a method to allow ECN to be used for UDP-based
   applications using the Real-time Transport Protocol (RTP).
   Applications that
   cannot provide this set of mechanisms, but wish to gain the benefits
   of using ECN, are encouraged to use a transport protocol that already
   supports ECN (such as TCP).

3.1.8.  Differentiated Services Model

   An application using UDP can use the differentiated services
   (DiffServ) Quality of Service (QoS) framework.  To enable
   differentiated services processing, a UDP sender sets the

   Differentiated Services Code Point (DSCP) field [RFC2475] in packets
   sent to the network.  Normally, a UDP source/destination port pair
   will set a single DSCP value for all packets belonging to a flow, but
   multiple DSCPs can be used as described later in this section.  A
   DSCP may be chosen from a small set of fixed values (the class
   selector code points), or from a set of recommended values defined in
   the Per Hop Behavior (PHB) specifications, or from values that have
   purely local meanings to a specific network that supports DiffServ.
   In general, packets may be forwarded across multiple networks between
   source and destination.

   In setting a non-default DSCP value, an application must be aware
   that DSCP markings may be changed or removed between the traffic
   source and destination.  This has implications on the design of
   applications that use DSCPs.  Specifically, applications SHOULD be
   designed not to rely on implementation of a specific network
   treatment; they need instead to implement congestion control methods
   to determine if their current sending rate is inducing congestion in
   the network.

   [RFC7657] describes the implications of using DSCPs and provides
   recommendations on using multiple DSCPs within a single network five-
   tuple (source and destination addresses, source and destination
   ports, and the transport protocol used, in this case, UDP or
   UDP-Lite), and particularly the expected impact on transport protocol
   interactions, with congestion control or reliability functionality
   (e.g., retransmission, reordering).  Use of multiple DSCPs can result
   in reordering by increasing the set of network forwarding resources
   used by a sender.  It can also increase exposure to resource
   depletion or failure.
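
   A minimal sketch of how a UDP sender sets the DSCP field follows.
   The DSCP occupies the upper six bits of the former IPv4 TOS byte (or
   the IPv6 Traffic Class), so the value is shifted left by two; the
   Expedited Forwarding PHB (DSCP 46, [RFC3246]) is used here purely as
   an illustration, and whether the network honors it is
   deployment specific.

```python
import socket

DSCP_EF = 46  # Expedited Forwarding PHB, illustrative choice

def set_dscp(sock, dscp):
    """Request DiffServ treatment for all datagrams on this socket."""
    tos = dscp << 2  # DSCP in the upper six bits; ECN bits left clear
    if sock.family == socket.AF_INET:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    else:
        sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, tos)
    return tos

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
set_dscp(sock, DSCP_EF)
```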

3.1.9.  QoS, Pre-Provisioned, or Reserved Capacity

   The IETF usually specifies protocols for use within the Best Effort
   General Internet.  Sometimes it is relevant to specify protocols with
   a different applicability.  An application using UDP can use the
   integrated services QoS framework.  This framework is usually made
   available within controlled environments (e.g., within a single
   administrative domain or bilaterally agreed connection between
   domains).  Applications intended for the Internet SHOULD NOT assume
   that QoS mechanisms are supported by the networks they use, and
   therefore need to provide congestion control, error recovery, etc.,
   in case the actual network path does not provide provisioned service.

   Some UDP applications are only expected to be deployed over network
   paths that use pre-provisioned capacity or capacity reserved using
   dynamic provisioning, e.g., through the Resource Reservation Protocol
   (RSVP).  Multicast applications are also used with pre-provisioned

   capacity (e.g., IPTV deployments within access networks).  These
   applications MAY choose not to implement any congestion control
   mechanism and instead rely on transmitting only on paths where the
   capacity is provisioned and reserved for this use.  This might be an
   acceptable choice for a subset of restricted networking environments,
   but is by no means a safe practice for operation over the wider
   Internet.  Applications that choose this option SHOULD carefully and
   in detail describe the provisioning and management procedures that
   result in the desired containment.

   Applications that support an uncontrolled or unadaptive transmission
   behavior SHOULD NOT do so by default and SHOULD instead require users
   to explicitly enable this mode of operation.

   Applications designed for use within a controlled environment (see
   Section 3.6) may be able to exploit network management functions to
   detect whether they are causing congestion, and react accordingly.
   If the traffic of such applications leaks out into unprovisioned
   Internet paths, it can significantly degrade the performance of other
   traffic sharing the path and even result in congestion collapse.
   Protocols designed for such networks SHOULD provide mechanisms at the
   network edge to prevent leakage of traffic into unprovisioned
   Internet paths (e.g., [RFC7510]).  To protect other applications
   sharing the same path, applications SHOULD also deploy an appropriate
   circuit breaker, as described in Section 3.1.10.

   An IETF specification targeting a controlled environment is expected
   to provide an applicability statement that restricts the application
   to the controlled environment (see Section 3.6).

3.1.10.  Circuit Breaker Mechanisms

   A transport circuit breaker is an automatic mechanism that is used to
   estimate the congestion caused by a flow, and to terminate (or
   significantly reduce the rate of) the flow when excessive congestion
   is detected [RFC8084].  This is a safety measure to prevent
   congestion collapse (starvation of resources available to other
   flows), essential for an Internet that is heterogeneous and for
   traffic that is hard to predict in advance.

   A circuit breaker is intended as a protection mechanism of last
   resort.  Under normal circumstances, a circuit breaker should not be
   triggered; it is designed to protect things when there is severe
   overload.  The goal is usually to limit the maximum transmission rate
   that reflects the available capacity of a network path.  Circuit
   breakers can operate on individual UDP flows or traffic aggregates,
   e.g., traffic sent using a network tunnel.

   [RFC8084] provides guidance and examples on the use of circuit
   breakers.  The use of a circuit breaker in RTP is specified in
   [RFC8083].

   Applications used in the general Internet SHOULD implement a
   transport circuit breaker if they do not implement congestion control
   or operate a low data-volume service (see Section 3.6).  All
   applications MAY implement a transport circuit breaker [RFC8084] and
   are encouraged to consider implementing at least a slow-acting
   transport circuit breaker to provide a protection of last resort for
   their network traffic.
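
   The shape of a slow-acting transport circuit breaker, in the spirit
   of [RFC8084], can be sketched as follows.  The measurement interval,
   loss threshold, and trigger count are illustrative assumptions, not
   values taken from any specification.

```python
class CircuitBreaker:
    """Slow-acting circuit breaker driven by per-interval loss reports."""

    def __init__(self, loss_threshold=0.1, trigger_intervals=3):
        self.loss_threshold = loss_threshold        # loss ratio deemed excessive
        self.trigger_intervals = trigger_intervals  # consecutive bad intervals to trip
        self._bad_intervals = 0
        self.tripped = False

    def report_interval(self, sent, lost):
        """Feed counters for one interval (e.g., from receiver reports)."""
        if self.tripped or sent == 0:
            return self.tripped
        if lost / sent > self.loss_threshold:
            self._bad_intervals += 1
        else:
            self._bad_intervals = 0  # a good interval resets the count
        if self._bad_intervals >= self.trigger_intervals:
            self.tripped = True  # terminate (or drastically reduce) the flow
        return self.tripped
```

   A sender would consult `tripped` before each transmission; because
   this is a protection of last resort, a tripped breaker should
   require explicit (e.g., operator) action to re-enable the flow.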

3.1.11.  UDP Tunnels

   One increasingly popular use of UDP is as a tunneling protocol
   [INT-TUNNELS], where a tunnel endpoint encapsulates the packets of
   another protocol inside UDP datagrams and transmits them to another
   tunnel endpoint, which decapsulates the UDP datagrams and forwards
   the original packets contained in the payload.  One example of such a
   protocol is Teredo [RFC4380].  Tunnels establish virtual links that
   appear to directly connect locations that are distant in the physical
   Internet topology and can be used to create virtual (private)
   networks.  Using UDP as a tunneling protocol is attractive when the
   payload protocol is not supported by middleboxes that may exist along
   the path, because many middleboxes support transmission using UDP.

   Well-implemented tunnels are generally invisible to the endpoints
   that happen to transmit over a path that includes tunneled links.  On
   the other hand, to the routers along the path of a UDP tunnel, i.e.,
   the routers between the two tunnel endpoints, the traffic that a UDP
   tunnel generates is a regular UDP flow, and the encapsulator and
   decapsulator appear as regular UDP-sending and UDP-receiving
   applications.  Because other flows can share the path with one or
   more UDP tunnels, congestion control needs to be considered.

   Two factors determine whether a UDP tunnel needs to employ specific
   congestion control mechanisms: first, whether the payload traffic is
   IP-based; and second, whether the tunneling scheme generates UDP
   traffic at a volume that corresponds to the volume of payload traffic
   carried within the tunnel.

   IP-based unicast traffic is generally assumed to be congestion
   controlled, i.e., it is assumed that the transport protocols
   generating IP-based unicast traffic at the sender already employ
   mechanisms that are sufficient to address congestion on the path.
   Consequently, a tunnel carrying IP-based unicast traffic should

   already interact appropriately with other traffic sharing the path,
   and specific congestion control mechanisms for the tunnel are not
   necessary.

   However, if the IP traffic in the tunnel is known not to be
   congestion controlled, additional measures are RECOMMENDED to limit
   the impact of the tunneled traffic on other traffic sharing the path.
   For the specific case of a tunnel that carries IP multicast traffic,
   see Section 4.1.

   The following guidelines define these possible cases in more detail:

   1.  A tunnel generates UDP traffic at a volume that corresponds to
       the volume of payload traffic, and the payload traffic is IP
       based and congestion controlled.

       This is arguably the most common case for Internet tunnels.  In
       this case, the UDP tunnel SHOULD NOT employ its own congestion
       control mechanism, because congestion losses of tunneled traffic
       will already trigger an appropriate congestion response at the
       original senders of the tunneled traffic.  A circuit breaker
       mechanism may provide benefit by controlling the envelope of the
       aggregated traffic.

       Note that this guideline is built on the assumption that most
       IP-based communication is congestion controlled.  If a UDP tunnel
       is used for IP-based traffic that is known to not be congestion
       controlled, the next set of guidelines applies.

   2.  A tunnel generates UDP traffic at a volume that corresponds to
       the volume of payload traffic, and the payload traffic is not
       known to be IP based, or is known to be IP based but not
       congestion controlled.

       This can be the case, for example, when some link-layer protocols
       are encapsulated within UDP (but not all link-layer protocols;
       some are congestion controlled).  Because it is not known that
       congestion losses of tunneled non-IP traffic will trigger an
       appropriate congestion response at the senders, the UDP tunnel
       SHOULD employ an appropriate congestion control mechanism or
       circuit breaker mechanism designed for the traffic it carries.
       Because tunnels are usually bulk-transfer applications as far as
       the intermediate routers are concerned, the guidelines in
       Section 3.1.2 apply.

   3.  A tunnel generates UDP traffic at a volume that does not
       correspond to the volume of payload traffic, independent of
       whether the payload traffic is IP based or congestion controlled.

       Examples of this class include UDP tunnels that send at a
       constant rate, increase their transmission rates under loss, for
       example, due to increasing redundancy when Forward Error
       Correction is used, or are otherwise unconstrained in their
       transmission behavior.  These specialized uses of UDP for
       tunneling go beyond the scope of the general guidelines given in
       this document.  The implementer of such specialized tunnels
       SHOULD carefully consider congestion control in the design of
       their tunneling mechanism and SHOULD consider use of a circuit
       breaker mechanism.

   The type of encapsulated payload might be identified by a UDP port,
   or by an Ethernet Type or IP protocol number.  A tunnel SHOULD
   provide mechanisms to restrict the types of flows that may be
   carried by the tunnel.  For instance, a UDP tunnel designed to carry
   IP needs to filter out non-IP traffic at the ingress.  This is
   particularly important when a generic tunnel encapsulation is used
   (e.g., one that encapsulates using an EtherType value).  Such tunnels
   SHOULD provide a mechanism to restrict the types of traffic that are
   allowed to be encapsulated for a given deployment.
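
   The ingress filtering described above can be sketched as a check on
   the EtherType before encapsulation.  The frame layout assumed here
   is a plain Ethernet II header (destination MAC, source MAC,
   EtherType); anything that is not IPv4 or IPv6 is dropped before it
   can enter an IP-only tunnel.

```python
import struct

ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_IPV6 = 0x86DD

def admit_frame(frame: bytes) -> bool:
    """Return True if this Ethernet frame may enter the IP-only tunnel."""
    if len(frame) < 14:
        return False  # too short to contain an Ethernet II header
    # EtherType is the 16-bit field at offset 12, network byte order
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    return ethertype in (ETHERTYPE_IPV4, ETHERTYPE_IPV6)
```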

   Designing a tunneling mechanism requires significantly more expertise
   than needed for many other UDP applications, because tunnels are
   usually intended to be transparent to the endpoints transmitting over
   them, so they need to correctly emulate the behavior of an IP link
   [INT-TUNNELS], for example:

   o  Requirements for tunnels that carry or encapsulate using ECN code
      points [RFC6040].

   o  Usage of the IP DSCP field by tunnel endpoints [RFC2983].

   o  Encapsulation considerations in the design of tunnels [ENCAP].

   o  Usage of ICMP messages [INT-TUNNELS].

   o  Handling of fragmentation and packet size for tunnels

   o  Source port usage for tunnels designed to support equal cost
      multipath (ECMP) routing (see Section 5.1.1).

   o  Guidance on the need to protect headers [INT-TUNNELS] and the use
      of checksums for IPv6 tunnels (see Section 3.4.1).

   o  Support for operations and maintenance [INT-TUNNELS].

   At the same time, the tunneled traffic is application traffic like
   any other from the perspective of the networks the tunnel transmits
   over.  This document only touches upon the congestion control
   considerations for implementing UDP tunnels; a discussion of other
   required tunneling behavior is out of scope.

3.2.  Message Size Guidelines

   IP fragmentation lowers the efficiency and reliability of Internet
   communication.  The loss of a single fragment results in the loss of
   an entire fragmented packet, because even if all other fragments are
   received correctly, the original packet cannot be reassembled and
   delivered.  This fundamental issue with fragmentation exists for both
   IPv4 and IPv6.

   In addition, some network address translators (NATs) and firewalls
   drop IP fragments.  The network address translation performed by a
   NAT only operates on complete IP packets, and some firewall policies
   also require inspection of complete IP packets.  Even with these
   being the case, some NATs and firewalls simply do not implement the
   necessary reassembly functionality; instead, they choose to drop all
   fragments.  Finally, [RFC4963] documents other issues specific to
   IPv4 fragmentation.

   Due to these issues, an application SHOULD NOT send UDP datagrams
   that result in IP packets that exceed the Maximum Transmission Unit
   (MTU) along the path to the destination.  Consequently, an
   application SHOULD either use the path MTU information provided by
   the IP layer or implement Path MTU Discovery (PMTUD) itself [RFC1191]
   [RFC1981] [RFC4821] to determine whether the path to a destination
   will support its desired message size without fragmentation.

   However, the ICMP messages that enable path MTU discovery are
   increasingly being filtered by middleboxes (including firewalls)
   [RFC4890].  When the path includes a tunnel, some devices acting as
   a tunnel ingress discard ICMP messages that originate from network
   devices over which the tunnel passes, preventing these from reaching
   the UDP endpoint.

   Packetization Layer Path MTU Discovery (PLPMTUD) [RFC4821] does not
   rely upon network support for ICMP messages and is therefore
   considered more robust than standard PMTUD.  It is not susceptible to
   "black holing" of ICMP messages.  To operate, PLPMTUD requires
   changes to the way the transport is used: both to transmit probe
   packets and to account for the loss or success of these probes.  This
   not only updates the PMTU algorithm, it also impacts loss recovery,
   congestion control, etc.  These updated mechanisms can be implemented

   within a connection-oriented transport (e.g., TCP, SCTP, DCCP), but
   they are not a part of UDP; this type of feedback is not typically
   present for unidirectional applications.

   Therefore, PLPMTUD places additional design requirements on a UDP
   application that wishes to use this method.  This is especially true
   for UDP tunnels, because the overhead of sending probe packets needs
   to be accounted for and may require adding a congestion control
   mechanism to the tunnel (see Section 3.1.11) as well as complicating
   the data path at a tunnel decapsulator.
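
   The search component of PLPMTUD can be sketched as follows.  The
   probe_fn callback is an assumption standing in for real transport
   machinery: it must send a probe datagram of the given size with
   fragmentation disabled and report whether the probe was
   acknowledged.  A real implementation must additionally distinguish
   probe loss from congestion loss and re-probe when the path changes.

```python
IPV6_MIN_MTU = 1280  # a conservative lower bound for the search

def plpmtud_search(probe_fn, low=IPV6_MIN_MTU, high=9000):
    """Binary-search the largest probe size that succeeds.

    probe_fn(size) -> bool sends a probe datagram of `size` bytes with
    DF set and returns whether it was acknowledged by the peer.
    """
    if not probe_fn(low):
        return None  # even the assumed minimum size fails
    while low < high:
        mid = (low + high + 1) // 2
        if probe_fn(mid):
            low = mid        # probe delivered: path supports at least mid
        else:
            high = mid - 1   # probe lost: treat the size as too large
    return low
```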

   Applications that do not follow the recommendation to do PMTU/PLPMTUD
   discovery SHOULD still avoid sending UDP datagrams that would result
   in IP packets that exceed the path MTU.  Because the actual path MTU
   is unknown, such applications SHOULD fall back to sending messages
   that are shorter than the default effective MTU for sending (EMTU_S
   in [RFC1122]).  For IPv4, EMTU_S is the smaller of 576 bytes and the
   first-hop MTU [RFC1122].  For IPv6, EMTU_S is 1280 bytes [RFC2460].
   The effective PMTU for a directly connected destination (with no
   routers on the path) is the configured interface MTU, which could be
   less than the maximum link payload size.  Transmission of minimum-
   sized UDP datagrams is inefficient over paths that support a larger
   PMTU, which is a second reason to implement PMTU discovery.

   To determine an appropriate UDP payload size, applications MUST
   subtract the size of the IP header (which includes any IPv4 optional
   headers or IPv6 extension headers) as well as the length of the UDP
   header (8 bytes) from the PMTU size.  This size, known as the Maximum
   Segment Size (MSS), can be obtained from the TCP/IP stack [RFC1122].
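
   The arithmetic above can be made concrete as follows.  The header
   sizes used are the fixed minimums; any IPv4 options or IPv6
   extension headers actually present would need to be subtracted as
   well.

```python
UDP_HEADER = 8        # fixed UDP header length
IPV4_HEADER_MIN = 20  # IPv4 header without options
IPV6_HEADER = 40      # fixed IPv6 header, without extension headers

def max_udp_payload(pmtu, ipv6=False, ext_headers=0):
    """Largest UDP payload that fits in one unfragmented IP packet."""
    ip_header = (IPV6_HEADER if ipv6 else IPV4_HEADER_MIN) + ext_headers
    return pmtu - ip_header - UDP_HEADER

# Typical Ethernet path, IPv4: 1500 - 20 - 8 = 1472 bytes
assert max_udp_payload(1500) == 1472
# IPv6 minimum MTU: 1280 - 40 - 8 = 1232 bytes
assert max_udp_payload(1280, ipv6=True) == 1232
```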

   Applications that do not send messages that exceed the effective PMTU
   of IPv4 or IPv6 need not implement any of the above mechanisms.  Note
   that the presence of tunnels can cause an additional reduction of the
   effective PMTU [INT-TUNNELS], so implementing PMTU discovery may be
   beneficial.

   Applications that fragment an application-layer message into multiple
   UDP datagrams SHOULD perform this fragmentation so that each datagram
   can be received independently, and be independently retransmitted in
   the case where an application implements its own reliability
   mechanism.
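
   One way to meet this guideline is to give every datagram a small
   self-describing header, so that each fragment can be placed by the
   receiver independently and retransmitted individually.  The 6-byte
   header layout below (message id, fragment index, fragment count) is
   an illustrative assumption, not a specified format.

```python
import struct

HEADER = struct.Struct("!HHH")  # msg_id, fragment index, fragment count

def fragment(msg_id, payload, max_fragment):
    """Split an application message into independently usable datagrams."""
    chunks = [payload[i:i + max_fragment]
              for i in range(0, len(payload), max_fragment)] or [b""]
    return [HEADER.pack(msg_id, i, len(chunks)) + c
            for i, c in enumerate(chunks)]

def parse(datagram):
    """Recover (msg_id, index, total, data) from one received datagram."""
    msg_id, index, total = HEADER.unpack_from(datagram)
    return msg_id, index, total, datagram[HEADER.size:]
```

   Because each datagram identifies itself, loss of one fragment never
   blocks delivery of the others, and a reliability layer can request
   exactly the fragments that were lost.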
