RFC 8085

UDP Usage Guidelines

3.3.  Reliability Guidelines

   Application designers are generally aware that UDP does not provide
   any reliability, e.g., it does not retransmit any lost packets.
   Often, this is a main reason to consider UDP as a transport protocol.
   Applications that do require reliable message delivery MUST implement
   an appropriate mechanism themselves.

   UDP also does not protect against datagram duplication, i.e., an
   application may receive multiple copies of the same UDP datagram,
   with some duplicates arriving potentially much later than the first.
   Application designers SHOULD handle such datagram duplication
   gracefully, and they may consequently need to implement mechanisms to
   detect duplicates.  Even if UDP datagram reception triggers only
   idempotent operations, applications may want to suppress duplicate
   datagrams to reduce load.

   Applications that require ordered delivery MUST reestablish datagram
   ordering themselves.  The Internet can significantly delay some
   packets with respect to others, e.g., due to routing transients,
   intermittent connectivity, or mobility.  This can cause reordering,
   where UDP datagrams arrive at the receiver in an order different from
   the transmission order.

   Applications that use multiple transport ports need to be robust to
   reordering between sessions.  Load-balancing techniques within the
   network, such as Equal Cost Multipath (ECMP) forwarding, can also
   result in a lack of ordering between different transport sessions,
   even between the same two network endpoints.

   It is important to note that the time by which packets are reordered
   or after which duplicates can still arrive can be very large.  Even
   more importantly, there is no well-defined upper boundary here.
   [RFC793] defines the maximum delay a TCP segment should experience --
   the Maximum Segment Lifetime (MSL) -- as 2 minutes.  No other RFC
   defines an MSL for other transport protocols or IP itself.  The MSL
   value defined for TCP is conservative enough that it SHOULD be used
   by other protocols, including UDP.  Therefore, applications SHOULD be
   robust to the reception of delayed or duplicate packets that are
   received within this 2-minute interval.

   Retransmission of lost packets or messages is a common reliability
   mechanism.  Such retransmissions can increase network load in
   response to congestion, worsening that congestion.  Any application
   that uses retransmission is responsible for congestion control of its
   retransmissions (as well as the application's original traffic);
   hence, it is subject to the Congestion Control guidelines in
   Section 3.1.  Guidance on the appropriate measurement of RTT in
   Section 3.1.1 also applies for timers used for retransmission packet-
   loss detection.

   Instead of implementing these relatively complex reliability
   mechanisms by itself, an application that requires reliable and
   ordered message delivery SHOULD whenever possible choose an IETF
   standard transport protocol that provides these features.

3.4.  Checksum Guidelines

   The UDP header includes an optional, 16-bit one's complement checksum
   that provides an integrity check.  These checks are not strong from a
   coding or cryptographic perspective and are not designed to detect
   physical-layer errors or malicious modification of the datagram
   [RFC3819].  Application developers SHOULD implement additional checks
   where data integrity is important, e.g., through a Cyclic Redundancy
   Check (CRC) or keyed or non-keyed cryptographic hash included with
   the data to verify the integrity of an entire object/file sent over
   the UDP service.
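
   For example, a sender could append a checksum computed over an
   entire object before transmitting it in UDP datagrams.  The sketch
   below uses a CRC-32 purely as an illustration of such an
   application-level integrity check; a keyed cryptographic hash would
   be needed to resist deliberate modification.

      import zlib

      def protect(obj: bytes) -> bytes:
          # Append a 4-byte CRC-32 computed over the whole object.
          return obj + zlib.crc32(obj).to_bytes(4, "big")

      def verify(data: bytes) -> bytes:
          obj, crc = data[:-4], int.from_bytes(data[-4:], "big")
          if zlib.crc32(obj) != crc:
              raise ValueError("object failed integrity check")
          return obj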

   The UDP checksum provides a statistical guarantee that the payload
   was not corrupted in transit.  It also allows the receiver to verify
   that it was the intended destination of the packet, because it covers
   the IP addresses, port numbers, and protocol number, and it verifies
   that the packet is not truncated or padded, because it covers the
   size field.  Therefore, it protects an application against receiving
   corrupted payload data in place of, or in addition to, the data that
   was sent.  More description of the set of checks performed using the
   checksum field is provided in Section 3.1 of [RFC6396].

   Applications SHOULD enable UDP checksums [RFC1122].  For IPv4,
   [RFC768] permits an option to disable their use, by setting a zero
   checksum value.  An application is permitted to optionally discard
   UDP datagrams with a zero checksum [RFC1122].

   When UDP is used over IPv6, the UDP checksum is relied upon to
   protect both the IPv6 and UDP headers from corruption (because IPv6
   lacks a checksum) and MUST be used as specified in [RFC2460].  Under
   specific conditions, a UDP application is allowed to use UDP
   zero-checksum mode with a tunnel protocol (see Section 3.4.1).

   Applications that choose to disable UDP checksums MUST NOT make
   assumptions regarding the correctness of received data and MUST
   behave correctly when a UDP datagram is received that was originally
   sent to a different destination or is otherwise corrupted.

3.4.1.  IPv6 Zero UDP Checksum

   [RFC6935] defines a method that enables use of UDP zero-checksum
   mode with a tunnel protocol, provided that the method
   satisfies the requirements in [RFC6936].  The application MUST
   implement mechanisms and/or usage restrictions when enabling this
   mode.  This includes defining the scope for usage and measures to
   prevent leakage of traffic to other UDP applications (see Appendix A
   and Section 3.6).  These additional design requirements for using a
   zero IPv6 UDP checksum are not present for IPv4, since the IPv4
   header validates information that is not protected in an IPv6 packet.
   Key requirements are:

   o  Use of the UDP checksum with IPv6 MUST be the default
      configuration for all implementations [RFC6935].  The receiving
      endpoint MUST only allow the use of UDP zero-checksum mode for
      IPv6 on a UDP destination port that is specifically enabled.

   o  An application that supports a checksum different from that in
      [RFC2460] MUST comply with all implementation requirements
      specified in Section 4 of [RFC6936] and with the usage
      requirements specified in Section 5 of [RFC6936].

   o  A UDP application MUST check that the source and destination IPv6
      addresses are valid for any packets with a UDP zero-checksum and
      MUST discard any packet for which this check fails.  To protect
      from misdelivery, new encapsulation designs SHOULD include an
      integrity check at the transport layer that includes at least the
      IPv6 header, the UDP header and the shim header for the
      encapsulation, if any [RFC6936].

   o  One way to help satisfy the requirements of [RFC6936] may be to
      limit the usage of such tunnels, e.g., to constrain traffic to an
      operator network, as discussed in Section 3.6.  The encapsulation
      defined for MPLS in UDP [RFC7510] chooses this approach.

   As in IPv4, IPv6 applications that choose to disable UDP checksums
   MUST NOT make assumptions regarding the correctness of received data
   and MUST behave correctly when a UDP datagram is received that was
   originally sent to a different destination or is otherwise corrupted.

   IPv6 datagrams with a zero UDP checksum will not be passed by any
   middlebox that validates the checksum based on [RFC2460] or that
   updates the UDP checksum field, such as NATs or firewalls.  Changing
   this behavior would require such middleboxes to be updated to
   correctly handle datagrams with zero UDP checksums.  To ensure end-
   to-end robustness, applications that may be deployed in the general
   Internet MUST provide a mechanism to safely fall back to using a
   checksum when a path change occurs that redirects a zero UDP checksum
   flow over a path that includes a middlebox that discards IPv6
   datagrams with a zero UDP checksum.

3.4.2.  UDP-Lite

   A special class of applications can derive benefit from having
   partially damaged payloads delivered, rather than discarded, when
   using paths that include error-prone links.  Such applications can
   tolerate payload corruption and MAY choose to use the Lightweight
   User Datagram Protocol (UDP-Lite) [RFC3828] variant of UDP instead of
   basic UDP.  Applications that choose to use UDP-Lite instead of UDP
   should still follow the congestion control and other guidelines
   described for use with UDP in Section 3.

   UDP-Lite changes the semantics of the UDP "payload length" field to
   that of a "checksum coverage length" field.  Otherwise, UDP-Lite is
   semantically identical to UDP.  The interface of UDP-Lite differs
   from that of UDP by the addition of a single (socket) option that
   communicates the checksum coverage length: at the sender, this
   specifies the intended checksum coverage, with the remaining
   unprotected part of the payload called the "error-insensitive part".
   By default, the UDP-Lite checksum coverage extends across the entire
   datagram.  If required, an application may dynamically modify this
   length value, e.g., to offer greater protection to some messages.
   UDP-Lite always verifies that a packet was delivered to the intended
   destination, i.e., always verifies the header fields.  Errors in the
   insensitive part will not cause a UDP-Lite datagram to be discarded
   by the destination.  Therefore, applications using UDP-Lite MUST NOT
   make assumptions regarding the correctness of the data received in
   the insensitive part of the UDP-Lite payload.

   A UDP-Lite sender SHOULD select the minimum checksum coverage to
   include all sensitive payload information.  For example, applications
   that use the Real-time Transport Protocol (RTP) [RFC3550] will
   likely want to protect the RTP header against corruption.  Where
   appropriate, applications MUST also introduce their own validity
   checks for protocol information carried in the insensitive part of
   the UDP-Lite payload (e.g., internal CRCs).

   A UDP-Lite receiver MUST set a minimum coverage threshold for
   incoming packets that is not smaller than the smallest coverage used
   by the sender [RFC3828].  The receiver SHOULD select a threshold that
   is sufficiently large to block packets with an inappropriately short
   coverage field.  This may be a fixed value, or it may be negotiated
   by an application.  UDP-Lite does not provide mechanisms to negotiate
   the checksum coverage between the sender and receiver.  Therefore,
   this needs to be performed by the application.
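
   On platforms that implement UDP-Lite, the checksum coverage is
   typically configured through socket options.  The sketch below uses
   the Linux protocol and option numbers, which are assumptions of the
   example and are not portable; coverage values count from the start
   of the UDP-Lite header [RFC3828].

      import socket

      # Linux-specific values; not exposed by the Python socket module.
      IPPROTO_UDPLITE = 136
      UDPLITE_SEND_CSCOV = 10    # coverage of outgoing datagrams
      UDPLITE_RECV_CSCOV = 11    # minimum acceptable incoming coverage

      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                        IPPROTO_UDPLITE)
      # Cover the 8-byte UDP-Lite header plus the first 12 payload bytes.
      s.setsockopt(IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV, 20)
      # Discard incoming packets whose coverage is shorter than 20 bytes.
      s.setsockopt(IPPROTO_UDPLITE, UDPLITE_RECV_CSCOV, 20)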

   Applications can still experience packet loss when using UDP-Lite.
   The enhancements offered by UDP-Lite rely upon a link being able to
   intercept the UDP-Lite header to correctly identify the partial
   coverage required.  When tunnels and/or encryption are used, this can
   result in UDP-Lite datagrams being treated the same as UDP datagrams,
   i.e., result in packet loss.  Use of IP fragmentation can also
   prevent special treatment for UDP-Lite datagrams, and this is another
   reason why applications SHOULD avoid IP fragmentation (Section 3.2).

   UDP-Lite is supported in some endpoint protocol stacks.  Current
   support for middlebox traversal using UDP-Lite is poor, because UDP-
   Lite uses a different IPv4 protocol number or IPv6 "next header"
   value than that used for UDP; therefore, few middleboxes are
   currently able to interpret UDP-Lite and take appropriate actions
   when forwarding the packet.  This makes UDP-Lite less suited for
   applications needing general Internet support, until such time as
   UDP-Lite has achieved better support in middleboxes.

3.5.  Middlebox Traversal Guidelines

   NATs and firewalls are examples of intermediary devices
   ("middleboxes") that can exist along an end-to-end path.  A middlebox
   typically performs a function that requires it to maintain per-flow
   state.  For connection-oriented protocols, such as TCP, middleboxes
   snoop and parse the connection-management information, and create and
   destroy per-flow state accordingly.  For a connectionless protocol
   such as UDP, this approach is not possible.  Consequently,
   middleboxes can create per-flow state when they see a packet that --
   according to some local criteria -- indicates a new flow, and destroy
   the state after some time during which no packets belonging to the
   same flow have arrived.

   Depending on the specific function that the middlebox performs, this
   behavior can introduce a time-dependency that restricts the kinds of
   UDP traffic exchanges that will be successful across the middlebox.
   For example, NATs and firewalls typically define the partial path on
   one side of them to be interior to the domain they serve, whereas the
   partial path on their other side is defined to be exterior to that
   domain.  Per-flow state is typically created when the first packet
   crosses from the interior to the exterior, and while the state is
   present, NATs and firewalls will forward return traffic.  Return
   traffic that arrives after the per-flow state has timed out is
   dropped, as is other traffic that arrives from the exterior.

   Many applications that use UDP for communication operate across
   middleboxes without needing to employ additional mechanisms.  One
   example is the Domain Name System (DNS), which has a strict request-
   response communication pattern that typically completes within
   seconds.

   Other applications may experience communication failures when
   middleboxes destroy the per-flow state associated with an application
   session during periods when the application does not exchange any UDP
   traffic.  Applications SHOULD be able to gracefully handle such
   communication failures and implement mechanisms to re-establish
   application-layer sessions and state.

   For some applications, such as media transmissions, this
   re-synchronization is highly undesirable, because it can cause user-
   perceivable playback artifacts.  Such specialized applications MAY
   send periodic keep-alive messages to attempt to refresh middlebox
   state (e.g., [RFC7675]).  It is important to note that keep-alive
   messages are not recommended for general use -- they are unnecessary
   for many applications and can consume significant amounts of system
   and network resources.

   An application that needs to employ keep-alive messages to deliver
   useful service over UDP in the presence of middleboxes SHOULD NOT
   transmit them more frequently than once every 15 seconds and SHOULD
   use longer intervals when possible.  No common timeout has been
   specified for per-flow UDP state for arbitrary middleboxes.  NATs
   require a state timeout of 2 minutes or longer [RFC4787].  However,
   empirical evidence suggests that a significant fraction of currently
   deployed middleboxes unfortunately use shorter timeouts.  The timeout
   of 15 seconds originates with the Interactive Connectivity
   Establishment (ICE) protocol [RFC5245].  When an application is
   deployed in a controlled environment, the deployer SHOULD investigate
   whether the target environment allows applications to use longer
   intervals, or whether it offers mechanisms to explicitly control
   middlebox state timeout durations, for example, using the Port
   Control Protocol (PCP) [RFC6887], Middlebox Communications (MIDCOM)
   [RFC3303], Next Steps in Signaling (NSIS) [RFC5973], or Universal
   Plug and Play (UPnP) [UPnP].  It is RECOMMENDED that applications
   apply slight random variations ("jitter") to the timing of keep-alive
   transmissions, to reduce the potential for persistent synchronization
   between keep-alive transmissions from different hosts [RFC7675].
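
   A minimal sketch of such a keep-alive sender is shown below; it
   assumes an already-connected UDP socket and an application-chosen
   payload, and the 30-second base interval is an assumption of the
   example (the 15-second floor and the jitter follow the guidance
   above).

      import random, socket, time

      KEEPALIVE_INTERVAL = 30.0   # seconds; never below the 15 s floor

      def send_keepalives(sock: socket.socket, payload: bytes = b"ka"):
          while True:
              # Apply +/-10% jitter to avoid synchronization with others.
              time.sleep(KEEPALIVE_INTERVAL * random.uniform(0.9, 1.1))
              try:
                  sock.send(payload)   # refreshes middlebox per-flow state
              except OSError:
                  break   # let the application re-establish the session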

   Sending keep-alive messages is not a substitute for implementing a
   mechanism to recover from broken sessions.  Like all UDP datagrams,
   keep-alive messages can be delayed or dropped, causing middlebox
   state to time out.  In addition, the congestion control guidelines in
   Section 3.1 cover all UDP transmissions by an application, including
   the transmission of middlebox keep-alive messages.  Congestion
   control may thus lead to delays or temporary suspension of keep-alive
   transmission.

   Keep-alive messages are NOT RECOMMENDED for general use.  They are
   unnecessary for many applications and may consume significant
   resources.  For example, on battery-powered devices, if an
   application needs to maintain connectivity for long periods with
   little traffic, the frequency at which keep-alive messages are sent
   can become the determining factor that governs power consumption,
   depending on the underlying network technology.

   Because many middleboxes are designed to require keep-alive messages
   for TCP connections at a frequency that is much lower than that
   needed for UDP, this difference alone can often be sufficient to
   prefer TCP over UDP for these deployments.  On the other hand, there
   is anecdotal evidence that suggests that direct communication through
   middleboxes, e.g., by using ICE [RFC5245], does succeed less often
   with TCP than with UDP.  The trade-offs between different transport
   protocols -- especially when it comes to middlebox traversal --
   deserve careful analysis.

   UDP applications that could be deployed in the Internet need to be
   designed with the understanding that there are many variants of
   middlebox behavior and that, although UDP is connectionless,
   middleboxes often maintain state for each UDP flow.  Using multiple
   UDP flows can
   consume available state space and also can lead to changes in the way
   the middlebox handles subsequent packets (either to protect its
   internal resources, or to prevent perceived misuse).  The probability
   of path failure can increase when applications use multiple UDP flows
   in parallel (see Section 5.1.2 for recommendations on usage of
   multiple ports).

3.6.  Limited Applicability and Controlled Environments

   Two different types of applicability have been identified for the
   specification of IETF applications that utilize UDP:

   General Internet.  By default, IETF specifications target deployment
      on the general Internet.  Experience has shown that successful
      protocols developed in one specific context or for a particular
      application tend to become used in a wider range of contexts.  For
      example, a protocol with an initial deployment within a local area
      network may subsequently be used over a virtual network that
      traverses the Internet, or in the Internet in general.
      Applications designed for general Internet use may experience a
      range of network device behaviors and, in particular, should
      consider whether applications need to operate over paths that may
      include middleboxes.

   Controlled Environment.  A protocol/encapsulation/tunnel could be
      designed to be used only within a controlled environment.  For
      example, an application designed for use by a network operator
      might only be deployed within the network of that single network
      operator or on networks of an adjacent set of cooperating network
      operators.  The application traffic may then be managed to avoid
      congestion, rather than relying on built-in mechanisms, which are
      required when operating over the general Internet.  Applications
      that target a limited applicability use case may be able to take
      advantage of specific hardware (e.g., carrier-grade equipment) or
      underlying protocol features of the subnetwork over which they are
      used.

   Specifications addressing a limited applicability use case or a
   controlled environment SHOULD identify how, in their restricted
   deployment, a level of safety is provided that is equivalent to that
   of a protocol designed for operation over the general Internet.  For
   example, extensive deployment experience with particular methods
   that provide features that cannot be expected in general Internet
   equipment, together with the robustness of the MPLS design to header
   corruption, both helped justify the use of an alternate UDP
   integrity check [RFC7510].

   An IETF specification targeting a controlled environment is expected
   to provide an applicability statement that restricts the application
   traffic to the controlled environment, and it would be expected to
   describe how methods can be provided to discourage or prevent escape
   of corrupted packets from the environment (for example, Section 5 of
   [RFC7510]).

4.  Multicast UDP Usage Guidelines

   This section complements Section 3 by providing additional guidelines
   that are applicable to multicast and broadcast usage of UDP.

   Multicast and broadcast transmission [RFC1112] usually employ the UDP
   transport protocol, although they may be used with other transport
   protocols (e.g., UDP-Lite).

   There are currently two models of multicast delivery: the Any-Source
   Multicast (ASM) model as defined in [RFC1112] and the Source-Specific
   Multicast (SSM) model as defined in [RFC4607].  ASM group members
   will receive all data sent to the group by any source, while SSM
   constrains the distribution tree to only one single source.

   Specialized classes of applications also use UDP for IP multicast or
   broadcast [RFC919].  The design of such specialized applications
   requires expertise that goes beyond simple, unicast-specific
   guidelines, since these senders may transmit to potentially very many
   receivers across potentially very heterogeneous paths at the same
   time, which significantly complicates congestion control, flow
   control, and reliability mechanisms.

   This section provides guidance on multicast and broadcast UDP usage.
   Use of broadcast by an application is normally constrained by routers
   to the local subnetwork.  However, use of tunneling techniques and
   proxies can and does result in some broadcast traffic traversing
   Internet paths.  These guidelines therefore also apply to broadcast
   traffic.

   The IETF has defined a reliable multicast framework [RFC3048] and
   several building blocks to aid the designers of multicast
   applications, such as [RFC3738] or [RFC4654].

   Senders to anycast destinations must be aware that successive
   messages sent to the same anycast IP address may be delivered to
   different anycast nodes, i.e., arrive at different locations in the
   topology.

   Most UDP tunnels that carry IP multicast traffic use a tunnel
   encapsulation with a unicast destination address, such as Automatic
   Multicast Tunneling [RFC7450].  These MUST follow the same
   requirements as a tunnel carrying unicast data (see Section 3.1.11).
   There are deployment cases and solutions where the outer header of a
   UDP tunnel contains a multicast destination address, such as
   [RFC6513].  These cases are primarily deployed in controlled
   environments over reserved capacity, often operating within a single
   administrative domain, or between two domains over a bilaterally
   agreed upon path with reserved capacity, and so congestion control is
   OPTIONAL, but circuit breaker techniques are still RECOMMENDED in
   order to restore some degree of service should the offered load
   exceed the reserved capacity (e.g., due to misconfiguration).

4.1.  Multicast Congestion Control Guidelines

   Unicast congestion-controlled transport mechanisms are often not
   applicable to multicast distribution services, or simply do not scale
   to large multicast trees, since they require bidirectional
   communication and adapt the sending rate to accommodate the network
   conditions to a single receiver.  In contrast, multicast distribution
   trees may fan out to massive numbers of receivers, which limits the
   scalability of an in-band return channel to control the sending rate,
   and the one-to-many nature of multicast distribution trees prevents
   adapting the rate to the requirements of an individual receiver.  For
   this reason, generating TCP-compatible aggregate flow rates for
   Internet multicast data, either native or tunneled, is the
   responsibility of the application implementing the congestion
   control.

   Applications using multicast SHOULD provide appropriate congestion
   control.  Multicast congestion control needs to be designed using
   mechanisms that are robust to the potential heterogeneity of both the
   multicast distribution tree and the receivers belonging to a group.
   Heterogeneity may manifest itself in some receivers experiencing more
   loss than others, higher delay, and/or less ability to respond to
   network conditions.  Congestion control is particularly important for
   any multicast session where all or part of the multicast distribution
   tree spans an access network (e.g., a home gateway).  Two styles of
   congestion control have been defined in the RFC Series:

   o  Feedback-based congestion control, in which the sender receives
      multicast or unicast UDP messages from the receivers allowing it
      to assess the level of congestion and then adjust the sender
      rate(s) (e.g., [RFC5740],[RFC4654]).  Multicast methods may
      operate on longer timescales than for unicast (e.g., due to the
      higher group RTT of a heterogeneous group).  A control method
      could decide not to reduce the rate of the entire multicast group
      in response to a control message received from a single receiver
      (e.g., a sender could set a minimum rate and decide to request a
      congested receiver to leave the multicast group and could also
      decide to distribute content to these congested receivers at a
      lower rate using unicast congestion control).

   o  Receiver-driven congestion control, which does not require a
      receiver to send explicit UDP control messages for congestion
      control (e.g., [RFC3738], [RFC5775]).  Instead, the sender
      distributes the data across multiple IP multicast groups (e.g.,
      using a set of {S,G} channels).  Each receiver determines its own
      level of congestion and controls its reception rate using only
      multicast join/leave messages sent in the network control plane.
      This method scales to arbitrarily large groups of receivers; a
      minimal join/leave sketch follows this list.
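
   A minimal sketch of the join/leave operations that such a receiver
   relies on is shown below; it uses the standard IPv4 sockets API, and
   the group addresses passed by a caller would come from the
   application's layering scheme.

      import socket

      def join(sock: socket.socket, group: str, local_if="0.0.0.0"):
          # Joining a group adds a layer; the network control plane
          # (IGMP/MLD) carries the join towards the next-hop router.
          mreq = socket.inet_aton(group) + socket.inet_aton(local_if)
          sock.setsockopt(socket.IPPROTO_IP,
                          socket.IP_ADD_MEMBERSHIP, mreq)

      def leave(sock: socket.socket, group: str, local_if="0.0.0.0"):
          # Leaving a group drops a layer, reducing the reception rate.
          mreq = socket.inet_aton(group) + socket.inet_aton(local_if)
          sock.setsockopt(socket.IPPROTO_IP,
                          socket.IP_DROP_MEMBERSHIP, mreq)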

   Any multicast-enabled receiver may attempt to join and receive
   traffic from any group.  This may imply the need for rate limits on
   individual receivers or the aggregate multicast service.  Note that,
   at the transport layer, there is no way to prevent a join message
   from propagating to the next-hop router.

   Some classes of multicast applications support receivers that can
   monitor the user-level quality of the transfer.  Applications that
   can detect a significant reduction in user quality SHOULD regard
   this as a congestion signal (e.g., to leave a group using layered
   multicast encoding); if this is not possible, they SHOULD use this
   signal to provide a circuit breaker that terminates the flow by
   leaving the multicast group.

4.1.1.  Bulk-Transfer Multicast Applications

   Applications that perform bulk transmission of data over a multicast
   distribution tree, i.e., applications that exchange more than a few
   UDP datagrams per RTT, SHOULD implement a method for congestion
   control.  The currently RECOMMENDED IETF methods are as follows:
   Asynchronous Layered Coding (ALC) [RFC5775], TCP-Friendly Multicast
   Congestion Control (TFMCC) [RFC4654], Wave and Equation Based Rate
   Control (WEBRC) [RFC3738], NACK-Oriented Reliable Multicast (NORM)
   transport protocol [RFC5740], File Delivery over Unidirectional
   Transport (FLUTE) [RFC6726], and the Real-time Transport Protocol
   with the RTP Control Protocol (RTP/RTCP) [RFC3550].

   An application can alternatively implement another congestion control
   scheme following the guidelines of [RFC2887] and utilizing the
   framework of [RFC3048].  Bulk-transfer applications that choose not
   to implement [RFC4654], [RFC5775], [RFC3738], [RFC5740], [RFC6726],
   or [RFC3550] SHOULD implement a congestion control scheme that
   results in bandwidth use that competes fairly with TCP within an
   order of magnitude.

   Section 2 of [RFC3551] states that multimedia applications SHOULD
   monitor the packet-loss rate to ensure that it is within acceptable
   parameters.  Packet loss is considered acceptable if a TCP flow
   across the same network path under the same network conditions would
   achieve an average throughput, measured on a reasonable timescale,
   that is not less than that of the UDP flow.  The comparison to TCP
   cannot be specified exactly, but is intended as an "order-of-
   magnitude" comparison in timescale and throughput.

4.1.2.  Low Data-Volume Multicast Applications

   All the recommendations in Section 3.1.3 are also applicable to low
   data-volume multicast applications.

4.2.  Message Size Guidelines for Multicast

   A multicast application SHOULD NOT send UDP datagrams that result in
   IP packets that exceed the effective MTU as described in Section 3 of
   [RFC6807].  Consequently, an application SHOULD either use the
   effective MTU information provided by the "Population Count
   Extensions to Protocol Independent Multicast (PIM)" [RFC6807] or
   implement path MTU discovery itself (see Section 3.2) to determine
   whether the path to each destination will support its desired message
   size without fragmentation.

5.  Programming Guidelines

   The de facto standard application programming interface (API) for
   TCP/IP applications is the "sockets" interface [POSIX].  Some
   platforms also offer applications the ability to directly assemble
   and transmit IP packets through "raw sockets" or similar facilities.
   This is a second, more cumbersome method of using UDP.  The
   guidelines in this document cover all such methods through which an
   application may use UDP.  Because the sockets API is by far the most
   common method, the remainder of this section discusses it in more
   detail.

   Although the sockets API was developed for UNIX in the early 1980s, a
   wide variety of non-UNIX operating systems also implement it.  The
   sockets API supports both IPv4 and IPv6 [RFC3493].  The UDP sockets
   API differs from that for TCP in several key ways.  Because
   application programmers are typically more familiar with the TCP
   sockets API, this section discusses these differences.  [STEVENS]
   provides usage examples of the UDP sockets API.

   UDP datagrams may be directly sent and received, without any
   connection setup.  Using the sockets API, applications can receive
   packets from more than one IP source address on a single UDP socket.
   Some servers use this to exchange data with more than one remote host
   through a single UDP socket at the same time.  Many applications need
   to ensure that they receive packets from a particular source address;
   these applications MUST implement corresponding checks at the
   application layer or explicitly request that the operating system
   filter the received packets.

   Many operating systems also allow a UDP socket to be connected, i.e.,
   to bind a UDP socket to a specific pair of addresses and ports.  This
   is similar to the corresponding TCP sockets API functionality.
   However, for UDP, this is only a local operation that serves to
   simplify the local send/receive functions and to filter the traffic
   for the specified addresses and ports.  Binding a UDP socket does not
   establish a connection -- UDP does not notify the remote end when a
   local UDP socket is bound.  Binding a socket also allows configuring
   options that affect the UDP or IP layers, for example, use of the UDP
   checksum or the IP Timestamp option.  On some stacks, a bound socket
   also allows an application to be notified when ICMP error messages
   are received for its transmissions [RFC1122].
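
   For illustration, the sketch below connects a UDP socket so that the
   stack filters traffic for the given remote address and port and, on
   some stacks, reports ICMP errors for the flow; the address and port
   are placeholders.

      import socket

      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s.connect(("192.0.2.1", 4242))   # local operation only; no packet
                                       # is sent by connect() itself
      s.send(b"request")               # send()/recv() replace sendto()/
      data = s.recv(2048)              # recvfrom(); datagrams from other
                                       # sources are filtered by the stack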

   If a client/server application executes on a host with more than one
   IP interface, the application SHOULD send any UDP responses with an
   IP source address that matches the IP destination address of the UDP
   datagram that carried the request (see [RFC1122], Section 4.1.3.5).
   Many middleboxes expect this transmission behavior and drop replies
   that are sent from a different IP address, as explained in
   Section 3.5.
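
   One simple way to satisfy this on a multihomed host is to bind one
   socket per local IP address, so that each reply is sent from the
   address on which the corresponding request arrived.  The addresses
   and port below are placeholders.

      import select, socket

      LOCAL_ADDRS = ["192.0.2.10", "198.51.100.10"]   # host's addresses
      PORT = 5300                                     # example port

      socks = []
      for addr in LOCAL_ADDRS:
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.bind((addr, PORT))    # replies on this socket use this address
          socks.append(s)

      while True:
          ready, _, _ = select.select(socks, [], [])
          for s in ready:
              request, client = s.recvfrom(2048)
              # The reply's source address matches the destination
              # address of the request.
              s.sendto(b"response", client)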

   A UDP receiver can receive a valid UDP datagram with a zero-length
   payload.  Note that this is different from a return value of zero
   from a read() socket call, which for TCP indicates the end of the
   connection.

   UDP provides no flow-control, i.e., the sender at any given time does
   not know whether the receiver is able to handle incoming
   transmissions.  This is another reason why UDP-based applications
   need to be robust in the presence of packet loss.  This loss can also
   occur within the sending host, when an application sends data faster
   than the line rate of the outbound network interface.  It can also
   occur at the destination, where receive calls fail to return all the
   data that was sent when the application issues them too infrequently
   (i.e., such that the receive buffer overflows).  Robust flow control
   mechanisms are difficult to implement, which is why applications that
   need this functionality SHOULD consider using a full-featured
   transport protocol such as TCP.

   When an application closes a TCP, SCTP, or DCCP socket, the transport
   protocol on the receiving host is required to maintain TIME-WAIT
   state.  This prevents delayed packets from the closed connection
   instance from being mistakenly associated with a later connection
   instance that happens to reuse the same IP address and port pairs.
   The UDP protocol does not implement such a mechanism.  Therefore,
   UDP-based applications need to be robust to reordering and delay.
   One application may close a socket or terminate, followed in time by
   another application receiving on the same port.  This later
   application may then receive packets intended for the first
   application that were delayed in the network.

5.1.  Using UDP Ports

   The rules and procedures for the management of the "Service Name and
   Transport Protocol Port Number Registry" are specified in [RFC6335].
   Recommendations for use of UDP ports are provided in [RFC7605].

   A UDP sender SHOULD NOT use a source port value of zero.  A source
   port number that cannot be easily determined from the address or
   payload type provides protection at the receiver from data injection
   attacks by off-path devices.  A UDP receiver SHOULD NOT bind to port
   zero.

   Applications SHOULD implement receiver port and address checks at the
   application layer or explicitly request that the operating system
   filter the received packets to prevent receiving packets with an
   arbitrary port.  This measure is designed to provide additional
   protection from data injection attacks from an off-path source (where
   the port values may not be known).

   Applications SHOULD provide a check that protects from off-path data
   injection, avoiding an application receiving packets that were
   created by an unauthorized third party.  TCP stacks commonly use a
   randomized source port to provide this protection [RFC6056]; UDP
   applications should follow the same technique.  Middleboxes and end
   systems often make assumptions about the system ports or user ports;
   hence, it is recommended to use randomized ports in the Dynamic and/
   or Private Port range.  Setting a "randomized" source port also
   provides greater assurance that reported ICMP errors originate from
   network systems on the path used by a particular flow.  Some UDP
   applications choose to use a predetermined value for the source port
   (including some multicast applications); these applications therefore
   need to employ a different technique.  Protection from off-path
   data attacks can also be provided by randomizing the initial value of
   another protocol field within the datagram payload, and checking the
   validity of this field at the receiver (e.g., RTP has random initial
   sequence number and random media timestamp offsets [RFC3550]).
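
   As a sketch, an application can obtain an unpredictable source port
   simply by binding to port 0 and letting the operating system's
   ephemeral-port selection choose one; stacks that follow [RFC6056]
   randomize this choice.

      import socket

      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s.bind(("0.0.0.0", 0))    # port 0: the OS picks an ephemeral port
      print("using source port", s.getsockname()[1])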

   When using multicast, IP routers perform a reverse-path forwarding
   (RPF) check for each multicast packet.  This provides protection from
   off-path data injection, restricting opportunities to forge a
   packet's source address.  When a receiver joins a multicast group and
   filters based on the source address, the filter verifies the sender's
   IP address.  This is always the case when using an SSM {S,G} channel.

5.1.1.  Usage of UDP for Source Port Entropy and the IPv6 Flow Label

   Some applications use the UDP datagram header as a source of entropy
   for network devices that implement ECMP [RFC6438].  A UDP tunnel
   application targeting this usage encapsulates an inner packet using
   UDP, where the UDP source port value forms a part of the entropy that
   can be used to balance forwarding of network traffic by the devices
   that use ECMP.  A sending tunnel endpoint selects a source port value
   in the UDP datagram header that is computed from the inner flow
   information (e.g., the encapsulated packet headers).  To provide
   sufficient entropy, the sending tunnel endpoint maps the encapsulated
   traffic to one of a range of UDP source values.  The value SHOULD be
   within the ephemeral port range, i.e., 49152 to 65535, where the high
   order two bits of the port are set to one.  The available source port
   entropy of 14 bits (using the ephemeral port range) plus the outer IP
   addresses seems sufficient for entropy for most ECMP applications
   [ENCAP].
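
   A sketch of such a mapping is shown below; it hashes an assumed
   inner five-tuple into the 49152-65535 range, and the hash function
   and tuple encoding are illustrative choices.

      import hashlib

      def entropy_source_port(inner_five_tuple: tuple) -> int:
          # Stable digest of the inner flow identifiers (addresses,
          # ports, protocol).
          digest = hashlib.sha256(
              repr(inner_five_tuple).encode()).digest()
          # Map into the ephemeral range 49152..65535 (14 bits).
          return 49152 + (int.from_bytes(digest[:2], "big") % 16384)

      port = entropy_source_port(
          ("2001:db8::1", "2001:db8::2", 1234, 80, 6))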

   To avoid reordering within an IP flow, the same UDP source port value
   SHOULD be used for all packets assigned to an encapsulated flow
   (e.g., using a hash of the relevant headers).  The entropy mapping
   for a flow MAY change over the lifetime of the encapsulated flow
   [ENCAP].  For instance, this could be changed as a Denial of Service
   (DoS) mitigation, or as a means to effect routing through the ECMP
   network.  However, the source port selected for a flow SHOULD NOT
   change more than once in every thirty seconds (e.g., as in
   [RFC8086]).

   The use of the source port field for entropy has several side effects
   that need to be considered, including:

   o  It can increase the probability of misdelivery of corrupted
      packets, which increases the need for checksum computation or an
      equivalent mechanism to protect other UDP applications from
      misdelivery errors (see Section 3.4).

   o  It is expected to reduce the probability of successful middlebox
      traversal (see Section 3.5).  This use of the source port field
      will often not be suitable for applications targeting deployment
      in the general Internet.

   o  It can prevent the field being usable to protect from off-path
      attacks (described in Section 5.1).  Designers therefore need to
      consider other mechanisms to provide equivalent protection (e.g.,
      to restrict use to a controlled environment [RFC7510]; see
      Section 3.6).

   The UDP source port number field has also been leveraged to produce
   entropy with IPv6.  However, in the case of IPv6, the "flow label"
   [RFC6437] may alternatively be used to provide entropy for load
   balancing [RFC6438].  This use of the flow label for load balancing
   is consistent with the definition of the field, although further
   clarity was needed to ensure the field can be consistently used for
   this purpose.  Therefore, an updated IPv6 flow label [RFC6437] and
   ECMP routing [RFC6438] usage was specified.

   To ensure future opportunities to use the flow label, UDP
   applications SHOULD set the flow label field, even when an entropy
   value is also set in the source port field (e.g., an IPv6 tunnel
   endpoint could copy the source port flow entropy value to the IPv6
   flow label field [RFC8086]).  Router vendors are encouraged to start
   using the IPv6 flow label as a part of the flow hash, providing
   support for IP-level ECMP without requiring use of UDP.  The end-to-
   end use of flow labels for load balancing is a long-term solution.
   Even if the usage of the flow label has been clarified, there will be
   a transition time before a significant proportion of endpoints start
   to assign a good quality flow label to the flows that they originate.
   Load balancing based on the transport header fields will likely
   continue until widespread deployment of the flow label is finally
   achieved.

5.1.2.  Applications Using Multiple UDP Ports

   A single application may exchange several types of data.  In some
   cases, this may require multiple UDP flows (e.g., multiple sets of
   flows, identified by different five-tuples).  [RFC6335] recommends
   application developers not to apply to IANA to be assigned multiple
   well-known ports (user or system).  It does not discuss the
   implications of using multiple flows with the same well-known port or
   pairs of dynamic ports (e.g., identified by a service name or
   signaling protocol).

   Use of multiple flows can affect the network in several ways:

   o  Starting a series of successive connections can increase the
      number of state bindings in middleboxes (e.g., NAPT or Firewall)
      along the network path.  UDP-based middlebox traversal usually
      relies on timeouts to remove old state, since middleboxes are
      unaware when a particular flow ceases to be used by an
      application.

   o  Using several flows at the same time may result in seeing
      different network characteristics for each flow.  It cannot be
      assumed both follow the same path (e.g., when ECMP is used,
      traffic is intentionally hashed onto different parallel paths
      based on the port numbers).

   o  Using several flows can also increase the occupancy of a binding
      or lookup table in a middlebox (e.g., NAPT or Firewall), which may
      cause the device to change the way it manages the flow state.

   o  Further, using excessive numbers of flows can degrade the ability
      of a unicast congestion control to react to congestion events,
      unless the congestion state is shared between all flows in a
      session.  A receiver-driven multicast congestion control requires
      the sending application to distribute its data over a set of IP
      multicast groups; each receiver is therefore expected to receive
      data from a modest number of simultaneously active UDP ports.

   Therefore, applications MUST NOT assume consistent behavior of
   middleboxes when multiple UDP flows are used; many devices respond
   differently as the number of used ports increases.  Using multiple
   flows with different QoS requirements requires applications to verify
   that the expected performance is achieved using each individual flow
   (five-tuple); see Section 3.1.9.

5.2.  ICMP Guidelines

   Applications can utilize information about ICMP error messages that
   the UDP layer passes up for a variety of purposes [RFC1122].
   Applications SHOULD appropriately validate the payload of ICMP
   messages to ensure these are received in response to transmitted
   traffic (i.e., a reported error condition that corresponds to a UDP
   datagram actually sent by the application).  This requires context,
   such as local state about communication instances to each
   destination, that although readily available in connection-oriented
   transport protocols is not always maintained by UDP-based
   applications.  Note that not all platforms have the necessary APIs to
   support this validation, and some platforms already perform this
   validation internally before passing ICMP information to the
   application.
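
   As an illustration, on many stacks a connected UDP socket reports an
   asynchronous ICMP error (such as port unreachable) on a later send
   or receive call for that flow.  The sketch below treats such an
   error as a soft error rather than aborting; the address, port, and
   retry policy are assumptions of the example.

      import socket

      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s.connect(("192.0.2.1", 4242))
      s.settimeout(2.0)
      try:
          s.send(b"probe")
          reply = s.recv(2048)
      except ConnectionRefusedError:
          # ICMP port unreachable reported for this flow: treat it as a
          # soft error and retry later instead of aborting the session.
          pass
      except socket.timeout:
          pass                  # no reply; also not necessarily fatal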

   Any application response to ICMP error messages SHOULD be robust to
   temporary routing failures (sometimes called "soft errors"), e.g.,
   transient ICMP "unreachable" messages ought not normally to cause a
   communication abort.

   ICMP messages are being increasingly filtered by middleboxes.  A UDP
   application therefore SHOULD NOT rely on their delivery for correct
   and safe operation.

6.  Security Considerations

   UDP does not provide communications security.  Applications that need
   to protect their communications against eavesdropping, tampering, or
   message forgery SHOULD employ end-to-end security services provided
   by other IETF protocols.

   UDP applications SHOULD provide protection from off-path data
   injection attacks using a randomized source port or equivalent
   technique (see Section 5.1).

   Applications that respond to short requests with potentially large
   responses are a potential vector for amplification attacks, and
   SHOULD take steps to minimize their potential for being abused as
   part of a DoS attack.  That could mean authenticating the sender
   before responding (noting that the source IP address of a request is
   not a useful authenticator, because it can easily be spoofed), or it
   could mean otherwise limiting the cases where short unauthenticated
   requests produce large responses.  Applications MAY also want to
   offer ways to limit the number of requests they respond to in a time
   interval, in order to cap the bandwidth they consume.
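
   A minimal token-bucket sketch for capping the rate of responses is
   shown below; the rate and burst values are illustrative assumptions.

      import time

      class ResponseLimiter:
          """Allow at most `rate` responses per second, plus a burst."""
          def __init__(self, rate: float = 10.0, burst: float = 20.0):
              self.rate, self.burst = rate, burst
              self.tokens, self.last = burst, time.monotonic()

          def allow(self) -> bool:
              now = time.monotonic()
              self.tokens = min(self.burst,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= 1.0:
                  self.tokens -= 1.0
                  return True
              return False      # drop the response to cap bandwidth used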

   One option for securing UDP communications is with IPsec [RFC4301],
   which can provide authentication for flows of IP packets through the
   Authentication Header (AH) [RFC4302] and encryption and/or
   authentication through the Encapsulating Security Payload (ESP)
   [RFC4303].  Applications use the Internet Key Exchange (IKE)
   [RFC7296] to configure IPsec for their sessions.  Depending on how
   IPsec is configured for a flow, it can authenticate or encrypt the
   UDP headers as well as UDP payloads.  If an application only requires
   authentication, ESP with no encryption but with authentication is
   often a better option than AH, because ESP can operate across
   middleboxes.  An application that uses IPsec requires the support of
   an operating system that implements the IPsec protocol suite, and the
   network path must permit IKE and IPsec traffic.  This may become more
   common with IPv6 deployments [RFC6092].

   Although it is possible to use IPsec to secure UDP communications,
   not all operating systems support IPsec or allow applications to
   easily configure it for their flows.  A second option for securing
   UDP communications is through Datagram Transport Layer Security
   (DTLS) [RFC6347][RFC7525].  DTLS provides communication privacy by
   encrypting UDP payloads.  It does not protect the UDP headers.
   Applications can implement DTLS without relying on support from the
   operating system.

   Many other options for authenticating or encrypting UDP payloads
   exist.  For example, the GSS-API security framework [RFC2743] or
   Cryptographic Message Syntax (CMS) [RFC5652] could be used to protect
   UDP payloads.  A number of security options exist for RTP [RFC3550]
   over UDP, especially for key management; see [RFC7201].  These
   options cover many usages, including point-to-point and centralized
   group communication, as well as multicast.  In some
   applications, a better solution is to protect larger stand-alone
   objects, such as files or messages, instead of individual UDP
   payloads.  In these situations, CMS [RFC5652], S/MIME [RFC5751] or
   OpenPGP [RFC4880] could be used.  In addition, there are many
   non-IETF protocols in this area.

   Like congestion control mechanisms, security mechanisms are difficult
   to design and implement correctly.  It is hence RECOMMENDED that
   applications employ well-known standard security mechanisms such as
   DTLS or IPsec, rather than inventing their own.

   The Generalized TTL Security Mechanism (GTSM) [RFC5082] may be used
   with UDP applications when the intended endpoint is on the same link
   as the sender.  This lightweight mechanism allows a receiver to
   filter unwanted packets.

   In terms of congestion control, [RFC2309] and [RFC2914] discuss the
   dangers of congestion-unresponsive flows to the Internet.  [RFC8084]
   describes methods that can be used to set a performance envelope that
   can assist in preventing congestion collapse in the absence of
   congestion control or when the congestion control fails to react to
   congestion events.  This document provides guidelines to designers of
   UDP-based applications to congestion-control their transmissions, and
   does not raise any additional security concerns.

   Some network operators have experienced surges of UDP attack traffic
   that are multiple orders of magnitude above the baseline traffic rate
   for UDP.  This can motivate operators to limit the data rate or
   packet rate of UDP traffic.  This may in turn limit the throughput
   that an application can achieve using UDP and could also result in
   higher packet loss for UDP traffic that would not be experienced if
   other transport protocols had been used.

   A UDP application with a long-lived association between the sender
   and receiver ought to be designed so that the sender periodically
   checks that the receiver still wants ("consents") to receive traffic
   and stops sending if there is no explicit confirmation of this
   [RFC7675].  Applications that require communications in two
   directions to implement protocol functions (such as reliability or
   congestion control) will need to independently check both directions
   of communication, and may have to exchange keep-alive messages to
   traverse middleboxes (see Section 3.5).


