RFC 3819


Advice for Internet Subnetwork Designers

Part 2 of 3, p. 14 to 36


8.  Reliability and Error Control

   In the Internet architecture, the ultimate responsibility for error
   recovery is at the end points [SRC81].  The Internet may occasionally
   drop, corrupt, duplicate, or reorder packets, and the transport
   protocol (e.g., TCP) or application (e.g., if UDP is used as the
   transport protocol) must recover from these errors on an end-to-end
   basis [RFC3155].  Error recovery in the subnetwork is therefore
   justifiable only to the extent that it can enhance overall
   performance.  It is important to recognize that a subnetwork can go
   too far in attempting to provide error recovery services in the
   Internet environment.  Subnet reliability should be "lightweight",
   i.e., it only has to be "good enough", *not* perfect.

   In this section, we discuss how to analyze characteristics of a
   subnetwork to determine what is "good enough".  The discussion below
   focuses on TCP, which is the most widely-used transport protocol in
   the Internet.  It is widely believed (and is a stated goal within the
   IETF) that non-TCP transport protocols should attempt to be "TCP-
   friendly" and have many of the same performance characteristics.
   Thus, the discussion below should be applicable, even to portions of
   the Internet where TCP may not be the predominant protocol.

8.1.  TCP vs Link-Layer Retransmission

   Error recovery involves the generation and transmission of redundant
   information computed from user data.  Depending on how much redundant
   information is sent and how it is generated, the receiver can use it
   to reliably detect transmission errors, correct up to some maximum
   number of transmission errors, or both.  The general approach is
   known as Error Control Coding, or ECC.

   The use of ECC to detect transmission errors so that retransmissions
   (hopefully without errors) can be requested is widely known as "ARQ"
   (Automatic Repeat Request).

   When enough ECC information is available to permit the receiver to
   correct some transmission errors without a retransmission, the
   approach is known as Forward Error Correction (FEC).  Due to the
   greater complexity of the required ECC and the need to tailor its
   design to the characteristics of a specific modem and channel, FEC
   has traditionally been implemented in special-purpose hardware
   integral to a modem.  This effectively makes it part of the physical
   layer.

   Unlike ARQ, FEC was rarely used for telecommunications outside of
   space links prior to the 1990s.  It is now nearly universal in
   telephone, cable and DSL modems, digital satellite links, and digital
   mobile telephones.  FEC is also heavily used in optical and magnetic
   storage where "retransmissions" are not possible.

   Some systems use hybrid combinations of ARQ layered atop FEC; V.90
   dialup modems (in the upstream direction) with V.42 error control are
   one example.  Most errors are corrected by the trellis (FEC) code
   within the V.90 modem, and most remaining errors are detected and
   corrected by the ARQ mechanisms in V.42.

   Work is now underway to apply FEC above the physical layer, primarily
   in connection with reliable multicasting [RFC3048] [RFC3450-RFC3453]
   where conventional ARQ mechanisms are inefficient or difficult to
   implement.  However, in this discussion, we will assume that if FEC
   is present, it is implemented within the physical layer.

   Depending on the layer in which it is implemented, error control can
   operate on an end-to-end basis or over a shorter span, such as a
   single link.  TCP is the most important example of an end-to-end
   protocol that uses an ARQ strategy.

   Many link-layer protocols use ARQ, usually some flavor of HDLC
   [ISO3309].  Examples include the X.25 link layer, the AX.25 protocol
   used in amateur packet radio, 802.11 wireless LANs, and the reliable
   link layer specified in IEEE 802.2.

   Only end-to-end error recovery can ensure reliable service to the
   application (see Section 8).  However, some subnetworks (e.g., many
   wireless links) also have link-layer error recovery as a performance
   enhancement [RFC3366].  For example, many cellular links have small
   physical frame sizes (< 100 bytes) and relatively high frame loss
   rates.  Relying solely on end-to-end error recovery can clearly yield
   a performance degradation, as retransmissions across the end-to-end
   path take much longer to be received than when link layer
   retransmissions are used.  Thus, link-layer error recovery can often
   increase end-to-end performance.  As a result, link-layer and end-
   to-end recovery often co-exist; this can lead to the possibility of
   inefficient interactions between the two layers of ARQ protocols.

   This inter-layer "competition" might lead to the following wasteful
   situation.  When the link layer retransmits (parts of) a packet, the
   link latency momentarily increases.  Since TCP bases its
   retransmission timeout on prior measurements of total end-to-end
   latency, including that of the link in question, this sudden increase
   in latency may trigger an unnecessary retransmission by TCP of a
   packet that the link layer is still retransmitting.  Such spurious
   end-to-end retransmissions generate unnecessary load and reduce end-
   to-end throughput.  As a result, the link layer may even have
   multiple copies of the same packet in the same link queue at the same
   time.  In general, one could say the competing error recovery is
   caused by an inner control loop (link-layer error recovery) reacting
   to the same signal as an outer control loop (end-to-end error
   recovery) without any coordination between the loops.  Note that this
   is solely an efficiency issue; TCP continues to provide reliable
   end-to-end delivery over such links.

   This raises the question of how persistent a link-layer sender should
   be in performing retransmission [RFC3366].  We define the link-layer
   (LL) ARQ persistency as the maximum time that a particular link will
   spend trying to transfer a packet before it can be discarded.  This
   deliberately simplified definition says nothing about the maximum
   number of retransmissions, retransmission strategies, queue sizes,
   queuing disciplines, transmission delays, or the like.  The reason we
   use the term LL ARQ persistency, instead of a term such as "maximum
   link-layer packet holding time," is that the definition closely
   relates to link-layer error recovery.  For example, on links that
   implement straightforward error recovery strategies, LL ARQ
   persistency will often correspond to a maximum number of
   retransmissions permitted per link-layer frame.

   For link layers that do not or cannot differentiate between flows
   (e.g., due to network layer encryption), the LL ARQ persistency
   should be small.  This avoids any harmful effects or performance
   degradation resulting from indiscriminate high persistence.  A
   detailed discussion of these issues is provided in [RFC3366].

   However, when a link layer can identify individual flows and apply
   ARQ selectively [LKJK02], then the link ARQ persistency should be
   high for a flow using reliable unicast transport protocols (e.g.,
   TCP) and must be low for all other flows.  Setting the link ARQ
   persistency larger than the largest link outage allows TCP to rapidly
   restore transmission without needing to wait for a retransmission
   time out.  This generally improves TCP performance in the face of
   transient outages.  However, excessively high persistence may be
   disadvantageous; a practical upper limit of 30-60 seconds may be
   desirable.  Implementation of such schemes remains a research issue.
   (See also the following section "Recovery from Subnetwork Outages").

   Many subnetwork designers have opportunities to reduce the
   probability of packet loss, e.g., with FEC, ARQ, and interleaving, at
   the cost of increased delay.  TCP performance improves with
   decreasing loss but worsens with increasing end-to-end delay, so it
   is important to find the proper balance through analysis and
   simulation.

8.2.  Recovery from Subnetwork Outages

   Some types of subnetworks, particularly mobile radio, are subject to
   frequent temporary outages.  For example, an active cellular data
   user may drive or walk into an area (such as a tunnel) that is out of
   range of any base station.  No packets will be delivered successfully
   until the user returns to an area with coverage.

   The Internet protocols currently provide no standard way for a
   subnetwork to explicitly notify an upper layer protocol (e.g., TCP)
   that it is experiencing an outage rather than severe congestion.

   Under these circumstances TCP will, after each unsuccessful
   retransmission, wait even longer before trying again; this is its
   "exponential back-off" algorithm.  Furthermore, TCP will not discover
   that the subnetwork outage has ended until its next retransmission
   attempt.  If TCP has backed off, this may take some time.  This can
   lead to extremely poor TCP performance over such subnetworks.

   It is therefore highly desirable that a subnetwork subject to outages
   does not silently discard packets during an outage.  Ideally, the
   subnetwork should define an interface to the next higher layer (i.e.,
   IP) that allows it to refuse packets during an outage, and to
   automatically ask IP for new packets when it is again able to deliver
   them.  If it cannot do this, then the subnetwork should hold onto at
   least some of the packets it accepts during an outage and attempt to
   deliver them when the outage ends.  When packets are discarded, IP
   should be notified so that the appropriate ICMP messages can be sent.

   Note that it is *not* necessary to completely avoid dropping packets
   during an outage.  The purpose of holding onto a packet during an
   outage, either in the subnetwork or at the IP layer, is so that its
   eventual delivery will implicitly notify TCP that the subnetwork is
   again operational.  This is to enhance performance, not to ensure
   reliability -- reliability, as discussed earlier, can only be ensured
   on an end-to-end basis.

   Only a few packets per TCP connection, including ACKs, need be held
   in this way to cause the TCP sender to recover from the additional
   losses once the flow resumes [RFC3366].

   Because it would be a layering violation (and possibly a performance
   hit) for IP or a subnetwork layer to look at TCP headers (which would
   in any event be impossible if IPsec encryption [RFC2401] is in use),
   it would be reasonable for the IP or subnetwork layers to choose, as
   a design parameter, some small number of packets that will be
   retained during an outage.

8.3.  CRCs, Checksums and Error Detection

   The TCP [RFC793], UDP [RFC768], ICMP, and IPv4 [RFC791] protocols all
   use the same simple 16-bit 1's complement checksum algorithm
   [RFC1071] to detect corrupted packets.  The IPv4 header checksum
   protects only the IPv4 header, while the TCP, ICMP, and UDP checksums
   provide end-to-end error detection for both the transport pseudo
   header (including network and transport layer information) and the
   transport payload data.  Protection of the data is optional for
   applications using UDP [RFC768] for IPv4, but is required for IPv6.

   The Internet checksum is not very strong from a coding theory
   standpoint, but it is easy to compute in software, and various
   proposals to replace the Internet checksums with stronger checksums
   have failed.  However, it is known that undetected errors can and do
   occur in packets received by end hosts [SP2000].
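
   As an illustration of how cheap the Internet checksum is to compute,
   the following Python sketch implements the basic 16-bit one's
   complement sum of [RFC1071] (the function name and example data are
   ours and purely illustrative):

      # Minimal sketch of the RFC 1071 Internet checksum.
      def internet_checksum(data: bytes) -> int:
          if len(data) % 2:                  # pad odd-length data
              data += b"\x00"
          total = 0
          for i in range(0, len(data), 2):
              total += (data[i] << 8) | data[i + 1]      # 16-bit words
              total = (total & 0xFFFF) + (total >> 16)   # fold the carry
          return ~total & 0xFFFF             # one's complement of the sum

      print(hex(internet_checksum(b"example payload")))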

   To reduce processing costs, IPv6 has no IP header checksum.  The
   destination host detects "important" errors in the IP header, such as
   the delivery of the packet to the wrong destination.  This is done by
   including the IP source and destination addresses (pseudo header) in
   the computation of the checksum in the TCP or UDP header, a practice
   already performed in IPv4.  Errors in other IPv6 header fields may go
   undetected within the network; this was considered a reasonable price
   to pay for a considerable reduction in the processing required by
   each router, and it was assumed that subnetworks would use a strong
   link CRC.

   One way to provide additional protection for an IPv4 or IPv6 header
   is by the authentication and packet integrity services of the IP
   Security (IPsec) protocol [RFC2401].  However, this may not be a
   choice available to the subnetwork designer.

   Most subnetworks implement error detection just above the physical
   layer.  Packets corrupted in transmission are detected and discarded
   before delivery to the IP layer.  A 16-bit cyclic redundancy check
   (CRC) is usually the minimum for error detection.  This is
   significantly more robust against most patterns of errors than the
   16-bit Internet checksum.  Note that the error detection properties
   of a specific CRC code diminish with increasing frame size.  The
   Point-to-Point Protocol [RFC1662] requires support of a 16-bit CRC
   for each link frame, with a 32-bit CRC as an option.  (PPP is often
   used in conjunction with a dialup modem, which provides its own error
   control).  Other subnetworks, including 802.3/Ethernet, AAL5/ATM,
   FDDI, Token Ring, and PPP over SONET/SDH all use a 32-bit CRC.  Many
   subnetworks can also use other mechanisms to enhance the error
   detection capability of the link CRC (e.g., FEC in dialup modems,
   mobile radio and satellite channels).

   Any new subnetwork designed to carry IP should therefore provide
   error detection for each IP packet that is at least as strong as the
   32-bit CRC specified in [ISO3309].  While this will achieve a very
   low undetected packet error rate due to transmission errors, it will
   not (and need not) achieve a very low packet loss rate as the
   Internet protocols are better suited to dealing with lost packets
   than to dealing with corrupted packets [SRC81].
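
   As a rough illustration of per-frame CRC-32 protection, the sketch
   below uses Python's zlib.crc32, which computes the same 32-bit CRC
   polynomial used by Ethernet and [ISO3309]; real subnetworks compute
   this in hardware, and the framing shown here is purely illustrative:

      import zlib

      def frame_with_crc(payload: bytes) -> bytes:
          # Append a 32-bit CRC trailer to the payload.
          return payload + zlib.crc32(payload).to_bytes(4, "big")

      def frame_is_valid(frame: bytes) -> bool:
          # Recompute the CRC over the payload, compare with the trailer.
          payload, trailer = frame[:-4], frame[-4:]
          return zlib.crc32(payload).to_bytes(4, "big") == trailer

      f = frame_with_crc(b"an IP packet")
      print(frame_is_valid(f))                    # True
      print(frame_is_valid(f[:-1] + b"\x00"))     # corrupted -> False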

   Packet corruption may be, and is, also caused by bugs in host and
   router hardware and software.  Even if every subnetwork implemented
   strong error detection, it is still essential that end-to-end
   checksums are used at the receiving end host [SP2000].

   Designers of complex subnetworks consisting of internal links and
   packet switches should consider implementing error detection on an
   edge-to-edge basis to cover an entire SNDU (or IP packet).  A CRC
   would be generated at the entry point to the subnetwork and checked
   at the exit endpoint.  This may be used instead of, or in combination
   with, error detection at the interface to each physical link.  An
   edge-to-edge check has the significant advantage of protecting
   against errors introduced anywhere within the subnetwork, not just
   within its transmission links.  Examples of this approach include the
   way in which the Ethernet CRC-32 is handled by LAN bridges [802.1D].
   ATM AAL5 [ITU-I363] also uses an edge-to-edge CRC-32.

   Some specific applications may be tolerant of residual errors in the
   data they exchange, but removal of the link CRC may expose the
   network to an undesirable increase in undetected errors in the IP and
   transport headers.  Applications may also require a high level of
   error protection for control information exchanged by protocols
   acting above the transport layer.  One example is a voice codec,
   which is robust against bit errors in the speech samples.  For such
   mechanisms to work, the receiving application must be able to
   tolerate receiving corrupted data.  This also requires that an
   application uses a mechanism to signal that payload corruption is
   permitted and to indicate the coverage (headers and data) required to
   be protected by the subnetwork CRC.  The UDP-Lite protocol [RFC3828]
   is the first Internet standards track transport protocol supporting
   partial payload protection.  Receipt of corrupt data by arbitrary
   application protocols carries a serious danger that a subnet delivers
   data with errors that remain undetected by the application and hence
   corrupt the communicated data [SRC81].

8.4.  How TCP Works

   One of TCP's functions is end-host based congestion control for the
   Internet.  This is a critical part of the overall stability of the
   Internet, so it is important that link-layer designers understand
   TCP's congestion control algorithms.

   TCP assumes that, at the most abstract level, the network consists of
   links and queues.  Queues provide output-buffering on links that are
   momentarily oversubscribed.  They smooth instantaneous traffic bursts
   to fit the link bandwidth.  When demand exceeds link capacity long
   enough to fill the queue, packets must be dropped.  The traditional
   action of dropping the most recent packet ("tail dropping") is no
   longer recommended [RFC2309] [RFC2914], but it is still widely
   practiced.

   TCP uses sequence numbering and acknowledgments (ACKs) on an
   end-to-end basis to provide reliable, sequenced delivery.  TCP ACKs
   are cumulative, i.e., each implicitly ACKs every segment received so
   far.  If a packet with an unexpected sequence number is received, the
   ACK field in the packets returned by the receiver will cease to
   advance.  Using an optional enhancement, TCP can send selective
   acknowledgments (SACKs) [RFC2018] to indicate which segments have
   arrived at the receiver.

   Since the most common cause of packet loss is congestion, TCP treats
   packet loss as an indication of potential Internet congestion along
   the path between TCP end hosts.  This happens automatically, and the
   subnetwork need not know anything about IP or TCP.  A subnetwork node
   simply drops packets whenever it must, though some packet-dropping
   strategies (e.g., RED) are more fair to competing flows than others.

   TCP recovers from packet losses in two different ways.  The most
   important mechanism is the retransmission timeout.  If an ACK fails
   to arrive after a certain period of time, TCP retransmits the oldest
   unacked packet.  Taking this as a hint that the network is congested,
   TCP waits for the retransmission to be ACKed before it continues, and
   it gradually increases the number of packets in flight as long as a
   timeout does not occur again.

   A retransmission timeout can impose a significant performance
   penalty, as the sender is idle during the timeout interval and
   restarts with a congestion window of one TCP segment following the
   timeout.  To allow faster recovery from the occasional lost packet in
   a bulk transfer, an alternate scheme, known as "fast recovery", was
   introduced [RFC2581] [RFC2582] [RFC2914] [TCPF98].

   Fast recovery relies on the fact that when a single packet is lost in
   a bulk transfer, the receiver continues to return ACKs to subsequent
   data packets that do not actually acknowledge any newly-received
   data.  These are known as "duplicate acknowledgments" or "dupacks".
   The sending TCP can use dupacks as a hint that a packet has been lost
   and retransmit it without waiting for a timeout.  Dupacks effectively
   constitute a negative acknowledgment (NAK) for the packet sequence
   number in the acknowledgment field.  TCP waits until a certain number
   of dupacks (currently 3) are seen prior to assuming a loss has
   occurred; this helps avoid an unnecessary retransmission during
   out-of-sequence delivery.

   A technique called "Explicit Congestion Notification" (ECN) [RFC3168]
   allows routers to directly signal congestion to hosts without
   dropping packets.  This is done by setting a bit in the IP header.
   Since ECN support is likely to remain optional, the lack of an ECN
   bit must *never* be interpreted as a lack of congestion.  Thus, for
   the foreseeable future, TCP must interpret a lost packet as a signal
   of congestion.

   The TCP "congestion avoidance" [RFC2581] algorithm maintains a
   congestion window (cwnd) controlling the amount of data TCP may have
   in flight at any moment.  Reducing cwnd reduces the overall bandwidth
   obtained by the connection; similarly, raising cwnd increases
   performance, up to the limit of the available capacity.

   TCP probes for available network capacity by initially setting cwnd
   to one or two packets and then increasing cwnd by one packet for each
   ACK returned from the receiver.  This is TCP's "slow start"
   mechanism.  When a packet loss is detected (or congestion is signaled
   by other mechanisms), cwnd is reset to one and the slow start process
   is repeated until cwnd reaches one half of its previous setting
   before the reset.  Cwnd continues to increase past this point, but at
   a much slower rate than before.  If no further losses occur, cwnd
   will ultimately reach the window size advertised by the receiver.

   This is an "Additive Increase, Multiplicative Decrease" (AIMD)
   algorithm.  The steep decrease of cwnd in response to congestion
   provides for network stability; the AIMD algorithm also provides for
   fairness between long running TCP connections sharing the same path.
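
   The qualitative behavior described above can be visualized with a
   toy Python sketch (ours, not a model of any real TCP stack): cwnd
   doubles each round trip during slow start, restarts from one segment
   on a timeout, and grows by one segment per round trip thereafter:

      def cwnd_trace(loss_rounds, rounds, rwnd=64):
          # Returns cwnd (in segments) at the start of each RTT round.
          cwnd, ssthresh, trace = 1, rwnd, []
          for r in range(rounds):
              trace.append(cwnd)
              if r in loss_rounds:           # loss detected this round
                  ssthresh = max(cwnd // 2, 2)
                  cwnd = 1                   # restart slow start
              elif cwnd < ssthresh:          # slow start: exponential growth
                  cwnd = min(cwnd * 2, rwnd)
              else:                          # congestion avoidance: linear
                  cwnd = min(cwnd + 1, rwnd)
          return trace

      print(cwnd_trace(loss_rounds={8}, rounds=16))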

8.5.  TCP Performance Characteristics


   Here we present a current "state-of-the-art" understanding of TCP
   performance.  This analysis attempts to characterize the performance
   of TCP connections over links of varying characteristics.

   Link designers may wish to use the techniques in this section to
   predict what performance TCP/IP may achieve over a new link-layer
   design.  Such analysis is encouraged.  Because this is a relatively
   new analysis, and the theory is based on single-stream TCP
   connections under "ideal" conditions, it should be recognized that
   the results of such analysis may differ from actual performance in
   the Internet.  That being said, we have done our best to provide the
   designers with helpful information to get an accurate picture of the
   capabilities and limitations of TCP under various conditions.

8.5.1.  The Formulae

   The performance of TCP's AIMD Congestion Avoidance algorithm has been
   extensively analyzed.  The current best formula for the performance
   of the specific algorithms used by Reno TCP (i.e., the TCP specified
   in [RFC2581]) is given by Padhye, et al. [PFTK98].  This formula is:

                                       MSS
           BW = --------------------------------------------------------
                RTT*sqrt(1.33*p) + RTO*p*[1+32*p^2]*min[1,3*sqrt(.75*p)]

            Where:

            BW   is the maximum TCP throughput achievable by an
                 individual TCP flow
                individual TCP flow
           MSS  is the TCP segment size being used by the connection
           RTT  is the end-to-end round trip time of the TCP connection
           RTO  is the packet timeout (based on RTT)
           p    is the packet loss rate for the path
                (i.e., .01 if there is 1% packet loss)

   Note that the speed of the links making up the Internet path does not
   explicitly appear in this formula.  Attempting to send faster than
   the slowest link in the path causes the queue to grow at the
   transmitter driving the bottleneck.  This increases the RTT, which in
   turn reduces the achievable throughput.

   This is currently considered to be the best approximate formula for
   Reno TCP performance.  A further simplification of this formula is
   generally made by assuming that RTO is approximately 5*RTT.

   TCP is constantly being improved.  A simpler formula, which gives an
   upper bound on the performance of any AIMD algorithm which is likely
   to be implemented in TCP in the future, was derived by Ott, et al.

                     MSS   1
           BW = C    --- -------
                     RTT sqrt(p)

   where C is 0.93.
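
   Both formulae can be transcribed directly into Python (a sketch;
   variable names follow the text, the result is in bits per second,
   and it is sensitive to the RTO value, for which the text suggests
   approximately 5*RTT):

      from math import sqrt

      def reno_throughput(mss_bytes, rtt, p, rto):
          # Padhye et al. approximation for Reno TCP.
          denom = (rtt * sqrt(1.33 * p) +
                   rto * p * (1 + 32 * p**2) * min(1.0, 3 * sqrt(0.75 * p)))
          return (mss_bytes * 8) / denom

      def aimd_upper_bound(mss_bytes, rtt, p, c=0.93):
          # Simpler upper bound for any AIMD congestion avoidance algorithm.
          return c * (mss_bytes * 8) / (rtt * sqrt(p))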

8.5.2.  Assumptions

   Both formulae assume that the TCP Receiver Window is not limiting the
   performance of the connection.  Because the receiver window is
   entirely determined by end-hosts, we assume that hosts will maximize
   the announced receiver window to maximize their network performance.

   Both of these formulae allow BW to become infinite if there is no
   loss.  However, an Internet path will drop packets at bottlenecked
   queues if the load is too high.  Thus, a completely lossless TCP/IP
   network can never occur (unless the network is being underutilized).

   The RTT used is the arithmetic average, including queuing delays.

   The formulae are for a single TCP connection.  If a path carries many
   TCP connections, each will follow the formulae above independently.

   The formulae assume long-running TCP connections.  For connections
   that are extremely short (<10 packets) and don't lose any packets,
   performance is driven by the TCP slow-start algorithm.  For
   connections of medium length, where on average only a few segments
   are lost, single connection performance will actually be slightly
   better than given by the formulae above.

   The difference between the simple and complex formulae above is that
   the complex formula includes the effects of TCP retransmission
   timeouts.  For very low levels of packet loss (significantly less
   than 1%), timeouts are unlikely to occur, and the formulae lead to
   very similar results.  At higher packet losses (1% and above), the
   complex formula gives a more accurate estimate of performance (which
   will always be significantly lower than the result from the simple
   formula).

   Note that these formulae break down as p approaches 100%.

8.5.3.  Analysis of Link-Layer Effects on TCP Performance

   Consider the following example:

   A designer invents a new wireless link layer which, on average, loses
   1% of IP packets.  The link layer supports packets of up to 1040
   bytes, and has a one-way delay of 20 msec.

   If this link were to be used on an Internet path with a round trip
   time greater than 80ms, the upper bound may be computed by:

   For MSS, use 1000 bytes to exclude the 40 bytes of minimum IPv4 and
   TCP headers.

   For RTT, use 120 msec (80 msec for the Internet part, plus 20 msec
   each way for the new wireless link).

   For p, use .01.  For C, assume 1.

   The simple formula gives:

      BW = (1000 * 8 bits) / (.120 sec * sqrt(.01)) = 666 kbit/sec

   The more complex formula gives:

      BW = 402.9 kbit/sec

   If this were a 2 Mb/s wireless LAN, the designers might be somewhat
   disappointed.
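
   The simple-formula arithmetic above can be reproduced directly (a
   sketch; the complex-formula figure is not recomputed here because it
   depends on the exact RTO assumption):

      from math import sqrt

      mss_bits = 1000 * 8    # MSS excluding 40 bytes of IPv4 + TCP headers
      rtt      = 0.120       # seconds: 80 ms Internet + 20 ms each way
      p        = 0.01        # 1% packet loss
      c        = 1.0         # as assumed above

      bw = c * mss_bits / (rtt * sqrt(p))
      print(round(bw / 1000), "kbit/s")    # ~667 (the text rounds to 666)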

   Some observations on performance:

   1.  We have assumed that the packet losses on the link layer are
       interpreted as congestion by TCP.  This is a "fact of life" that
       must be accepted.

   2.  The equations for TCP performance are all expressed in terms of
       packet loss, but many subnetwork designers think in terms of
       bit-error ratio.  *If* channel bit errors are independent, then
       the probability of a packet being corrupted is:

         p = 1 - ([1 - BER]^[FRAME_SIZE*8])

       Here we assume FRAME_SIZE is in bytes and "^" represents
       exponentiation.  It includes the user data and all headers
        (TCP,IP and subnetwork).  (Note: this analysis assumes the
       subnetwork does not perform ARQ or transparent fragmentation
       [RFC3366].)  If the inequality

         BER * [FRAME_SIZE*8] << 1

       holds, the packet loss probability p can be approximated by:

         p = BER * [FRAME_SIZE*8]

        These equations can be used to apply BER to the performance
        equations above (see the sketch at the end of this section).

       Note that FRAME_SIZE can vary from one packet to the next.  Small
       packets (such as TCP acks) generally have a smaller probability
       of packet error than, say, a TCP packet carrying one MSS (maximum
       segment size) of user data.  A flow of small TCP acks can be
       expected to be slightly more reliable than a stream of larger TCP
       data segments.

       It bears repeating that the above analysis assumes that bit
       errors are statistically independent.  Because this is not true
       for many real links, our computation of p is actually an upper
       bound, not the exact probability of packet loss.

       There are many reasons why bit errors are not independent on real
       links.  Many radio links are affected by propagation fading or by
       interference that lasts over many bit times.  Also, links with
       Forward Error Correction (FEC) generally have very non-uniform
       bit error distributions that depend on the type of FEC, but in
       general the uncorrected errors tend to occur in bursts even when
       channel symbol errors are independent.  In all such cases, our
       computation of p from BER can only place an upper limit on the
       packet loss rate.

       If the distribution of errors under the FEC scheme is known, one
       could apply the same type of analysis as above, using the correct
       distribution function for the BER.  It is more likely in these
       FEC cases, however, that empirical methods are needed to
       determine the actual packet loss rate.

   3.  Note that the packet size plays an important role.  If the
       subnetwork loss characteristics are such that large packets have
       the same probability of loss as smaller packets, then larger
       packets will yield improved performance.

   4.  We have chosen a specific RTT that might occur on a wide-area
       Internet path within the USA.  It is important to recognize that
       a variety of RTT values are experienced in the Internet.

       For example, RTTs are typically less than 10 msec in a wired LAN
       environment when communicating with a local host.  International
       connections may have RTTs of 200 msec or more.  Modems and other
       low-capacity links can add considerable delay due to their long
       packet transmission (serialisation) times.

       Links over geostationary repeater satellites have one-way speed-
       of-light delays of around 250ms, a minimum of 125ms propagation
       delay up to the satellite and 125ms down.  The RTT of an end-to-
       end TCP connection that includes such a link can be expected to
       be greater than 250ms.

       Queues on heavily-congested links may back up, increasing RTTs.
       Finally, virtual private networks (VPNs) and other forms of
       encryption and tunneling can add significant end-to-end delay to
       network connections.
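
   The bit-error-ratio conversion given in observation 2 can be
   sketched as follows (assuming independent bit errors, so the result
   is an upper bound on the packet loss rate of real links; the example
   frame size and BER are illustrative):

      def packet_loss_prob(ber, frame_size_bytes):
          # Exact expression assuming independent bit errors.
          return 1 - (1 - ber) ** (frame_size_bytes * 8)

      def packet_loss_prob_approx(ber, frame_size_bytes):
          # Valid when BER * FRAME_SIZE*8 << 1.
          return ber * frame_size_bytes * 8

      # Example: a 1500-byte frame on a channel with BER = 1e-6.
      print(packet_loss_prob(1e-6, 1500))         # ~0.0119
      print(packet_loss_prob_approx(1e-6, 1500))  # 0.012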

9.  Quality-of-Service (QoS) considerations

   It is generally recognized that specific service guarantees are
   needed to support real-time multimedia, toll-quality telephony, and
   other performance-critical applications.  The provision of such
   Quality of Service guarantees in the Internet is an active area of
   research and standardization.  The IETF has not converged on a single
   service model, set of services, or single mechanism that will offer
   useful guarantees to applications and be scalable to the Internet.
   Indeed, the IETF does not have a single definition of Quality of
   Service.  [RFC2990] represents a current understanding of the
   challenges in architecting QoS for the Internet.

   There are presently two architectural approaches to providing
   mechanisms for QoS support in the Internet.

   IP Integrated Services (Intserv) [RFC1633] provides fine-grained
   service guarantees to individual flows.  Flows are identified by a
   flow specification (flowspec), which creates a stateful association
   between individual packets by matching fields in the packet header.
   Capacity is reserved for the flow, and appropriate traffic
   conditioning and scheduling is installed in routers along the path.
   The ReSerVation Protocol (RSVP) [RFC2205] [RFC2210] is usually, but
   need not necessarily be, used to install the flow QoS state.  Intserv
   defines two services, in addition to the Default (best effort)
   service.

   1.  Guaranteed Service (GS) [RFC2212] offers hard upper bounds on
       delay to flows that conform to a traffic specification (TSpec).
       It uses a fluid-flow model to relate the TSpec and reserved
       bandwidth (RSpec) to variable delay.  Non-conforming packets are
       forwarded on a best-effort basis.

   2.  Controlled Load Service (CLS) [RFC2211] offers delay and packet
       loss equivalent to that of an unloaded network to flows that
       conform to a TSpec, but no hard bounds.  Non-conforming packets
       are forwarded on a best-effort basis.

   Intserv requires installation of state information in every
   participating router.  Performance guarantees cannot be made unless
   this state is present in every router along the path.  This, along
   with RSVP processing and the need for usage-based accounting, is
   believed to have scalability problems, particularly in the core of
   the Internet [RFC2208].

   IP Differentiated Services (Diffserv) [RFC2475] provides a "toolkit"
   offering coarse-grained controls to aggregates of flows.  Diffserv in
   itself does *not* provide QoS guarantees, but can be used to
   construct services with QoS guarantees across a Diffserv domain.
   Diffserv attempts to address the scaling issues associated with
   Intserv by requiring state awareness only at the edge of a Diffserv
   domain.  At the edge, packets are classified into flows, and the
   flows are conditioned (marked, policed, or shaped) to a traffic
   conditioning specification (TCS).  A Diffserv Codepoint (DSCP),
   identifying a per-hop behavior (PHB), is set in each packet header.
   The DSCP is carried in the DS-field, subsuming six bits of the former
   Type-of-Service (ToS) byte [RFC791] of the IP header [RFC2474].   The
   PHB denotes the forwarding behavior to be applied to the packet in
   each node in the Diffserv domain.  Although there is a "recommended"
   DSCP associated with each PHB, the mappings from DSCPs to PHBs are
   defined by the DS-domain.  In fact, there can be several DSCPs
   associated with the same PHB.  Diffserv presently defines three PHBs.

   1.  The class selector PHB [RFC2474] replaces the IP precedence field
       of the former ToS byte.  It offers relative forwarding
       priorities.

   2.  The Expedited Forwarding (EF) PHB [RFC3246] [RFC3248] guarantees
       that packets will have a well-defined minimum departure rate
       which, if not exceeded, ensures that the associated queues are
       short or empty.  EF is intended to support services that offer
       tightly-bounded loss, delay, and delay jitter.

   3.  The Assured Forwarding (AF) PHB group [RFC2597] offers different
       levels of forwarding assurance for each aggregated flow of
       packets.  Each AF group is independently allocated forwarding
       resources.  Packets are marked with one of three drop
       precedences; those with the highest drop precedence are dropped
       with lower probability than those marked with the lowest drop
       precedence.  DSCPs are recommended for four independent AF
       groups, although a DS domain can have more or fewer AF groups.

   Ongoing work in the IETF is addressing ways to support Intserv with
   Diffserv.  There is some belief (e.g., as expressed in [RFC2990])
   that such an approach will allow individual flows to receive service
   guarantees and scale to the global Internet.

   The QoS guarantees that can be offered by the IP layer are a product
   of two factors:

   1.  the concatenation of the QoS guarantees offered by the subnets
       along the path of a flow.  This implies that a subnet may wish to
       offer multiple services (with different QoS guarantees) to the IP
       layer, which can then determine which flows use which subnet
       service.  To put it another way, forwarding behavior in the
       subnet needs to be "clued" by the forwarding behavior (service or
       PHB) at the IP layer, and

   2.  the operation of a set of cooperating mechanisms, such as
       bandwidth reservation and admission control, policy management,
       traffic classification, traffic conditioning (marking, policing
       and/or shaping), selective discard, queuing, and scheduling.
       Note that support for QoS in subnets may require similar
       mechanisms, especially when these subnets are general topology
       subnets (e.g., ATM, frame relay, or MPLS) or shared media
       subnets.

   Many subnetwork designers face inherent tradeoffs between delay,
   throughput, reliability, and cost.  Other subnetworks have parameters
   that manage bandwidth, internal connection state, and the like.
   Therefore, the following subnetwork capabilities may be desirable,
   although some might be trivial or moot if the subnet is a dedicated
   point-to-point link.

   1.  The subnetwork should have the ability to reserve bandwidth for a
       connection or flow and schedule packets accordingly.

   2.  Bandwidth reservations should be based on a one- or two-token
       bucket model, depending on whether the service is intended to
       support constant-rate or bursty traffic (a minimal token-bucket
       sketch follows this list).

   3.  If a connection or flow does not use its reserved bandwidth at a
       given time, the unused bandwidth should be available for other
       flows.

   4.  Packets in excess of a connection or flow's agreed rate should be
       forwarded as best-effort or discarded, depending on the service
       offered by the subnet to the IP layer.

   5.  If a subnet contains error control mechanisms (retransmission
       and/or FEC), it should be possible for the IP layer to influence
       the inherent tradeoffs between uncorrected errors, packet losses,
       and delay.  These capabilities at the subnet/IP layer service
       boundary correspond to selection of more or less error control
       and/or to selection of particular error control mechanisms within
       the subnetwork.

   6.  The subnet layer should know, and be able to inform the IP layer,
       how much fixed delay and delay jitter it offers for a flow or
       connection.  If the Intserv model is used, the delay jitter
       component may be best expressed in terms of the TSpec/RSpec model
       described in [RFC2212].

   7.  Support of the Diffserv class selectors [RFC2474] suggests that
       the subnet might consider mechanisms that support priorities.
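
   As an illustration of capability 2 above, a single token bucket
   policer can be sketched as follows (the rate and depth in bytes are
   illustrative parameters; a two-bucket service would chain two such
   buckets):

      import time

      class TokenBucket:
          # Minimal single token bucket: 'rate' bytes of credit per
          # second, at most 'depth' bytes accumulated (the burst size).
          def __init__(self, rate, depth):
              self.rate, self.depth = rate, depth
              self.tokens = depth
              self.last = time.monotonic()

          def conforms(self, packet_len):
              now = time.monotonic()
              self.tokens = min(self.depth,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if packet_len <= self.tokens:
                  self.tokens -= packet_len   # in-profile: consume credit
                  return True
              return False                    # out-of-profile: mark or drop

      tb = TokenBucket(rate=125000, depth=3000)   # ~1 Mbit/s, 3 kB burst
      print(tb.conforms(1500), tb.conforms(1500), tb.conforms(1500))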

10.  Fairness vs Performance

   Subnetwork designers should be aware of the tradeoffs between
   fairness and efficiency inherent in many transmission scheduling
   algorithms.  For example, many local area networks use contention
   protocols to resolve access to a shared transmission channel.  These
   protocols represent overhead.  While limiting the amount of data that
   a subnet node may transmit per contention cycle helps assure timely
   access to the channel for each subnet node, it also increases
   contention overhead per unit of data sent.

   In some mobile radio networks, capacity is limited by interference,
   which in turn depends on average transmitter power.  Some receivers
   may require considerably more transmitter power (generating more
   interference and consuming more channel capacity) than others.

   In each case, the scheduling algorithm designer must balance
   competing objectives: providing a fair share of capacity to each
   subnet node while maximizing the total capacity of the network.  One
   approach for balancing performance and fairness is outlined in
   [ES00].

11.  Delay Characteristics

   The TCP sender bases its retransmission timeout (RTO) on measurements
   of the round trip delay experienced by previous packets.  This allows
   TCP to adapt automatically to the very wide range of delays found on
   the Internet.  The recommended algorithms are described in [RFC2988].
   Evaluations of TCP's retransmission timer can be found in [AP99] and
   [LS00].

   These algorithms model the delay along an Internet path as a
   normally-distributed random variable with a slowly-varying mean and
   standard deviation.  TCP estimates these two parameters by
   exponentially smoothing individual delay measurements, and it sets
   the RTO to the estimated mean delay plus some fixed number of
   standard deviations.  (The algorithm actually uses mean deviation as
   an approximation to standard deviation, because it is easier to

   The goal is to compute an RTO that is small enough to detect and
   recover from packet losses while minimizing unnecessary ("spurious")
   retransmissions when packets are unexpectedly delayed but not lost.
   Although these goals conflict, the algorithm works well when the
   delay variance along the Internet path is low, or the packet loss
   rate is low.

   If the path delay variance is high, TCP sets an RTO that is much
   larger than the mean of the measured delays.  If the packet loss rate
   is low, the large RTO is of little consequence, as timeouts occur
   only rarely.  Conversely, if the path delay variance is low, then TCP
   recovers quickly from lost packets; again, the algorithm works well.
   However, when delay variance and the packet loss rate are both high,
   these algorithms perform poorly, especially when the mean delay is
   also high.

   Because TCP uses returning acknowledgments as a "clock" to time the
   transmission of additional data, excessively high delays (even if the
   delay variance is low) also affect TCP's ability to fully utilize a
   high-speed transmission pipe.  It also slows the recovery of lost
   packets, even when delay variance is small.

   Subnetwork designers should therefore minimize all three parameters
   (delay, delay variance, and packet loss) as much as possible.

   In many subnetworks, these parameters are inherently in conflict.
   For example, on a mobile radio channel, the subnetwork designer can
   use retransmission (ARQ) and/or forward error correction (FEC) to
   trade off delay, delay variance, and packet loss in an effort to
   improve TCP performance.  While ARQ increases delay variance, FEC
   does not.  However, FEC (especially when combined with interleaving)
   often increases mean delay, even on good channels where ARQ
   retransmissions are not needed and ARQ would not increase either the
   delay or the delay variance.

   The tradeoffs among these error control mechanisms and their
   interactions with TCP can be quite complex, and are the subject of
   much ongoing research.  We therefore recommend that subnetwork
   designers provide as much flexibility as possible in the
   implementation of these mechanisms, and provide access to them as
   discussed above in the section on Quality of Service.

12.  Bandwidth Asymmetries

   Some subnetworks may provide asymmetric bandwidth (or may cause TCP
   packet flows to experience asymmetry in the capacity) and the
   Internet protocol suite will generally still work fine.  However,
   there is a case when such a scenario reduces TCP performance.  Since
   TCP data segments are "clocked" out by returning acknowledgments, TCP
   senders are limited by the rate at which ACKs can be returned
   [BPK98].  Therefore, when the ratio of the available capacity of the
   Internet path carrying the data to the bandwidth of the return path
   of the acknowledgments is too large, the slow return of the ACKs
   directly impacts performance.  Since ACKs are generally smaller than
   data segments, TCP can tolerate some asymmetry, but as a general
   rule, designers of subnetworks should be aware that subnetworks with
   significant asymmetry can result in reduced performance, unless
   steps are taken to mitigate this [RFC3449].
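
   As a rough back-of-the-envelope illustration of this limit (the link
   rates, segment sizes, and the one-ACK-per-two-segments delayed-ACK
   assumption are all illustrative):

      ack_path_bps  = 56_000        # slow return channel
      ack_size_bits = 40 * 8        # minimal TCP/IP ACK
      segs_per_ack  = 2             # delayed ACKs: one ACK per two segments
      data_seg_bits = 1500 * 8

      acks_per_sec = ack_path_bps / ack_size_bits
      max_forward  = acks_per_sec * segs_per_ack * data_seg_bits
      print(round(max_forward / 1e6, 2), "Mbit/s")   # ~4.2 Mbit/s ceiling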

   Several strategies have been identified for reducing the impact of
   asymmetry of the network path between two TCP end hosts, e.g.,
   [RFC3449].  These techniques attempt to reduce the number of ACKs
   transmitted over the return path (low bandwidth channel) by changes
   at the end host(s), and/or by modification of subnetwork packet
   forwarding.  While these solutions may mitigate the performance
   issues caused by asymmetric subnetworks, they do have associated cost
   and may have other implications.  A fuller discussion of strategies
   and their implications is provided in [RFC3449].

13.  Buffering, flow and congestion control

   Many subnets include multiple links with varying traffic demands and
   possibly different transmission speeds.  At each link there must be a
   queuing system, including buffering, scheduling, and a capability to
   discard excess subnet packets.  These queues may also be part of a
   subnet flow control or congestion control scheme.

   For the purpose of this discussion, we talk about packets without
   regard to whether they refer to a complete IP packet or a subnetwork
   frame.  At each queue, a packet experiences a delay that depends on
   competing traffic and the scheduling discipline, and is subjected to
   a local discarding policy.

   Some subnets may have flow or congestion control mechanisms in
   addition to packet dropping.  Such mechanisms can operate on
   components in the subnet layer, such as schedulers, shapers, or
   discarders, and can affect the operation of IP forwarders at the
   edges of the subnet.  However, with the exception of Explicit
   Congestion Notification [RFC3168] (discussed below), IP has no way to
   pass explicit congestion or flow control signals to TCP.

   TCP traffic, especially aggregated TCP traffic, is bursty.  As a
   result, instantaneous queue depths can vary dramatically, even in
   nominally stable networks.  For optimal performance, packets should
   be dropped in a controlled fashion, not just when buffer space is
   unavailable.  How much buffer space should be supplied is still a
   matter of debate, but as a rule of thumb, each node should have
   enough buffering to hold one link_bandwidth*link_delay product's
   worth of data for each TCP connection sharing the link.

   This is often difficult to estimate, since it depends on parameters
   beyond the subnetwork's control or knowledge.  Internet nodes
   generally do not implement admission control policies, and cannot
   limit the number of TCP connections that use them.  In general, it is
   wise to err in favor of too much buffering rather than too little.
   It may also be useful for subnets to incorporate mechanisms that
   measure propagation delays to assist in buffer sizing calculations.
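
   The rule of thumb above amounts to simple arithmetic (a sketch; the
   link rate, delay, and number of connections are illustrative):

      link_bandwidth_bps = 10_000_000   # 10 Mbit/s link
      link_delay_s       = 0.100        # 100 ms delay
      tcp_connections    = 16           # expected number of sharing flows

      bdp_bytes    = link_bandwidth_bps * link_delay_s / 8
      buffer_bytes = tcp_connections * bdp_bytes
      print(int(bdp_bytes), "bytes per flow,", int(buffer_bytes), "total")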

   There is a rough consensus in the research community that active
   queue management is important to improving fairness, link
   utilization, and throughput [RFC2309].  Although there are questions
   and concerns about the effectiveness of active queue management
   (e.g., [MBDL99]), it is widely considered an improvement over tail-
   drop discard policies.

   One form of active queue management is the Random Early Detection
   (RED) algorithm [RED93], a family of related algorithms.  In one
   version of RED, an exponentially-weighted moving average of the queue
   depth is maintained:

      When this average queue depth is between a maximum threshold
      max_th and a minimum threshold min_th, the probability of packets
      that are dropped is proportional to the amount by which the
      average queue depth exceeds min_th.

      When this average queue depth is equal to max_th, the drop
      probability is equal to a configurable parameter max_p.

      When this average queue depth is greater than max_th, packets are
      always dropped.
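
   A minimal sketch of the drop decision just described (the weight
   used for the moving average and the thresholds are illustrative;
   real implementations differ in many details [RED93]):

      import random

      class RedQueue:
          def __init__(self, min_th, max_th, max_p, weight=0.2):
              self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
              self.weight = weight     # EWMA weight for average queue depth
              self.avg = 0.0

          def should_drop(self, current_depth):
              # Update the exponentially-weighted moving average.
              self.avg = ((1 - self.weight) * self.avg +
                          self.weight * current_depth)
              if self.avg < self.min_th:
                  return False         # below min_th: never drop
              if self.avg > self.max_th:
                  return True          # above max_th: always drop
              # Between the thresholds the drop probability rises linearly,
              # reaching max_p when the average equals max_th.
              p = (self.max_p * (self.avg - self.min_th) /
                   (self.max_th - self.min_th))
              return random.random() < p

      q = RedQueue(min_th=5, max_th=15, max_p=0.1)
      print([q.should_drop(depth) for depth in range(0, 100, 10)])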

   Numerous variants on RED appear in the literature, and there are
   other active queue management algorithms which claim various
   advantages over RED [GM02].

   With an active queue management algorithm, dropped packets become a
   feedback signal to trigger more appropriate congestion behavior by
   the TCPs in the end hosts.  Randomization of dropping tends to break
   up the observed tendency of TCP windows belonging to different TCP
   connections to become synchronized by correlated drops, and it also
   imposes a degree of fairness on those connections that implement TCP
   congestion avoidance properly.  Another important property of active
   queue management algorithms is that they attempt to keep average
   queue depths short while accommodating large short-term bursts.

   Since TCP neither knows nor cares whether congestive packet loss
   occurs at the IP layer or in a subnet, it may be advisable for
   subnets that perform queuing and discarding to consider implementing
   some form of active queue management.  This is especially true if
   large aggregates of TCP connections are likely to share the same
   queue.  However, active queue management may be less effective in the
   case of many queues carrying smaller aggregates of TCP connections,
   e.g., in an ATM switch that implements per-VC queuing.

   Note that the performance of active queue management algorithms is
   highly sensitive to settings of configurable parameters, and also to
   factors such as RTT [MBB00] [FB00].

   Some subnets, most notably ATM, perform segmentation and reassembly
   at the subnetwork edges.  Care should be taken here in designing
   discard policies.  If the subnet discards a fragment of an IP packet,
   then the remaining fragments become an unproductive load on the
   subnet that can markedly degrade end-to-end performance [RF95].
   Subnetworks should therefore attempt to discard these extra fragments
   whenever one of them must be discarded.  If the IP packet has already
   been partially forwarded when discarding becomes necessary, then
   every remaining fragment except the one marking the end of the IP
   packet should also be discarded.  For ATM subnets, this specifically
   means using Early Packet Discard and Partial Packet Discard [ATMFTM].

   Some subnets include flow control mechanisms that effectively require
   that the rate of traffic flows be shaped upon entry to the subnet.
   One example of such a subnet mechanism is in the ATM Available Bit
   rate (ABR) service category [ATMFTM].  Such flow control mechanisms
   have the effect of making the subnet nearly lossless by pushing
   congestion into the IP routers at the edges of the subnet.  In such a
   case, adequate buffering and discard policies are needed in these
   routers to deal with a subnet that appears to have varying bandwidth.
   Whether there is a benefit in this kind of flow control is
   controversial; there are numerous simulation and analytical studies
   that go both ways.  It appears that some of the issues leading to
   such different results include sensitivity to ABR parameters, use of
   binary rather than explicit rate feedback, use (or not) of per-VC
   queuing, and the specific ATM switch algorithms selected for the
   study.  Anecdotally, some large networks that used IP over ABR to
   carry TCP traffic have claimed it to be successful, but have
   published no results.

   Another possible approach to flow control in the subnet would be to
   work with TCP Explicit Congestion Notification (ECN) semantics
   [RFC3168] through utilizing explicit congestion indicators in subnet
   frames.  Routers at the edges of the subnet, rather than shaping,
   would set the explicit congestion bit in those IP packets that are
   received in subnet frames that have an ECN indication.  Nodes in the
   subnet would need to implement an active queue management protocol
   that marks subnet frames instead of dropping them.

   ECN is currently a proposed standard, but it is not yet widely
   deployed.

14.  Compression

   Application data compression is a function that can usually be
   omitted in the subnetwork.  The endpoints typically have more CPU and
   memory resources to run a compression algorithm and a better
   understanding of what is being compressed.  End-to-end compression
   benefits every network element in the path, while subnetwork-layer
   compression, by definition, benefits only a single subnetwork.

   Data presented to the subnetwork layer may already be in a compressed
   format (e.g., a JPEG file), compressed at the application layer
   (e.g., the optional "gzip", "compress", and "deflate" compression in
   HTTP/1.1 [RFC2616]), or compressed at the IP layer (the IP Payload
   Compression Protocol [RFC3173] supports DEFLATE [RFC2394] and LZS
   [RFC2395]).  Compression at the subnetwork edges is of no benefit for
   any of these cases.

   The subnetwork may also process data that has been encrypted by the
   application (OpenPGP [RFC2440] or S/MIME [RFC2633]), just above TCP
   (SSL, TLS [RFC2246]), or just above IP (IPsec ESP [RFC2406]).
   Ciphers generate high-entropy bit streams lacking any patterns that
   can be exploited by a compression algorithm.

   However, much data is still transmitted uncompressed over the
   Internet, so subnetwork compression may be beneficial.  Any
   subnetwork compression algorithm must not expand uncompressible data,
   e.g., data that has already been compressed or encrypted.

   We make a strong recommendation that subnetworks operating at low
   speed or with small MTUs compress IP and transport-level headers (TCP
   and UDP) using several header compression schemes developed within
   the IETF [RFC3150].  An uncompressed 40-byte TCP/IP header takes
   about 33 milliseconds to send at 9600 bps.  "VJ" TCP/IP header
   compression [RFC1144] compresses most headers to 3-5 bytes, reducing
   transmission time to several milliseconds on dialup modem links.
   This is especially beneficial for small, latency-sensitive packets in
   interactive sessions.
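
   The transmission-time arithmetic above is easy to check (a sketch;
   the 5-byte figure is the upper end of the compressed-header range
   quoted in the text):

      link_bps = 9600

      def tx_time_ms(header_bytes):
          return header_bytes * 8 / link_bps * 1000

      print(round(tx_time_ms(40), 1), "ms uncompressed")        # ~33.3 ms
      print(round(tx_time_ms(5), 1), "ms with VJ compression")  # ~4.2 ms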

   Similarly, RTP compression schemes, such as CRTP [RFC2508] and ROHC
   [RFC3095], compress most IP/UDP/RTP headers to 1-4 bytes.  The
   resulting savings are especially significant when audio packets are
   kept small to minimize store-and-forward latency.

   Designers should consider the effect of the subnetwork error rate on
   the performance of header compression.  TCP ordinarily recovers from
   lost packets by retransmitting only those packets that were actually
   lost; packets arriving correctly after a packet loss are kept on a
   resequencing queue and do not need to be retransmitted.  In VJ TCP/IP
   [RFC1144] header compression, however, the receiver cannot explicitly
   notify a sender of data corruption and subsequent loss of
   synchronization between compressor and decompressor.  It relies
   instead on TCP retransmission to re-synchronize the decompressor.
   After a packet is lost, the decompressor must discard every
   subsequent packet, even if the subnetwork makes no further errors,
   until the sending TCP retransmits to re-synchronize the decompressor.
   This effect can substantially magnify the effect of subnetwork packet
   losses if the sending TCP window is large, as it will often be on a
   path with a large bandwidth*delay product [LRKOJ99].

   Alternate header compression schemes, such as those described in
   [RFC2507], include an explicit request for retransmission of an
   uncompressed packet to allow decompressor resynchronization without
   waiting for a TCP retransmission.  However, these schemes are not yet
   in widespread use.

   Neither of these TCP header compression schemes compresses widely-
   used TCP options such as selective acknowledgements (SACK).  Both
   fail to compress TCP traffic that makes use of explicit congestion
   notification (ECN).  Work is under way in the IETF ROHC WG to address
   these shortcomings in a ROHC header compression scheme for TCP
   [RFC3095] [RFC3096].

   The subnetwork error rate also is important for RTP header
   compression.  CRTP uses delta encoding, so a packet loss on the link
   causes uncertainty about the subsequent packets, which often must be
   discarded until the decompressor has notified the compressor and the
   compressor has sent re-synchronizing information.  This typically
   takes slightly more than the end-to-end path round-trip time.  For
   links that combine significant error rates with latencies that
   require multiple packets to be in flight at a time, this leads to
   significant error propagation, i.e., subsequent losses caused by an
   initial loss.

   For links that are both high-latency (multiple packets in flight from
   a typical RTP stream) and error-prone, RTP ROHC provides a more
   robust way of RTP header compression, at a cost of higher complexity
   at the compressor and decompressor.  For example, within a talk
   spurt, only extended losses of (depending on the mode chosen) 12-64
   packets typically cause error propagation.
