Internet Engineering Task Force (IETF)                         M. Mathis
Request for Comments: 8337                                   Google, Inc
Category: Experimental                                         A. Morton
ISSN: 2070-1721                                                AT&T Labs
                                                              March 2018


            Model-Based Metrics for Bulk Transport Capacity

Abstract

   This document introduces a new class of Model-Based Metrics designed
   to assess if a complete Internet path can be expected to meet a
   predefined Target Transport Performance by applying a suite of IP
   diagnostic tests to successive subpaths.  The subpath-at-a-time
   tests can be robustly applied to critical infrastructure, such as
   network interconnections or even individual devices, to accurately
   detect if any part of the infrastructure will prevent paths
   traversing it from meeting the Target Transport Performance.

   Model-Based Metrics rely on mathematical models to specify a
   Targeted IP Diagnostic Suite, a set of IP diagnostic tests designed
   to assess whether common transport protocols can be expected to meet
   a predetermined Target Transport Performance over an Internet path.

   For Bulk Transport Capacity, the IP diagnostics are built using test
   streams and statistical criteria for evaluating packet transfer that
   mimic TCP over the complete path.  The temporal structure of the
   test stream (e.g., bursts) mimics TCP or other transport protocols
   carrying bulk data over a long path.  However, they are constructed
   to be independent of the details of the subpath under test, end
   systems, or applications.  Likewise, the success criteria evaluate
   the packet transfer statistics of the subpath against criteria
   determined by protocol performance models applied to the Target
   Transport Performance of the complete path.  The success criteria
   also do not depend on the details of the subpath, end systems, or
   applications.
Status of This Memo

   This document is not an Internet Standards Track specification; it is
   published for examination, experimental implementation, and
   evaluation.

   This document defines an Experimental Protocol for the Internet
   community.  This document is a product of the Internet Engineering
   Task Force (IETF).  It represents the consensus of the IETF
   community.  It has received public review and has been approved for
   publication by the Internet Engineering Steering Group (IESG).  Not
   all documents approved by the IESG are candidates for any level of
   Internet Standard; see Section 2 of RFC 7841.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   https://www.rfc-editor.org/info/rfc8337.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1. Introduction ....................................................4
   2. Overview ........................................................5
   3. Terminology .....................................................8
      3.1. General Terminology ........................................8
      3.2. Terminology about Paths ...................................10
      3.3. Properties ................................................11
      3.4. Basic Parameters ..........................................12
      3.5. Ancillary Parameters ......................................13
      3.6. Temporal Patterns for Test Streams ........................14
      3.7. Tests .....................................................15
   4. Background .....................................................16
      4.1. TCP Properties ............................................18
      4.2. Diagnostic Approach .......................................20
      4.3. New Requirements Relative to RFC 2330 .....................21
   5. Common Models and Parameters ...................................22
      5.1. Target End-to-End Parameters ..............................22
      5.2. Common Model Calculations .................................22
      5.3. Parameter Derating ........................................23
      5.4. Test Preconditions ........................................24
   6. Generating Test Streams ........................................24
      6.1. Mimicking Slowstart .......................................25
      6.2. Constant Window Pseudo CBR ................................27
      6.3. Scanned Window Pseudo CBR .................................28
      6.4. Concurrent or Channelized Testing .........................28
   7. Interpreting the Results .......................................29
      7.1. Test Outcomes .............................................29
      7.2. Statistical Criteria for Estimating run_length ............31
      7.3. Reordering Tolerance ......................................33
   8. IP Diagnostic Tests ............................................34
      8.1. Basic Data Rate and Packet Transfer Tests .................34
         8.1.1. Delivery Statistics at Paced Full Data Rate ..........35
         8.1.2. Delivery Statistics at Full Data Windowed Rate .......35
         8.1.3. Background Packet Transfer Statistics Tests ..........35
      8.2. Standing Queue Tests ......................................36
         8.2.1. Congestion Avoidance .................................37
         8.2.2. Bufferbloat ..........................................37
         8.2.3. Non-excessive Loss ...................................38
         8.2.4. Duplex Self-Interference .............................38
      8.3. Slowstart Tests ...........................................39
         8.3.1. Full Window Slowstart Test ...........................39
         8.3.2. Slowstart AQM Test ...................................39
      8.4. Sender Rate Burst Tests ...................................40
      8.5. Combined and Implicit Tests ...............................41
         8.5.1. Sustained Full-Rate Bursts Test ......................41
         8.5.2. Passive Measurements .................................42
   9. Example ........................................................43
      9.1. Observations about Applicability ..........................44
   10. Validation ....................................................45
   11. Security Considerations .......................................46
   12. IANA Considerations ...........................................47
   13. Informative References ........................................47
   Appendix A.  Model Derivations ....................................52
     A.1.  Queueless Reno ............................................52
   Appendix B.  The Effects of ACK Scheduling ........................53
   Acknowledgments ...................................................55
   Authors' Addresses ................................................55

1. Introduction

   Model-Based Metrics (MBM) rely on peer-reviewed mathematical models
   to specify a Targeted IP Diagnostic Suite (TIDS), a set of IP
   diagnostic tests designed to assess whether common transport
   protocols can be expected to meet a predetermined Target Transport
   Performance over an Internet path.  This document describes the
   modeling framework to derive the test parameters for assessing an
   Internet path's ability to support a predetermined Bulk Transport
   Capacity.

   Each test in the TIDS measures some aspect of IP packet transfer
   needed to meet the Target Transport Performance.  For Bulk Transport
   Capacity, the TIDS includes IP diagnostic tests to verify that there
   is sufficient IP capacity (data rate), sufficient queue space at
   bottlenecks to absorb and deliver typical transport bursts, a low
   enough background packet loss ratio to not interfere with congestion
   control, and other properties described below.  Unlike typical IP
   Performance Metrics (IPPM) that yield measures of network
   properties, Model-Based Metrics nominally yield pass/fail
   evaluations of the ability of standard transport protocols to meet
   the specific performance objective over some network path.

   In most cases, the IP diagnostic tests can be implemented by
   combining existing IPPM metrics with additional controls for
   generating test streams having a specified temporal structure
   (bursts or standing queues caused by constant bit rate streams,
   etc.) and statistical criteria for evaluating packet transfer.  The
   temporal structure of the test streams mimics transport protocol
   behavior over the complete path; the statistical criteria model the
   transport protocol's response to less-than-ideal IP packet transfer.
   In control theory terms, the tests are "open loop".  Note that
   running a test requires the coordinated activity of sending and
   receiving measurement points.
   This document addresses Bulk Transport Capacity.  It describes an
   alternative to the approach presented in "A Framework for Defining
   Empirical Bulk Transfer Capacity Metrics" [RFC3148].  Other Model-
   Based Metrics may cover other applications and transports, such as
   Voice over IP (VoIP) over UDP, RTP, and new transport protocols.

   This document assumes a traditional Reno TCP-style, self-clocked,
   window-controlled transport protocol that uses packet loss and
   Explicit Congestion Notification (ECN) Congestion Experienced (CE)
   marks for congestion feedback.  There are currently some experimental
   protocols and congestion control algorithms that are rate based or
   otherwise fall outside of these assumptions.  In the future, these
   new protocols and algorithms may call for revised models.

   The MBM approach, i.e., mapping Target Transport Performance to a
   Targeted IP Diagnostic Suite (TIDS) of IP tests, solves some
   intrinsic problems with using TCP or other throughput-maximizing
   protocols for measurement.  In particular, all throughput-maximizing
   protocols (especially TCP congestion control) cause some level of
   congestion in order to detect when they have reached the available
   capacity limitation of the network.  This self-inflicted congestion
   obscures the network properties of interest and introduces non-linear
   dynamic equilibrium behaviors that make any resulting measurements
   useless as metrics because they have no predictive value for
   conditions or paths different from that of the measurement itself.
   In order to prevent these effects, it is necessary to avoid the
   effects of TCP congestion control in the measurement method.  These
   issues are discussed at length in Section 4.  Readers who are
   unfamiliar with basic properties of TCP and TCP-like congestion
   control may find it easier to start at Section 4 or 4.1.

   A Targeted IP Diagnostic Suite does not have such difficulties.  IP
   diagnostics can be constructed such that they make strong statistical
   statements about path properties that are independent of measurement
   details, such as vantage and choice of measurement points.

2. Overview

   This document describes a modeling framework for deriving a Targeted
   IP Diagnostic Suite from a predetermined Target Transport
   Performance.  It is not a complete specification and relies on other
   standards documents to define important details such as packet
   type-P selection, sampling techniques, vantage selection, etc.
   Fully Specified Targeted IP Diagnostic Suites (FSTIDSs) define all
   of these details.  A Targeted IP Diagnostic Suite (TIDS) refers to
   the subset of such a specification that is in scope for this
   document.  This terminology is further defined in Section 3.
   Section 4 describes some key aspects of TCP behavior and what they
   imply about the requirements for IP packet transfer.  Most of the IP
   diagnostic tests needed to confirm that the path meets these
   properties can be built on existing IPPM metrics, with the addition
   of statistical criteria for evaluating packet transfer and, in a few
   cases, new mechanisms to implement the required temporal structure.
   (One group of tests, the standing queue tests described in
   Section 8.2, don't correspond to existing IPPM metrics, but suitable
   new IPPM metrics can be patterned after the existing definitions.)

   Figure 1 shows the MBM modeling and measurement framework.  The
   Target Transport Performance at the top of the figure is determined
   by the needs of the user or application, which are outside the scope
   of this document.  For Bulk Transport Capacity, the main performance
   parameter of interest is the Target Data Rate.  However, since TCP's
   ability to compensate for less-than-ideal network conditions is
   fundamentally affected by the Round-Trip Time (RTT) and the Maximum
   Transmission Unit (MTU) of the complete path, these parameters must
   also be specified in advance based on knowledge about the intended
   application setting.  They may reflect a specific application over a
   real path through the Internet or an idealized application and
   hypothetical path representing a typical user community.  Section 5
   describes the common parameters and models derived from the Target
   Transport Performance.
                      Target Transport Performance
            (Target Data Rate, Target RTT, and Target MTU)
                                   |
                           ________V_________
                           |  mathematical  |
                           |     models     |
                           |                |
                           ------------------
          Traffic parameters |            | Statistical criteria
                             |            |
                      _______V____________V____Targeted IP____
                     |       |   * * *    | Diagnostic Suite  |
                _____|_______V____________V________________   |
              __|____________V____________V______________  |  |
              |           IP diagnostic tests            | |  |
              |              |            |              | |  |
              | _____________V__        __V____________  | |  |
              | |   traffic    |        |   Delivery  |  | |  |
              | |   pattern    |        |  Evaluation |  | |  |
              | |  generation  |        |             |  | |  |
              | -------v--------        ------^--------  | |  |
              |   |    v    test stream via   ^      |   | |--
              |   |  -->======================>--    |   | |
              |   |       subpath under test         |   |-
              ----V----------------------------------V--- |
                  | |  |                             | |  |
                  V V  V                             V V  V
              fail/inconclusive            pass/fail/inconclusive
          (traffic generation status)           (test result)

                   Figure 1: Overall Modeling Framework

   Mathematical TCP models are used to determine traffic parameters and
   subsequently to design traffic patterns that mimic TCP (which has
   burst characteristics at multiple time scales) or other transport
   protocols delivering bulk data and operating at the Target Data Rate,
   MTU, and RTT over a full range of conditions.  Using the techniques
   described in Section 6, the traffic patterns are generated based on
   the three Target parameters of the complete path (Target Data Rate,
   Target RTT, and Target MTU), independent of the properties of
   individual subpaths.  As much as possible, the test streams are
   generated deterministically (precomputed) to minimize the extent to
   which test methodology, measurement points, measurement vantage, or
   path partitioning affect the details of the measurement traffic.

   Section 7 describes packet transfer statistics and methods to test
   against the statistical criteria provided by the mathematical models.
   Since the statistical criteria typically apply to the complete path
   (a composition of subpaths) [RFC6049], in situ testing requires that
   the end-to-end statistical criteria be apportioned as separate
   criteria for each subpath.  Subpaths that are expected to be
   bottlenecks would then be permitted to contribute a larger fraction
   of the end-to-end packet loss budget.  In compensation, subpaths that
   are not expected to exhibit bottlenecks must be constrained to
   contribute less packet loss.  Thus, the statistical criteria for
   each subpath in each test of a TIDS are apportioned shares of the
   end-to-end statistical criteria for the complete path, as determined
   by the mathematical model.
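
   As a purely illustrative sketch of apportionment (the apportionment
   scheme itself is left to the TIDS designer), the following Python
   fragment splits a hypothetical end-to-end loss budget across three
   subpaths, giving the expected bottleneck the largest share.  All
   names and numbers are hypothetical.

      # Apportion an end-to-end loss budget (1 / target_run_length)
      # across subpaths so the per-subpath budgets sum to the
      # end-to-end budget.
      target_run_length = 50000          # hypothetical packets per loss
      end_to_end_loss_budget = 1.0 / target_run_length

      # Hypothetical shares; the expected bottleneck subpath is
      # permitted the largest fraction of the loss budget.
      shares = {"access": 0.6, "interconnect": 0.3, "core": 0.1}

      for name, share in shares.items():
          subpath_budget = end_to_end_loss_budget * share
          print(f"{name}: loss ratio <= {subpath_budget:.2e} "
                f"(run length >= {1.0 / subpath_budget:.0f} packets)")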

   Section 8 describes the suite of individual tests needed to verify
   all of the required IP delivery properties.  A subpath passes if and
   only if all of the individual IP diagnostic tests pass.  Any subpath
   that fails any test indicates that some users are likely to fail to
   attain their Target Transport Performance under some conditions.  In
   addition to passing or failing, a test can be deemed inconclusive for
   a number of reasons, including the following: the precomputed traffic
   pattern was not accurately generated, the measurement results were
   not statistically significant, the test failed to meet some required
   test preconditions, etc.  If all tests pass but some are
   inconclusive, then the entire suite is deemed to be inconclusive.
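
   The aggregation rule described above can be stated compactly in
   code.  The following sketch is only an illustration of that rule
   (the outcome names follow Section 7.1); the enumeration and the
   function are not defined by this document.

      from enum import Enum

      class Outcome(Enum):
          PASS = "pass"
          FAIL = "fail"
          INCONCLUSIVE = "inconclusive"

      def evaluate_tids(test_outcomes):
          """Aggregate individual IP diagnostic test outcomes.

          Any failure fails the subpath; if nothing failed but some
          tests were inconclusive, the whole suite is inconclusive.
          """
          if any(o is Outcome.FAIL for o in test_outcomes):
              return Outcome.FAIL
          if any(o is Outcome.INCONCLUSIVE for o in test_outcomes):
              return Outcome.INCONCLUSIVE
          return Outcome.PASS

      # One inconclusive result prevents an overall pass.
      print(evaluate_tids([Outcome.PASS, Outcome.INCONCLUSIVE]))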

   In Section 9, we present an example TIDS that might be representative
   of High Definition (HD) video and illustrate how Model-Based Metrics
   can be used to address difficult measurement situations, such as
   confirming that inter-carrier exchanges have sufficient performance
   and capacity to deliver HD video between ISPs.

   Since there is some uncertainty in the modeling process, Section 10
   describes a validation procedure to diagnose and minimize false
   positive and false negative results.

3. Terminology

   Terms containing underscores (rather than spaces) appear in
   equations and typically have algorithmic definitions.

3.1. General Terminology

   Target:  A general term for any parameter specified by or derived
      from the user's application or transport performance
      requirements.

   Target Transport Performance:  Application or transport performance
      target values for the complete path.  For Bulk Transport Capacity
      defined in this document, the Target Transport Performance
      includes the Target Data Rate, Target RTT, and Target MTU as
      described below.
   Target Data Rate:  The specified application data rate required for
      an application's proper operation.  Conventional Bulk Transport
      Capacity (BTC) metrics are focused on the Target Data Rate;
      however, these metrics have little or no predictive value because
      they do not consider the effects of the other two parameters of
      the Target Transport Performance -- the RTT and MTU of the
      complete paths.

   Target RTT (Round-Trip Time):  The specified baseline (minimum) RTT
      of the longest complete path over which the user expects to be
      able to meet the target performance.  TCP and other transport
      protocols' ability to compensate for path problems is generally
      proportional to the number of round trips per second.  The Target
      RTT determines both key parameters of the traffic patterns (e.g.,
      burst sizes) and the thresholds on acceptable IP packet transfer
      statistics.  The Target RTT must be specified considering
      appropriate packet sizes: MTU-sized packets on the forward path
      and ACK-sized packets (typically, header_overhead) on the return
      path.  Note that Target RTT is specified and not measured; MBM
      measurements derived for a given target_RTT will be applicable to
      any path with a smaller RTT.

   Target MTU (Maximum Transmission Unit):  The specified maximum MTU
      supported by the complete path over which the application expects
      to meet the target performance.  In this document, we assume a
      1500-byte MTU unless otherwise specified.  If a subpath has a
      smaller MTU, then it becomes the Target MTU for the complete path,
      and all model calculations and subpath tests must use the same
      smaller MTU.

   Targeted IP Diagnostic Suite (TIDS):  A set of IP diagnostic tests
      designed to determine if an otherwise ideal complete path
      containing the subpath under test can sustain flows at a specific
      target_data_rate using packets with a size of target_MTU when the
      RTT of the complete path is target_RTT.

   Fully Specified Targeted IP Diagnostic Suite (FSTIDS):  A TIDS
      together with additional specifications such as measurement packet
      type ("type-p" [RFC2330]) that are out of scope for this document
      and need to be drawn from other standards documents.

   Bulk Transport Capacity (BTC):  Bulk Transport Capacity metrics
      evaluate an Internet path's ability to carry bulk data, such as
      large files, streaming (non-real-time) video, and, under some
      conditions, web images and other content.  Prior efforts to define
      BTC metrics have been based on [RFC3148], which predates our
      understanding of TCP and the requirements described in Section 4.
      In general, "Bulk Transport" indicates that performance is
      determined by the interplay between the network, cross traffic,
      and congestion control in the transport protocol.  It excludes
      situations where performance is dominated by the RTT alone (e.g.,
      transactions) or bottlenecks elsewhere, such as in the application
      itself.

   IP diagnostic tests:  Measurements or diagnostics to determine if
      packet transfer statistics meet some precomputed target.

   traffic patterns:  The temporal patterns or burstiness of traffic
      generated by applications over transport protocols such as TCP.
      There are several mechanisms that cause bursts at various
      timescales as described in Section 4.1.  Our goal here is to mimic
      the range of common patterns (burst sizes, rates, etc.), without
      tying our applicability to specific applications, implementations,
      or technologies, which are sure to become stale.

   Explicit Congestion Notification (ECN):  See [RFC3168].

   packet transfer statistics:  Raw, detailed, or summary statistics
      about packet transfer properties of the IP layer including packet
      losses, ECN Congestion Experienced (CE) marks, reordering, or any
      other properties that may be germane to transport performance.

   packet loss ratio:  As defined in [RFC7680].

   apportioned:  To divide and allocate, for example, budgeting packet
      loss across multiple subpaths such that the losses will accumulate
      to less than a specified end-to-end loss ratio.  Apportioning
      metrics is essentially the inverse of the process described in
      [RFC5835].

   open loop:  A control theory term used to describe a class of
      techniques where systems that naturally exhibit circular
      dependencies can be analyzed by suppressing some of the
      dependencies, such that the resulting dependency graph is acyclic.

3.2. Terminology about Paths

   See [RFC2330] and [RFC7398] for existing terms and definitions.

   data sender:  Host sending data and receiving ACKs.

   data receiver:  Host receiving data and sending ACKs.

   complete path:  The end-to-end path from the data sender to the data
      receiver.
   subpath:  A portion of the complete path.  Note that there is no
      requirement that subpaths be non-overlapping.  A subpath can be as
      small as a single device, link, or interface.

   measurement point:  Measurement points as described in [RFC7398].

   test path:  A path between two measurement points that includes a
      subpath of the complete path under test.  If the measurement
      points are off path, the test path may include "test leads"
      between the measurement points and the subpath.

   dominant bottleneck:  The bottleneck that generally determines most
      packet transfer statistics for the entire path.  It typically
      determines a flow's self-clock timing, packet loss, and ECN CE
      marking rate, with other potential bottlenecks having less effect
      on the packet transfer statistics.  See Section 4.1 on TCP
      properties.

   front path:  The subpath from the data sender to the dominant
      bottleneck.

   back path:  The subpath from the dominant bottleneck to the receiver.

   return path:  The path taken by the ACKs from the data receiver to
      the data sender.

   cross traffic:  Other, potentially interfering, traffic competing for
      network resources (such as bandwidth and/or queue capacity).

3.3. Properties

   The following properties are determined by the complete path and
   application.  These are described in more detail in Section 5.1.

   Application Data Rate:  General term for the data rate as seen by
      the application above the transport layer in bytes per second.
      This is the payload data rate and explicitly excludes transport-
      level and lower-level headers (TCP/IP or other protocols),
      retransmissions, and other overhead that is not part of the total
      quantity of data delivered to the application.

   IP rate:  The actual number of IP-layer bytes delivered through a
      subpath, per unit time, including TCP and IP headers, retransmits,
      and other TCP/IP overhead.  This is the same as IP-type-P Link
      Usage in [RFC5136].
   IP capacity:  The maximum number of IP-layer bytes that can be
      transmitted through a subpath, per unit time, including TCP and IP
      headers, retransmits, and other TCP/IP overhead.  This is the same
      as IP-type-P Link Capacity in [RFC5136].

   bottleneck IP capacity:  The IP capacity of the dominant bottleneck
      in the forward path.  All throughput-maximizing protocols estimate
      this capacity by observing the IP rate delivered through the
      bottleneck.  Most protocols derive their self-clocks from the
      timing of this data.  See Section 4.1 and Appendix B for more
      details.

   implied bottleneck IP capacity:  The bottleneck IP capacity implied
      by the ACKs returning from the receiver.  It is determined by
      looking at how much application data the ACK stream at the sender
      reports as delivered to the data receiver per unit time at various
      timescales.  If the return path is thinning, batching, or
      otherwise altering the ACK timing, the implied bottleneck IP
      capacity over short timescales might be substantially larger than
      the bottleneck IP capacity averaged over a full RTT.  Since TCP
      derives its clock from the data delivered through the bottleneck,
      the front path must have sufficient buffering to absorb any data
      bursts at the dimensions (size and IP rate) implied by the ACK
      stream, which are potentially doubled during slowstart.  If the
      return path is not altering the ACK stream, then the implied
      bottleneck IP capacity will be the same as the bottleneck IP
      capacity.  See Section 4.1 and Appendix B for more details.

   sender interface rate:  The IP rate that corresponds to the IP
      capacity of the data sender's interface.  Due to sender efficiency
      algorithms, including technologies such as TCP segmentation
      offload (TSO), nearly all modern servers deliver data in bursts at
      full interface link rate.  Today, 1 or 10 Gb/s are typical.

   header_overhead:  The IP and TCP header sizes, which are the portion
      of each MTU not available for carrying application payload.
      Without loss of generality, this is assumed to be the size for
      returning acknowledgments (ACKs).  For TCP, the Maximum Segment
      Size (MSS) is the Target MTU minus the header_overhead.
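
   As an informal illustration of the implied bottleneck IP capacity
   defined above, the following sketch compares the delivery rate
   reported by an ACK stream over a short timescale with the same
   estimate averaged over roughly one RTT.  The ACK trace, window
   lengths, and helper function are hypothetical; this is not a
   measurement procedure defined by this document.

      # Hypothetical ACK trace at the data sender:
      # (arrival_time_seconds, cumulative_bytes_reported_delivered)
      acks = [(0.000, 0), (0.001, 30000), (0.002, 60000),
              (0.003, 90000), (0.100, 120000), (0.101, 150000),
              (0.102, 180000), (0.103, 210000)]

      def rate_over(acks, t_start, t_end):
          """Average reported delivery rate (bytes/s) between two times."""
          pts = [(t, b) for t, b in acks if t_start <= t <= t_end]
          if len(pts) < 2:
              return 0.0
          (t0, b0), (t1, b1) = pts[0], pts[-1]
          return (b1 - b0) / (t1 - t0) if t1 > t0 else 0.0

      # Short-timescale estimate: fastest rate over any ~2 ms window.
      short = max(rate_over(acks, t, t + 0.002) for t, _ in acks)
      # RTT-scale estimate: average over the whole (~one RTT) trace.
      rtt_avg = rate_over(acks, acks[0][0], acks[-1][0])

      print(f"short-burst implied capacity: {short * 8 / 1e6:.0f} Mb/s")
      print(f"one-RTT average rate:         {rtt_avg * 8 / 1e6:.1f} Mb/s")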

3.4. Basic Parameters

   Basic parameters common to models and subpath tests are defined
   here.  Formulas for target_window_size and target_run_length appear
   in Section 5.2.  Note that these are mixed between application
   transport performance (excludes headers) and IP performance
   (includes TCP headers and retransmissions as part of the IP
   payload).
   Network power:  The observed data rate divided by the observed RTT.
      Network power indicates how effectively a transport protocol is
      filling a network.

   Window [size]:  The total quantity of data carried by packets
      in-flight plus the data represented by ACKs circulating in the
      network is referred to as the window.  See Section 4.1.  Sometimes
      used with other qualifiers (congestion window (cwnd) or receiver
      window) to indicate which mechanism is controlling the window.

   pipe size:  A general term for the number of packets needed in flight
      (the window size) to exactly fill a network path or subpath.  It
      corresponds to the window size, which maximizes network power.  It
      is often used with additional qualifiers to specify which path,
      under what conditions, etc.

   target_window_size:  The average number of packets in flight (the
      window size) needed to meet the Target Data Rate for the specified
      Target RTT and Target MTU.  It implies the scale of the bursts
      that the network might experience.

   run length:  A general term for the observed, measured, or specified
      number of packets that are (expected to be) delivered between
      losses or ECN CE marks.  Nominally, it is one over the sum of the
      loss and ECN CE marking probabilities, if they are independently
      and identically distributed.

   target_run_length:  The target_run_length is an estimate of the
      minimum number of non-congestion-marked packets needed between
      losses or ECN CE marks to attain the target_data_rate over a path
      with the specified target_RTT and target_MTU, as
      computed by a mathematical model of TCP congestion control.  A
      reference calculation is shown in Section 5.2 and alternatives in
      Appendix A.

   reference target_run_length:  target_run_length computed precisely by
      the method in Section 5.2.  This is likely to be slightly more
      conservative than required by modern TCP implementations.
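
   The basic parameters above are related by simple arithmetic.  The
   reference formulas appear in Section 5.2, which is outside this part
   of the document; the sketch below therefore uses the conventional
   forms (the ceiling of rate * RTT / MSS for target_window_size and
   3 * target_window_size^2 for the reference target_run_length) as a
   reconstruction rather than a quotation, and all numeric inputs are
   hypothetical.

      import math

      # Hypothetical Target Transport Performance parameters.
      target_data_rate = 2.5e6 / 8   # 2.5 Mb/s, in bytes per second
      target_RTT = 0.100             # 100 ms
      target_MTU = 1500              # bytes
      header_overhead = 52           # assumed IP + TCP header bytes

      mss = target_MTU - header_overhead   # Maximum Segment Size

      # Average window (packets) needed to carry target_data_rate at
      # target_RTT (conventional form of the Section 5.2 calculation).
      target_window_size = math.ceil(target_data_rate * target_RTT / mss)

      # Reference run length from a Reno-style model (assumed form).
      target_run_length = 3 * target_window_size ** 2

      # run length is nominally 1 / (loss + ECN CE mark probability).
      max_loss_plus_ce_ratio = 1.0 / target_run_length

      # Network power: observed data rate divided by observed RTT.
      network_power = target_data_rate / target_RTT

      print(f"MSS                 = {mss} bytes")
      print(f"target_window_size  = {target_window_size} packets")
      print(f"target_run_length   = {target_run_length} packets")
      print(f"max loss + CE ratio = {max_loss_plus_ce_ratio:.2e}")
      print(f"network power       = {network_power:.3g} bytes/s per s of RTT")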

3.5. Ancillary Parameters

   The following ancillary parameters are used for some tests:

   derating:  Under some conditions, the standard models are too
      conservative.  The modeling framework permits some latitude in
      relaxing or "derating" some test parameters, as described in
      Section 5.3, in exchange for the more stringent TIDS validation
      procedures described in Section 10.  Models can be derated by
      including a multiplicative derating factor to make tests less
      stringent.

   subpath_IP_capacity:  The IP capacity of a specific subpath.

   test path:  A subpath of a complete path under test.

   test_path_RTT:  The RTT observed between two measurement points using
      packet sizes that are consistent with the transport protocol.
      This is generally MTU-sized packets on the forward path and
      packets with a size of header_overhead on the return path.

   test_path_pipe:  The pipe size of a test path.  Nominally, it is the
      test_path_RTT times the test path IP_capacity.

   test_window:  The smallest window sufficient to meet or exceed the
      target_rate when operating with a pure self-clock over a test
      path.  The test_window is typically calculated as follows (but see
      the discussion in Appendix B about the effects of channel
      scheduling on RTT):

      ceiling(target_data_rate * test_path_RTT / (target_MTU -
      header_overhead))

      On some test paths, the test_window may need to be adjusted
      slightly to compensate for the RTT being inflated by the devices
      that schedule packets.
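
   As a worked instance of the test_window calculation above (with
   hypothetical inputs), and ignoring the adjustment for scheduler-
   inflated RTT discussed in Appendix B:

      import math

      def test_window(target_data_rate, test_path_RTT, target_MTU,
                      header_overhead):
          """Smallest self-clocked window (packets) that meets or
          exceeds the target rate over the test path."""
          return math.ceil(target_data_rate * test_path_RTT /
                           (target_MTU - header_overhead))

      # Hypothetical: 2.5 Mb/s target over a 30 ms test path.
      print(test_window(target_data_rate=2.5e6 / 8, test_path_RTT=0.030,
                        target_MTU=1500, header_overhead=52))   # -> 7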

3.6. Temporal Patterns for Test Streams

   The terminology below is used to define temporal patterns for test
   streams.  These patterns are designed to mimic TCP behavior, as
   described in Section 4.1.

   packet headway:  Time interval between packets, specified from the
      start of one to the start of the next.  For example, if packets
      are sent with a 1 ms headway, there will be exactly 1000 packets
      per second.

   burst headway:  Time interval between bursts, specified from the
      start of the first packet of one burst to the start of the first
      packet of the next burst.  For example, if 4 packet bursts are
      sent with a 1 ms burst headway, there will be exactly 4000
      packets per second.

   paced single packets:  Individual packets sent at the specified rate
      or packet headway.
   paced bursts:  Bursts on a timer.  Specify any 3 of the following:
      average data rate, packet size, burst size (number of packets),
      and burst headway (burst start to start).  By default, the bursts
      are assumed to occur at full sender interface rate, such that the
      packet headway within each burst is the minimum supported by the
      sender's interface.  Under some conditions, it is useful to
      explicitly specify the packet headway within each burst.

   slowstart rate:  Paced bursts of four packets each at an average data
      rate equal to twice the implied bottleneck IP capacity (but not
      more than the sender interface rate).  This mimics TCP slowstart.
      This is a two-level burst pattern described in more detail in
      Section 6.1.  If the implied bottleneck IP capacity is more than
      half of the sender interface rate, the slowstart rate becomes the
      sender interface rate.

   slowstart burst:  A specified number of packets in a two-level burst
      pattern that resembles slowstart.  This mimics one round of TCP
      slowstart.

   repeated slowstart bursts:  Slowstart bursts repeated once per
      target_RTT.  For TCP, each burst would be twice as large as the
      prior burst, and the sequence would end at the first ECN CE mark
      or lost packet.  For measurement, all slowstart bursts would be
      the same size (nominally, target_window_size but other sizes might
      be specified), and the ECN CE marks and lost packets are counted.
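
   To make the headway arithmetic above concrete, the following sketch
   builds a transmit schedule for the slowstart rate pattern: paced
   4-packet bursts whose burst headway is chosen so that the average
   data rate is twice the implied bottleneck IP capacity, capped at the
   sender interface rate.  All rates and sizes are hypothetical, and a
   real measurement sender would pace with its own clocking mechanism
   rather than a precomputed list.

      packet_size = 1500                      # bytes (assumed Target MTU)
      implied_bottleneck_capacity = 50e6 / 8  # 50 Mb/s, in bytes/s
      sender_interface_rate = 1e9 / 8         # 1 Gb/s, in bytes/s
      burst_size = 4                          # packets per burst

      # Slowstart rate: twice the implied bottleneck IP capacity, but
      # never more than the sender interface rate.
      slowstart_rate = min(2 * implied_bottleneck_capacity,
                           sender_interface_rate)

      # Burst headway (start to start) that yields the slowstart rate;
      # packets within a burst go back to back at the interface rate.
      burst_headway = burst_size * packet_size / slowstart_rate
      packet_headway_in_burst = packet_size / sender_interface_rate

      schedule = []                           # (send_time_s, packet_no)
      for burst in range(3):                  # first three bursts only
          t0 = burst * burst_headway
          for k in range(burst_size):
              schedule.append((t0 + k * packet_headway_in_burst,
                               burst * burst_size + k))

      for t, n in schedule:
          print(f"packet {n:2d} at t = {t * 1e3:7.4f} ms")
      print(f"average rate = "
            f"{burst_size * packet_size * 8 / burst_headway / 1e6:.0f} Mb/s")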

3.7. Tests

   The tests described in this document can be grouped according to
   their applicability.

   Capacity tests:  Capacity tests determine if a network subpath has
      sufficient capacity to deliver the Target Transport Performance.
      As long as the test stream is within the proper envelope for the
      Target Transport Performance, the average packet losses or ECN CE
      marks must be below the statistical criteria computed by the
      model.  As such, capacity tests reflect parameters that can
      transition from passing to failing as a consequence of cross
      traffic, additional presented load, or the actions of other
      network users.  By definition, capacity tests also consume
      significant network resources (data capacity and/or queue buffer
      space), and the test schedules must be balanced by their cost.

   Monitoring tests:  Monitoring tests are designed to capture the most
      important aspects of a capacity test without presenting excessive
      ongoing load themselves.  As such, they may miss some details of
      the network's performance but can serve as a useful reduced-cost
      proxy for a capacity test, for example, to support continuous
      production network monitoring.

   Engineering tests:  Engineering tests evaluate how network algorithms
      (such as Active Queue Management (AQM) and channel allocation)
      interact with TCP-style self-clocked protocols and adaptive
      congestion control based on packet loss and ECN CE marks.  These
      tests are likely to have complicated interactions with cross
      traffic and, under some conditions, can be inversely sensitive to
      load.  For example, a test to verify that an AQM algorithm causes
      ECN CE marks or packet drops early enough to limit queue occupancy
      may experience a false pass result in the presence of cross
      traffic.  It is important that engineering tests be performed
      under a wide range of conditions, including both in situ and bench
      testing, and over a wide variety of load conditions.  Ongoing
      monitoring is less likely to be useful for engineering tests,
      although sparse in situ testing might be appropriate.


