
RFC 4782


Quick-Start for TCP and IP

Part 3 of 4, p. 37 to 58


8.  Using Quick-Start

8.1.  Determining the Rate to Request

   As discussed in [SAF06], the data sender does not necessarily have
   information about the size of the data transfer at connection
   initiation; for example, in request-response protocols such as HTTP,
   the server doesn't know the size or name of the requested object
   during connection initiation.  [SAF06] explores some of the
   performance implications of overly large Quick-Start Requests, and
   discusses heuristics that end-nodes could use to size their requests
   appropriately.  For example, the sender might have information about
   the bandwidth of the last-mile hop, the size of the local socket
   buffer, or of the TCP receive window, and could use this information
   in determining the rate to request.  Web servers that mostly have
   small objects to transfer might decide not to use Quick-Start at all,
   since Quick-Start would be of little benefit to them.

   Quick-Start will be more effective if Quick-Start Requests are not
   larger than necessary; every Quick-Start Request that is approved but
   not used (or not fully used) takes away from the bandwidth pool
   available for granting successive Quick-Start Requests.
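   As an illustration only (this document specifies no code), the sizing
   heuristics above might be sketched as follows.  All parameter names,
   the 1460-byte segment size, and the five-packet cutoff are assumptions
   for the example, not part of the specification:

```python
def rate_to_request(last_mile_bps, sndbuf_bytes, rwnd_bytes, rtt_s,
                    transfer_bytes=None):
    """Cap a Quick-Start Rate Request by everything the sender knows.

    Requesting more than any of these bounds cannot help, and an
    approved-but-unused request takes away from the routers' pool of
    Quick-Start bandwidth.  All names here are illustrative.
    """
    # Transfers of only a few packets gain little from Quick-Start.
    if transfer_bytes is not None and transfer_bytes <= 5 * 1460:
        return 0                      # 0 = send no Quick-Start Request
    # We cannot sustain more than one buffer (or window) per RTT.
    bounds = [last_mile_bps,
              8 * sndbuf_bytes / rtt_s,       # local socket buffer
              8 * rwnd_bytes / rtt_s]         # TCP receive window
    if transfer_bytes is not None:
        # No point asking for more than "whole transfer in one RTT".
        bounds.append(8 * transfer_bytes / rtt_s)
    return min(bounds)
```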

8.2.  Deciding the Permitted Rate Request at a Router

   In this section, we briefly outline how a router might decide whether
   or not to approve a Quick-Start Request.  The router should ask the
   following questions:

   * Has the router's output link been underutilized for some time
     (e.g., several seconds)?

   * Would the output link remain underutilized if the arrival rate were
     to increase by the aggregate rate requests that the router has
     approved over the last fraction of a second?

   In order to answer the last question, the router must have some
   knowledge of the available bandwidth on the output link and of the
   Quick-Start bandwidth that could arrive due to recently approved
   Quick-Start Requests.  In this way, if an underutilized router
   experiences a flood of Quick-Start Requests, the router can begin to
   deny Quick-Start Requests while the output link is still
   underutilized.

   A simple way for the router to keep track of the potential bandwidth
   from recently approved requests is to maintain two counters: one for
   the total aggregate Rate Requests that have been approved in the
   current time interval [T1, T2], and one for the total aggregate Rate
   Requests approved over a previous time interval [T0, T1].  However,
   this document doesn't specify router algorithms for approving Quick-
   Start Requests, or make requirements for the appropriate time
   intervals for remembering the aggregate approved Quick-Start
   bandwidth.  A possible router algorithm is given in Appendix E, and
   more discussion of these issues is available in [SAF06].

   * If the router's output link has been underutilized and the
     aggregate of the Quick-Start Request Rate options granted is low
     enough to prevent a near-term bandwidth shortage, then the router
     could approve the Quick-Start Request.
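   The two-counter bookkeeping described above might be sketched as
   follows.  This is only an illustration of the idea, not normative
   behavior; the utilization estimator, interval length, and headroom
   factor are assumptions for the example (Appendix E gives a fuller
   algorithm):

```python
import time

class QuickStartApprover:
    """Illustrative two-counter scheme from Section 8.2.

    recent_approved[0] covers the current interval [T1, T2];
    recent_approved[1] covers the previous interval [T0, T1].
    """
    def __init__(self, link_bps, util_estimator, interval_s=0.15,
                 headroom=0.85):
        self.link_bps = link_bps
        self.util = util_estimator     # returns recent utilization, bps
        self.interval_s = interval_s
        self.headroom = headroom       # never hand out the whole link
        self.recent_approved = [0.0, 0.0]
        self.interval_start = time.monotonic()

    def _roll_interval(self):
        now = time.monotonic()
        while now - self.interval_start >= self.interval_s:
            self.recent_approved = [0.0, self.recent_approved[0]]
            self.interval_start += self.interval_s

    def consider(self, rate_request_bps):
        self._roll_interval()
        # Bandwidth that could still arrive from recent approvals.
        pending = sum(self.recent_approved)
        free = self.headroom * self.link_bps - self.util() - pending
        if rate_request_bps <= free:
            self.recent_approved[0] += rate_request_bps
            return rate_request_bps    # approve in full
        return 0                       # deny (a router could also reduce)
```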

   Section 10.2 discusses some of the implementation issues in
   processing Quick-Start Requests at routers.  [SAF06] discusses the
   range of possible Quick-Start algorithms at the router for deciding
   whether to approve a Quick-Start Request.  In order to explore the
   limits of the possible functionality at routers, [SAF06] also
   discusses Extreme Quick-Start mechanisms at routers, where the router
   would keep per-flow state concerning approved Quick-Start requests.

9.  Evaluation of Quick-Start

9.1.  Benefits of Quick-Start

   The main benefit of Quick-Start is the faster start-up for the
   transport connection itself.  For a small TCP transfer of one to five
   packets, Quick-Start is probably of very little benefit;  at best, it
   might shorten the connection lifetime from three to two round-trip
   times (including the round-trip time for connection establishment).
   Similarly, for a very large transfer, where the slow-start phase
   would have been only a small fraction of the connection lifetime,
   Quick-Start would be of limited benefit.  Quick-Start would not
   significantly shorten the connection lifetime, but it might eliminate
   or at least shorten the start-up phase.  However, for moderate-sized
   connections in a well-provisioned environment, Quick-Start could
   possibly allow the entire transfer of M packets to be completed in
   one round-trip time (after the initial round-trip time for the SYN
   exchange), instead of the log_2(M)-2 round-trip times that it would
   normally take for the data transfer in an uncongested environment
   (assuming an initial window of four packets).
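   The round-trip arithmetic above can be checked with a small idealized
   model (an illustration only, assuming a window that doubles every
   round-trip time and no losses):

```python
def slow_start_rtts(m_packets, init_window=4):
    """Idealized count of round trips needed to send m_packets during
    slow-start: the window doubles every RTT and nothing is lost."""
    sent, window, rtts = 0, init_window, 0
    while sent < m_packets:
        sent += window
        window *= 2
        rtts += 1
    return rtts

# e.g. slow_start_rtts(1020) == 8, close to log_2(1020) - 2; an approved
# Quick-Start Request might instead cover the whole transfer in one RTT.
```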

9.2.  Costs of Quick-Start

   This section discusses the costs of Quick-Start for the connection
   and for the routers along the path.

   The cost of having a Quick-Start Request packet dropped:
   Measurement studies cited earlier [MAF04] suggest that on a wide
   range of paths in the Internet, TCP SYN packets containing unknown IP
   options will be dropped.  Thus, for the sender one risk in using
   Quick-Start is that the packet carrying the Quick-Start Request could
   be dropped in the network.  It is particularly costly to the sender
   when a TCP SYN packet is dropped, because in this case the sender
   should wait for an RTO of three seconds before re-sending the SYN
   packet, as specified in Section 4.7.2.

   The cost of having a Quick-Start data packet dropped:
   Another risk for the sender in using Quick-Start lies in the
   possibility of suffering from congestion-related losses of the
   Quick-Start data packets.  This should be an unlikely situation
   because routers are expected to approve Quick-Start Requests only
   when they are significantly underutilized.  However, a transient
   increase in cross-traffic in one of the routers, a sudden decrease in
   available bandwidth on one of the links, or congestion at a non-IP
   queue could result in packet losses even when the Quick-Start Request
   was approved by all of the routers along the path.  If a Quick-Start
   packet is dropped, then the sender reverts to the congestion control
   mechanisms it would have used if the Quick-Start Request had not been
   approved, so the performance cost to the connection of having a
   Quick-Start packet dropped is small, compared to the performance
   without Quick-Start.  (On the other hand, the performance difference
   between Quick-Start with a Quick-Start packet dropped and Quick-
   Start with no Quick-Start packet dropped can be considerable.)

   Added complexity at routers:
   The main cost of Quick-Start at routers concerns the costs of added
   complexity.  The added complexity at the end-points is moderate, and
   might easily be outweighed by the benefit of Quick-Start to the end
   hosts.  The added complexity at the routers is also somewhat
   moderate; it involves estimating the unused bandwidth on the output
   link over the last several seconds, processing the Quick-Start
   request, and keeping a counter of the aggregate Quick-Start rate
   approved over the last fraction of a second.  However, this added
   complexity at routers adds to the development cycle, and could
   prevent the addition of other competing functionality to routers.
   Thus, careful thought would have to be given to the addition of
   Quick-Start to IP.

   The slow path in routers:
   Another drawback of Quick-Start is that packets containing the
   Quick-Start Request message might not take the fast path in routers,
   particularly in the beginning of Quick-Start's deployment in the
   Internet.  This would mean some extra delay for the end hosts, and
   extra processing burden for the routers.  However, as discussed in
   Sections 4.1 and 4.7, not all packets would carry the Quick-Start
   option.  In addition, for the underutilized links where Quick-Start
   Requests could actually be approved, or in typical environments where
   most of the packets belong to large flows, the burden of the Quick-
   Start Option on routers would be considerably reduced.  Nevertheless,
   it is still conceivable, in the worst case, that many packets would
   carry Quick-Start Requests; this could slow down the processing of
   Quick-Start packets in routers considerably.  As discussed in Section
   9.6, routers can easily protect against this by enforcing a limit on
   the rate at which Quick-Start Requests will be considered.  [RW03]
   and [RW04] contain measurements of the impact of IP Option Processing
   on packet round-trip times.

   Multiple paths:
   One limitation of Quick-Start is that it presumes that the data
   packets of a connection will follow the same path as the Quick-Start
   request packet.  If this is not the case, then the connection could
   be sending the Quick-Start packets, at the approved rate, along a
   path that was already congested, or that became congested as a result
   of this connection.  Thus, Quick-Start could give poor performance
   when there is a routing change immediately after the Quick-Start
   Request is approved, and the Quick-Start data packets follow a
   different path from that of the original Quick-Start Request.  This
   is, however, similar to what would happen for a connection with
   sufficient data, if the connection's path was changed in the middle
   of the connection, which had already established the allowed initial
   rate.

   As specified in Section 3.3, a router that uses multipath routing for
   packets within a single connection must not approve a Quick-Start
   Request.  Quick-Start would not perform robustly in an environment
   with multipath routing, where different packets in a connection
   routinely follow different paths.  In such an environment, the
   Quick-Start Request and some fraction of the packets in the
   connection might take an underutilized path, while the rest of the
   packets take an alternate, congested path.

   Non-IP queues:
   A problem of any mechanism for feedback from routers at the IP level
   is that there can be queues and bottlenecks in the end-to-end path
   that are not in IP-level routers.  As an example, these include
   queues in layer-two Ethernet or ATM networks.  One possibility would
   be that an IP-level router adjacent to such a non-IP queue or
   bottleneck would be configured to reject Quick-Start Requests if that
   was appropriate.  One would hope that, in general, IP networks are
   configured so that non-IP queues between IP routers do not end up
   being the congested bottlenecks.

9.3.  Quick-Start with QoS-Enabled Traffic

   The discussion in this document has largely been of Quick-Start with
   default, best-effort traffic.  However, Quick-Start could also be
   used by traffic using some form of differentiated services, and
   routers could take the traffic class into account when deciding
   whether or not to grant the Quick-Start Request.  We don't address
   this context further in this document, since it is orthogonal to the
   specification of Quick-Start.

   Routers are also free to take into account their own priority
   classifications in processing Quick-Start Requests.

9.4.  Protection against Misbehaving Nodes

   In this section, we discuss the protection against senders,
   receivers, or colluding routers or middleboxes lying about the
   Quick-Start Request.

9.4.1.  Misbehaving Senders

   A transport sender could try to transmit data at a higher rate than
   that approved in the Quick-Start Request.  The network could use a
   traffic policer to protect against misbehaving senders that exceed
   the approved rate, for example, by dropping packets that exceed the
   allowed transmission rate.  The required Report of Approved Rate
   allows traffic policers to check that the Report of Approved Rate
   does not exceed the Rate Request actually approved at that point in
   the network in the previous Quick-Start Request from that connection.
   The required Approved Rate report also allows traffic policers to
   check that the sender's sending rate does not exceed the rate in the
   Report of Approved Rate.
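   As one illustration of how such a policer might enforce the rate in a
   Report of Approved Rate, consider a simple token bucket.  The bucket
   depth and all names are assumptions for the example; this document
   does not mandate any particular policing algorithm:

```python
class QuickStartPolicer:
    """Illustrative token-bucket check of a sender's traffic against
    the rate carried in its Report of Approved Rate."""
    def __init__(self, approved_bps, bucket_s=0.2):
        self.rate = approved_bps / 8.0        # bytes per second
        self.depth = self.rate * bucket_s     # burst tolerance, bytes
        self.tokens = self.depth
        self.last = 0.0

    def packet(self, now, size_bytes):
        """Return True if the packet conforms to the approved rate."""
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True
        return False                          # exceeds the approved rate
```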

   If a router or receiver receives an Approved Rate report that is
   larger than the Rate Request in the Quick-Start Request approved for
   that sender for that connection in the previous round-trip time, then
   the router or receiver could deny future Quick-Start Requests from
   that sender, e.g., by deleting the Quick-Start Request from future
   packets from that sender.  We note that routers are not required to
   use Approved Rate reports to check if senders are cheating; this is
   at the discretion of the router.

   If a router sees a Report of Approved Rate, and did not see an
   earlier Quick-Start Request, then either the sender could be
   cheating, or the connection's path could have changed since the
   Quick-Start Request was sent.  In either case, the router could
   decide to deny future Quick-Start Requests for this connection.  In
   particular, it is reasonable for the router to deny a Quick-Start
   request if either the sender is cheating, or if the connection path
   suffers from path changes or multipathing.

   If a router approved a Quick-Start Request, but does not see a
   subsequent Approved Rate report, then there are several
   possibilities: (1) the request was denied and/or dropped downstream,
   and the sender did not send a Report of Approved Rate; (2) the
   request was approved, but the sender did not send a Report of
   Approved Rate; (3) the Approved Rate report was dropped in the
   network; or (4) the Approved Rate report took a different path from
   the Quick-Start Request.  In any of these cases, the router would be
   justified in denying future Quick-Start Requests for this connection.

   In any of the cases mentioned in the three paragraphs above (i.e., an
   Approved Rate report that is larger than the Rate Request in the
   earlier Quick-Start Request, a Report of Approved Rate with no
   preceding Rate Request, or a Rate Request with no Report of Approved
   Rate), a traffic policer may assume that Quick-Start is not being
   used appropriately, or is being used in an unsuitable environment
   (e.g., with multiple paths), and take some corresponding action.

   What are the incentives for a sender to cheat by over-sending after a
   Quick-Start Request?  Assuming that the sender's interests are
   measured by a performance metric such as the completion time for its
   connections, sometimes it might be in the sender's interests to
   cheat, and sometimes it might not;  in some cases, it could be
   difficult for the sender to judge whether it would be in its
   interests to cheat.  The incentives for a sender to cheat by over-
   sending after a Quick-Start Request are not that different from the
   incentives for a sender to cheat by over-sending even in the absence
   of Quick-Start, with one difference: the use of Quick-Start could
   help a sender evade policing actions from policers in the network.
   The Report of Approved Rate is designed to address this and to make
   it harder for senders to use Quick-Start to `cover' their cheating.

9.4.2.  Receivers Lying about Whether the Request was Approved

   One form of misbehavior would be for the receiver to lie to the
   sender about whether the Quick-Start Request was approved, by falsely
   reporting the TTL Diff and QS Nonce.  If a router that understands
   the Quick-Start Request denies the request by deleting the request or
   by zeroing the QS TTL and QS Nonce, then the receiver can "lie" about
   whether the request was approved only by successfully guessing the
   value of the TTL Diff and QS Nonce to report.  The chance of the
   receiver successfully guessing the correct value for the TTL Diff is
   1/256, and the chance of the receiver successfully guessing the QS
   nonce for a reported rate request of K is 1/2^(2K).
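   These odds can be combined in a short illustrative calculation,
   assuming (as in the "rightmost 2Y bits" discussion of Section 9.4.3)
   that a rate request of K ties 2K nonce bits to the approval:

```python
from fractions import Fraction

def blind_approval_guess_probability(rate_request_k):
    """Chance that a receiver fabricates an approval by guessing both
    the 8-bit TTL Diff and the 2*K nonce bits tied to a rate request
    of K (an assumption based on the Section 9.4.3 nonce check)."""
    return Fraction(1, 256) * Fraction(1, 2 ** (2 * rate_request_k))
```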

   However, if the Quick-Start Request is denied only by a non-Quick-
   Start-capable router, or by a router that is unable to zero the QS
   TTL and QS Nonce fields, then the receiver could lie about whether
   the Quick-Start Requests were approved by modifying the QS TTL in
   successive requests received from the same host.  In particular, if
   the sender does not act on a Quick-Start Request, then the receiver
   could decrement the QS TTL by one in the next request received from
   that host before calculating the TTL Diff, and decrement the QS TTL
   by two in the following received request, until the sender acts on
   one of the Quick-Start Requests.

   Unfortunately, if a router doesn't understand Quick-Start, then it is
   not possible for that router to take an active step such as zeroing
   the QS TTL and QS Nonce to deny a request.  As a result, the QS TTL
   is not a fail-safe mechanism for preventing lying by receivers in the
   case of non-Quick-Start-capable routers.

   What would be the incentives for a receiver to cheat in reporting on
   a Quick-Start Request, in the absence of a mechanism such as the QS
   Nonce?  In some cases, cheating would be of clear benefit to the
   receiver, resulting in a faster completion time for the transfer.  In
   other cases, where cheating would result in Quick-Start packets being
   dropped in the network, cheating might or might not improve the
   receiver's performance metric, depending on the details of that
   particular scenario.

9.4.3.  Receivers Lying about the Approved Rate

   A second form of receiver misbehavior would be for the receiver to
   lie to the sender about the Rate Request for an approved Quick-Start
   Request, by increasing the value of the Rate Request field.  However,
   the receiver doesn't necessarily know the Rate Request in the
   original Quick-Start Request sent by the sender, and a higher Rate
   Request reported by the receiver will only be considered valid by the
   sender if it is no higher than the Rate Request originally requested
   by the sender.  For example, if the sender sends a Quick-Start
   Request with a Rate Request of X, and the receiver reports receiving
   a Quick-Start Request with a Rate Request of Y > X, then the sender
   knows that either some router along the path malfunctioned
   (increasing the Rate Request inappropriately), or the receiver is
   lying about the Rate Request in the received packet.

   If the sender sends a Quick-Start Request with a Rate Request of Z,
   the receiver receives the Quick-Start Request with an approved Rate
   Request of X, and reports a Rate Request of Y, for X < Y <= Z, then
   the receiver only succeeds in lying to the sender about the approved
   rate if the receiver successfully reports the rightmost 2Y bits in
   the QS nonce.

   If senders often use a configured default value for the Rate Request,
   then receivers would often be able to guess the original Rate
   Request, and this would make it easier for the receiver to lie about
   the value of the Rate Request field.  Similarly, if the receiver
   often communicates with a particular sender, and the sender always
   uses the same Rate Request for that receiver, then the receiver might
   over time be able to infer the original Rate Request used by the

   There are several possible additional forms of protection against
   receivers lying about the value of the Rate Request.  One possible
   additional protection would be for a router that decreases a Rate
   Request in a Quick-Start Request to report the decrease directly to
   the sender.  However, this could lead to many reports back to the
   sender for a single request, and could also be used in address-
   spoofing attacks.

   A second limited form of protection would be for senders to use some
   degree of randomization in the requested Rate Request, so that it is
   difficult for receivers to guess the original value for the Rate
   Request.  However, this is difficult because there is a fairly coarse
   granularity in the set of rate requests available to the sender, and
   randomizing the initial request only offers limited protection, in
   any case.

9.4.4.  Collusion between Misbehaving Routers

   In addition to protecting against misbehaving receivers, it is
   necessary to protect against misbehaving routers.  Consider collusion
   between an ingress router and an egress router belonging to the same
   intranet.  The ingress router could decrement the Rate Request at the
   ingress, with the egress router increasing it again at the egress.
   The routers between the ingress and egress that approved the
   decremented rate request might not have been willing to approve the
   larger, original request.

   Another form of collusion would be for the ingress router to inform
   the egress router out-of-band of the TTL Diff and QS Nonce for the
   request packet at the ingress.  This would enable the egress router
   to modify the QS TTL and QS Nonce so that it appeared that all the
   routers along the path had approved the request.  There does not
   appear to be any protection against a colluding ingress and egress
   router.  Even if an intermediate router had deleted the Quick-Start
   Option from the packet, the ingress router could have sent the
   Quick-Start Option to the egress router out-of-band, with the egress
   router inserting the Quick-Start Option, with a modified QS TTL
   field, back in the packet.

   However, compared to ECN, there is somewhat less of an incentive for
   cooperating ingress and egress routers to collude to falsely modify
   the Quick-Start Request so that it appears to have been approved by
   all the routers along the path.  With ECN, a colluding ingress router
   could falsely mark a packet as ECN-capable, with the colluding egress
   router returning the ECN field in the IP header to its original non-
   ECN-capable codepoint, and congested routers along the path could
   have been fooled into not dropping that packet.  This collusion would
   give an unfair competitive advantage to the traffic protected by the
   colluding ingress and egress routers.

   In contrast, with Quick-Start, the collusion of the ingress and
   egress routers to make it falsely appear that a Quick-Start Request
   was approved sometimes would give an advantage to the traffic covered
   by that collusion, and sometimes would give a disadvantage, depending
   on the details of the scenario.  If some router along the path really
   does not have enough available bandwidth to approve the Quick-Start
   Request, then Quick-Start packets sent as a result of the falsely
   approved request could be dropped in the network, to the possible
   disadvantage of the connection.  Thus, while the ingress and egress
   routers could collude to prevent intermediate routers from denying a
   Quick-Start Request, it would not always be to the connection's
   advantage for this to happen.  One defense against such a collusion
   would be for some router between the ingress and egress nodes that
   denied the request to monitor connection performance, penalizing
   connections that seem to be using Quick-Start after a Quick-Start
   Request was denied, or that are reporting an Approved Rate higher
   than that actually approved by that router.

   If the congested router is ECN-capable, and the colluding ingress and
   egress routers are lying about ECN-capability as well as about
   Quick-Start, then the result could be that the Quick-Start Request
   falsely appears to the sender to have been approved, and the Quick-
   Start packets falsely appear to the congested router to be ECN-
   capable.  In this case, the colluding routers might succeed in giving
   a competitive advantage to the traffic protected by their collusion
   (if no intermediate router is monitoring to catch such misbehavior).

9.5.  Misbehaving Middleboxes and the IP TTL

   One possible difficulty is that of traffic normalizers [HKP01], or
   other middleboxes along that path, that rewrite IP TTLs in order to
   foil other kinds of attacks in the network.  If such a traffic
   normalizer rewrote the IP TTL, but did not adjust the Quick-Start TTL
   by the same amount, then the sender's mechanism for determining if
   the request was approved by all routers along the path would no
   longer be reliable.  Rewriting the IP TTL could result in false
   positives (with the sender incorrectly believing that the Quick-
   Start Request was approved) as well as false negatives (with the
   sender incorrectly believing that the Quick-Start Request was
   denied).

9.6.  Attacks on Quick-Start

   As discussed in [SAF06], Quick-Start is vulnerable to two kinds of
   attacks: (1) attacks to increase the routers' processing and state
   load and (2) attacks with bogus Quick-Start Requests to temporarily
   tie up available Quick-Start bandwidth, preventing routers from
   approving Quick-Start Requests from other connections.  Routers can
   protect against the first kind of attack by applying a simple limit
   on the rate at which Quick-Start Requests will be considered by the
   router.

   The second kind of attack, to tie up the available Quick-Start
   bandwidth, is more difficult to defend against.  As discussed in
   [SAF06], Quick-Start Requests that are not going to be used, either
   because they are from malicious attackers or because they are denied
   by routers downstream, can result in short-term `wasting' of
   potential Quick-Start bandwidth, resulting in routers denying
   subsequent Quick-Start Requests that, if approved, would in fact have
   been used.

   We note that the likelihood of malicious attacks would be reduced
   significantly when Quick-Start was deployed in a controlled
   environment such as an intranet, where there was some form of
   centralized control over the users in the system.  We also note that
   this form of attack could potentially make Quick-Start unusable, but
   it would not do any further damage; in the worst case, the network
   would function as a network without Quick-Start.

   [SAF06] considers the potential of Extreme Quick-Start algorithms at
   routers, which keep per-flow state for Quick-Start connections, in
   protecting the availability of Quick-Start bandwidth in the face of
   frequent, overly large Quick-Start Requests.

9.7.  Simulations with Quick-Start

   Quick-Start was added to the NS simulator [SH02] by Srikanth
   Sundarrajan, and additional functionality was added by Pasi
   Sarolahti.  The validation test is at `test-all-quickstart' in the
   `tcl/test' directory in NS.  The initial simulation studies from
   [SH02] show a significant performance improvement using Quick-Start
   for moderate-sized flows (between 4 KB and 128 KB) in underutilized
   environments.  These studies are of file transfers, with the
   improvement measured as the relative increase in the overall
   throughput for the file transfer.  The study shows that potential
   improvement from Quick-Start is proportional to the delay-bandwidth
   product of the path.

   The Quick-Start simulations in [SAF06] explore the following: the
   potential benefit of Quick-Start for the connection, the relative
   benefits of different router-based algorithms for approving Quick-
   Start Requests, and the effectiveness of Quick-Start as a function of
   the senders' algorithms for choosing the size of the rate request.

10.  Implementation and Deployment Issues

   This section discusses some of the implementation issues with Quick-
   Start.  This section also discusses some of the key deployment
   issues, such as the chicken-and-egg deployment problems of mechanisms
   that have to be deployed in both routers and end nodes in order to
   work, and the problems posed by the wide deployment of middleboxes
   today that block the use of known or unknown IP Options.

10.1.  Implementation Issues for Sending Quick-Start Requests

   Section 4.7 discusses some of the issues with deciding the initial
   sending rate to request.  Quick-Start raises additional issues about
   the communication between the transport protocol and the application,
   and about the use of past history with Quick-Start in the end node.

   One possibility is that a protocol implementation could provide an
   API for applications to indicate when they want to request Quick-
   Start, and what rate they would like to request.  In the conventional
   socket API, this could be a socket option that is set before a
   connection is established.  Some applications, such as those that use
   TCP for bulk transfers, do not have interest in the transmission
   rate, but they might know the amount of data that can be sent
   immediately.  Based on this, the sender implementation could decide
   whether Quick-Start would be useful, and what rate should be
   requested.

   We note that when Quick-Start is used, the TCP sender is required to
   save the QS Nonce and the TTL Diff when the Quick-Start Request is
   sent, and to implement an additional timer for the paced transmission
   of Quick-Start packets.
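   The paced transmission mentioned above amounts to spreading the
   Quick-Start window over the round trip at the approved rate.  A
   minimal sketch follows; the packet size and all parameter names are
   assumptions for the example:

```python
def quickstart_pacing_schedule(approved_bps, mss_bytes, window_packets):
    """Illustrative pacing of Quick-Start packets: rather than bursting
    the whole Quick-Start window, space the packets so the sending rate
    matches the approved rate (the extra timer at the TCP sender).
    Returns the send-time offset, in seconds, for each packet."""
    interval = (mss_bytes * 8.0) / approved_bps   # seconds per packet
    return [i * interval for i in range(window_packets)]

# e.g. 1500-byte packets at an approved 1.2 Mbps are sent 10 ms apart.
```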

10.2.  Implementation Issues for Processing Quick-Start Requests

   A router or other network host must be able to determine the
   approximate bandwidth of its outbound network interfaces in order to
   process incoming Quick-Start rate requests, including those that
   originate from the host itself.  One possibility would be for hosts
   to rely on configuration information to determine link bandwidths;
   this has the drawback of not being robust to errors in configuration.
   Another possibility would be for network device drivers to infer the
   bandwidth for the interface and to communicate this to the IP layer.

   Particular issues will arise for wireless links with variable
   bandwidth, where decisions will have to be made about how frequently
   the host gets updates of the changing bandwidth.  It seems
   appropriate that Quick-Start Requests would be handled particularly
   conservatively for links with variable bandwidth, to avoid cases
   where Quick-Start Requests are approved, the link bandwidth is
   reduced, and the data packets that are sent end up being dropped.

   Difficult issues also arise for paths with multi-access links (e.g.,
   Ethernet).  Routers or end-nodes with multi-access links should be
   particularly conservative in granting Quick-Start Requests.  In
   particular, for some multi-access links, there may be no procedure
   for an attached node to use to determine whether all parts of the
   multi-access link have been underutilized in the recent past.

10.3.  Possible Deployment Scenarios

   Because of possible problems discussed above concerning using Quick-
   Start over some network paths and the security issues discussed in
   Section 11, the most realistic initial deployment of Quick-Start
   would most likely take place in intranets and other controlled
   environments.  Quick-Start is most useful on high bandwidth-delay
   paths that are significantly underutilized.  The primary initial
   users of Quick-Start would likely be in organizations that provide
   network services to their users and also have control over a large
   portion of the network path.

Top      Up      ToC       Page 49 
   Quick-Start is not currently intended for ubiquitous deployment in
   the global Internet.  In particular, Quick-Start should not be
   enabled by default in end-nodes or in routers; instead, when Quick-
   Start is used, it should be explicitly enabled by users or system
   administrators.

   Below are a few examples of networking environments where Quick-
   Start would potentially be useful.  These are the environments that
   might consider an initial deployment of Quick-Start in the routers
   and end-nodes, where the incentives for routers to deploy Quick-
   Start might be the most clear.

   * Centrally administered organizational intranets: These intranets
     often have large network capacity, with networks that are
     underutilized for much of the time [PABL+05].  Such intranets might
     also include high-bandwidth and high-delay paths to remote sites.
     In such an environment, Quick-Start would be of benefit to users,
     and there would be a clear incentive for the deployment of Quick-
     Start in routers.  For example, Quick-Start could be quite useful
     in high-bandwidth networks used for scientific computing.

   * Wireless networks: Quick-Start could also be useful in high-delay
     environments of Cellular Wide-Area Wireless Networks, such as
     GPRS [BW97] and its enhancements and next generations.  For
     example, GPRS EDGE (Enhanced Data rates for GSM Evolution) is
     expected to
     provide wireless bandwidth of up to 384 Kbps (roughly 32 1500-byte
     packets per second) while the GPRS round-trip times range typically
     from a few hundred milliseconds to over a second, excluding any
     possible queueing delays in the network [GPAR02].  In addition,
     these networks sometimes have variable additional delays due to
     resource allocation that could be avoided by keeping the connection
     path constantly utilized, starting from initial slow-start.  Thus,
     Quick-Start could be of significant benefit to users in these
     wireless environments.

   * Paths over satellite links: Geostationary Orbit (GEO) satellite
     links have one-way propagation delays on the order of 250 ms while
     the bandwidth can be measured in megabits per second [RFC2488].
     Because of the considerable bandwidth-delay product on the link,
     TCP's slow-start is a major performance limitation in the beginning
     of the connection.  A large initial congestion window would be
     useful to users of such satellite links.

   * Single-hop paths: Quick-Start should work well over point-to-point
     single-hop paths, e.g., from a host to an adjacent server.  Quick-
     Start would work over a single-hop IP path consisting of a multi-
     access link only if the host was able to determine if the path to
     the next IP hop has been significantly underutilized over the

     recent past.  If the multi-access link includes a layer-2 switch,
     then the attached host cannot necessarily determine the status of
     the other links in the layer-2 network.
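   The figures quoted in the bullets above can be checked with a few
   lines of arithmetic.  The 10 Mbps GEO link rate below is an assumed
   example for illustration; the text says only that satellite
   bandwidth can be measured in megabits per second.

```python
# Check the figures quoted above: packets per second at a given link
# rate, and the bandwidth-delay product (BDP) that a connection must
# ramp up to fill.
def packets_per_second(rate_bps, packet_bytes=1500):
    return rate_bps / (packet_bytes * 8)

def bdp_bytes(rate_bps, rtt_s):
    return rate_bps * rtt_s / 8

# EDGE: 384 Kbps is roughly 32 full-sized (1500-byte) packets/second.
assert round(packets_per_second(384_000)) == 32

# GEO (assumed 10 Mbps): a 500 ms RTT (2 x 250 ms one-way) gives a
# 625 KB BDP -- hundreds of packets in flight before the pipe is full,
# which is what makes slow-start a major limitation on such paths.
assert bdp_bytes(10_000_000, 0.5) == 625_000.0
```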

10.4.  A Comparison with the Deployment Problems of ECN

   Given the glacially slow rate of deployment of ECN in the Internet to
   date [MAF05], it is disconcerting to note that some of the deployment
   problems of Quick-Start are even greater than those of ECN.  First,
   unlike ECN, which can be of benefit even if it is only deployed on
   one of the routers along the end-to-end path, a connection's use of
   Quick-Start requires Quick-Start deployment on all of the routers
   along the end-to-end path.  Second, unlike ECN, which uses an
   allocated field in the IP header, Quick-Start requires the extra
   complications of an IP Option, which can be difficult to pass through
   the current Internet [MAF05].

   However, in spite of these issues, there is some hope for the
   deployment of Quick-Start, at least in protected corners of the
   Internet, because the potential benefits of Quick-Start to the user
   are considerably more dramatic than those of ECN.  Rather than simply
   replacing the occasional dropped packet by an ECN-marked packet,
   Quick-Start is capable of dramatically increasing the throughput of
   connections in underutilized environments [SAF06].

11.  Security Considerations

   Sections 9.4 through 9.6 discuss the security considerations related to
   Quick-Start.  Section 9.4 discusses the potential abuse of Quick-
   Start by senders or receivers lying about whether the request was
   approved or about the approved rate, and of routers in collusion to
   misuse Quick-Start.  Section 9.5 discusses potential problems with
   traffic normalizers that rewrite IP TTLs in packet headers.  All
   these problems could result in the sender using a Rate Request that
   was inappropriately large, or thinking that a request was approved
   when it was in fact denied by at least one router along the path.
   This inappropriate use of Quick-Start could result in congestion and
   an unacceptable level of packet drops along the path.  Such
   congestion could also be part of a Denial of Service attack.

   Section 9.6 discusses a potential attack on the routers' processing
   and state load from an attack of Quick-Start Requests.  Section 9.6
   also discusses a potential attack on the available Quick-Start
   bandwidth by sending bogus Quick-Start Requests for bandwidth that
   will not, in fact, be used.  While this impacts the global usability
   of Quick-Start, it does not endanger the network as a whole since TCP
   uses standard congestion control if Quick-Start is not available.

   Section 4.7.2 discusses the potential problem of packets with Quick-
   Start Requests dropped by middleboxes along the path.

   As discussed in Section 5, for IPv4 IPsec Authentication Header
   Integrity Check Value (AH ICV) calculation, the Quick-Start Option is
   a mutable IPv4 option and hence completely zeroed for AH ICV
   calculation purposes.  This is also the treatment required by RFC
   4302 for unrecognized IPv4 options.  The IPv6 Quick-Start Option's
   IANA-allocated option type indicates that it is a mutable option;
   hence, according to RFC 4302, its option data is required to be
   zeroed for AH ICV computation purposes.  See RFC 4302 for further
   details.

   Section 6.2 discusses possible problems of Quick-Start used by
   connections carried over simple tunnels that are not compatible with
   Quick-Start.  In this case, it is possible that a Quick-Start Request
   is erroneously considered approved by the sender without the routers
   in the tunnel having individually approved the request, causing a
   false positive.

   We note two high-order points here.  First, the Quick-Start Nonce
   goes a long way towards preventing large-scale cheating.  Second,
   even if a host occasionally uses Quick-Start when it is not approved
   by the entire network path, the network will not collapse.  Quick-
   Start does not remove TCP's basic congestion control mechanisms;
   these will kick in when the network is heavily loaded, relegating any
   Quick-Start mistake to a transient.

12.  IANA Considerations

   Quick-Start requires an IP Option and a TCP Option.

12.1.  IP Option

   Quick-Start requires both an IPv4 Option Number (Section 3.1) and an
   IPv6 Option Number (Section 3.2).

   IPv4 Option Number:

   Copy Class Number Value Name
   ---- ----- ------ ----- ----
      0    00     25    25   QS    - Quick-Start

   IPv6 Option Number [RFC2460]:

   HEX         act  chg  rest
   ---         ---  ---  -----
     6          00   1   00110     Quick-Start

   For the IPv6 Option Number, the first two bits indicate that the IPv6
   node may skip over this option and continue processing the header if
   it doesn't recognize the option type, and the third bit indicates
   that the Option Data may change en-route.
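   The full option-type octet implied by these fields can be checked
   with a short sketch (field widths per RFC 2460: a 2-bit "act" field,
   a 1-bit "chg" field, and 5 remaining bits):

```python
# Pack the three fields of the IPv6 option-type octet from the table
# above: act=00 (skip over the option if unrecognized), chg=1 (option
# data may change en route), rest=00110.
act, chg, rest = 0b00, 0b1, 0b00110
option_type = (act << 6) | (chg << 5) | rest
assert option_type == 0x26
```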

   In both cases, this document should be listed as the reference
   document.

12.2.  TCP Option

   Quick-Start requires a TCP Option Number (Section 4.2).

   TCP Option Number:

   Kind Length Meaning
   ---- ------ ------------------------------
     27 8      Quick-Start Response

   This document should be listed as the reference document.

13.  Conclusions

   We are presenting the Quick-Start mechanism as a simple,
   understandable, and incrementally deployable mechanism that would be
   sufficient to allow some connections to start up with large initial
   rates, or large initial congestion windows, in over-provisioned,
   high-bandwidth environments.  We expect there will be an increasing
   number of over-provisioned, high-bandwidth environments where the
   Quick-Start mechanism, or another mechanism of similar power, could
   be of significant benefit to a wide range of traffic.  We are
   presenting the Quick-Start mechanism as a request for the community
   to provide feedback and experimentation on issues relating to Quick-
   Start.

14.  Acknowledgements

   The authors wish to thank Mark Handley for discussions of these
   issues.  The authors also thank the End-to-End Research Group, the
   Transport Services Working Group, and members of IPAM's program on
   Large-Scale Communication Networks for both positive and negative
   feedback on this proposal.  We thank Srikanth Sundarrajan for the
   initial implementation of Quick-Start in the NS simulator, and for
   the initial simulation study.  Many thanks to David Black and Joe
   Touch for extensive feedback on Quick-Start and IP tunnels.  We also
   thank Mohammed Ashraf, John Border, Bob Briscoe, Martin Duke, Tom
   Dunigan, Mitchell Erblich, Gorry Fairhurst, John Heidemann, Paul
   Hyder, Dina Katabi, and Vern Paxson for feedback.  Thanks also to
   Gorry Fairhurst for the suggestion of adding the QS Nonce to the
   Report of Approved Rate.

   The version of the QS Nonce in this document is based on a proposal
   from Guohan Lu [L05].  Earlier versions of this document contained an
   eight-bit QS Nonce, and subsequent versions discussed the possibility
   of a four-bit QS Nonce.

   This document builds upon the concepts described in [RFC3390],
   [AHO98], [RFC2415], and [RFC3168].  Some of the text on Quick-Start
   in tunnels was borrowed directly from RFC 3168.

   This document is the development of a proposal originally by Amit
   Jain for Initial Window Discovery.

Appendix A.  Related Work

   The Quick-Start proposal, taken together with HighSpeed TCP [RFC3649]
   or other transport protocols for high-bandwidth transfers, could go a
   significant way towards extending the range of performance for best-
   effort traffic in the Internet.  However, there are many things that
   the Quick-Start proposal would not accomplish.  Quick-Start is not a
   congestion control mechanism, and would not help in making more
   precise use of the available bandwidth -- that is, of achieving the
   goal of high throughput with low delay and low packet-loss rates.
   Quick-Start would not give routers more control over the decrease
   rates of active connections.

   In addition, any evaluation of Quick-Start must include a discussion
   of the relative benefits of approaches that use no explicit
   information from routers, and of approaches that use more fine-
   grained feedback from routers as part of a larger congestion control
   mechanism.  We discuss several classes of proposals in the sections
   below.

A.1.  Fast Start-Ups without Explicit Information from Routers

   One possibility would be for senders to use information from the
   packet streams to learn about the available bandwidth, without
   explicit information from routers.  These techniques would not allow
   a start-up as fast as that available from Quick-Start in an
   underutilized environment; one already has to have sent some packets
   in order to use the packet stream to learn about available bandwidth.
   However, these techniques could allow a start-up considerably faster
   than the current Slow-Start.  While it seems clear that approaches
   *without* explicit feedback from the routers will be strictly less
   powerful than is possible *with* explicit feedback, approaches more
   aggressive than Slow-Start may still be feasible without the
   complexity involved in obtaining explicit feedback from routers.

   Periodic packet streams:
   [JD02] explores the use of periodic packet streams to estimate the
   available bandwidth along a path.  The idea is that the one-way
   delays of a periodic packet stream show an increasing trend when the
   stream's rate is higher than the available bandwidth (due to an
   increasing queue).  While [JD02] states that the proposed mechanism
   does not cause significant increases in network utilization, losses,
   or delays when done by one flow at a time, the approach could be
   problematic if conducted concurrently by a number of flows.  [JD02]
   also gives an overview of some of the earlier work on inferring the
   available bandwidth from packet trains.
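   The core delay-trend test behind this class of schemes might be
   sketched as follows.  This is an illustration, not the [JD02]
   algorithm itself; in particular, the 0.55 pairwise-increase
   threshold is an assumed value.

```python
# If the one-way delays of a periodic probe stream mostly increase,
# the probe rate exceeds the available bandwidth: a queue is building
# at the bottleneck.  The threshold is an illustrative assumption.
def delays_increasing(delays, threshold=0.55):
    pairs = list(zip(delays, delays[1:]))
    increases = sum(1 for a, b in pairs if b > a)
    return increases / len(pairs) > threshold

assert delays_increasing([10.0, 10.4, 10.9, 11.5, 12.2])    # queue building
assert not delays_increasing([10.0, 9.8, 10.1, 9.9, 10.0])  # stable path
```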

   The Swift Start proposal from [PRAKS02] combines packet-pair and
   packet-pacing techniques.  An initial congestion window of four
   segments is used to estimate the available bandwidth along the path.
   This estimate is then used to dramatically increase the congestion
   window during the second RTT of data transmission.
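   The packet-pair idea underlying this approach can be sketched as
   follows (an illustration in the spirit of [PRAKS02], not the Swift
   Start algorithm itself):

```python
# Packet-pair sketch: two back-to-back segments leave the bottleneck
# spaced by one transmission time, so the gap between their ACKs
# estimates the bottleneck rate.
def packet_pair_estimate_bps(segment_bytes, ack_gap_s):
    # rate = bits in one segment / spacing introduced by the bottleneck
    return segment_bytes * 8 / ack_gap_s

# 1500-byte segments whose ACKs arrive 1.2 ms apart suggest ~10 Mbps.
estimate = packet_pair_estimate_bps(1500, 0.0012)
```

   The estimate is noisy in practice (ACK compression, cross traffic),
   which is one reason such schemes are less reliable than explicit
   feedback from routers.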

   In the TCP/SPAND proposal from [ZQK00] for speeding up short data
   transfers, network performance information would be shared among many
   co-located hosts to estimate each connection's fair share of the
   network resources.  Based on such estimation and the transfer size,
   the TCP sender would determine the optimal initial congestion window
   size.  The design for TCP/SPAND uses a performance gateway that
   monitors all traffic entering and leaving an organization's network.

   Sharing information among TCP connections:
   The Congestion Manager [RFC3124] and TCP control block sharing
   [RFC2140] both propose sharing congestion information among multiple
   TCP connections with the same endpoints.  With the Congestion
   Manager, a new TCP connection could start with a high initial cwnd,
   if it was sharing the path and the cwnd with a pre-existing TCP
   connection to the same destination that had already obtained a high
   congestion window.  RFC 2140 discusses ensemble sharing, where an
   established connection's congestion window could be `divided up' to
   be shared with a new connection to the same host.  However, neither
   of these approaches addresses the case of a connection to a new
   destination, with no existing or recent connection (and therefore
   congestion control state) to that destination.
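   The ensemble-sharing idea might be sketched as follows.  RFC 2140
   does not mandate a particular division policy; the even split and
   the floor of one segment below are illustrative choices.

```python
# Ensemble-sharing sketch in the spirit of RFC 2140: when new
# connections open to a host that already has an established
# connection, the established congestion window can be divided among
# the ensemble instead of starting the newcomers from slow-start.
def share_cwnd(established_cwnd, n_new, min_cwnd=1):
    # Divide the window among the old connection plus the new ones,
    # never dropping below one segment.
    return max(established_cwnd // (n_new + 1), min_cwnd)
```

   For example, share_cwnd(60, 2) gives each of the three connections
   a 20-segment window.  Note that this only helps when prior state to
   the same destination exists, which is exactly the gap noted above.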

   While continued research on the limits of the ability of TCP and
   other transport protocols to learn of available bandwidth without
   explicit feedback from the router seems useful, we note that there
   are several fundamental advantages of explicit feedback from routers.

   (1) Explicit feedback is faster than implicit feedback:
       One advantage of explicit feedback from the routers is that it
       allows the transport sender to reliably learn of available
       bandwidth in one round-trip time.

   (2) Explicit feedback is more reliable than implicit feedback:
       Techniques that attempt to assess the available bandwidth at
       connection start-up using implicit techniques are more error-
       prone than techniques that involve every element in the network
       path.  While explicit information from the network can be wrong,
       it has a much better chance of being appropriate than an end-host
       trying to *estimate* an appropriate sending rate using "black
       box" probing techniques of the entire path.

A.2.  Optimistic Sending without Explicit Information from Routers

   Another possibility that has been suggested [S02] is for the sender
   to start with a large initial window without explicit permission from
   the routers and without bandwidth estimation techniques and for the
   first packet of the initial window to contain information, such as
   the size or sending rate of the initial window.  The proposal would
   be that congested routers would use this information in the first
   data packet to drop or delay many or all of the packets from that
   initial window.  In this way, a flow's optimistically large initial
   window would not force the router to drop packets from competing
   flows in the network.  Such an approach would seem to require some
   mechanism for the sender to ensure that the routers along the path
   understood the mechanism for marking the first packet of a large
   initial window.

   Obviously, there would be a number of questions to consider about an
   approach of optimistic sending.

   (1) Incremental deployment:
       One question would be the potential complications of incremental
       deployment, where some of the routers along the path might not
       understand the packet information describing the initial window.

   (2) Congestion collapse:
       There could also be concerns about congestion collapse if many
       flows used large initial windows, many packets were dropped from
       optimistic initial windows, and many congested links ended up
       carrying packets that are only going to be dropped downstream.

   (3) Distributed Denial of Service attacks:
       A third question would be the potential role of optimistic
       senders in amplifying the damage done by a Distributed Denial of
       Service (DDoS) attack (assuming attackers use compliant
       congestion control in the hopes of "flying under the radar").

   (4) Performance hits if a packet is dropped:
       A fourth issue would be to quantify the performance hit to the
       connection when a packet is dropped from one of the initial
       windows of data.

A.3.  Fast Start-Ups with Other Information from Routers

   There have been several proposals somewhat similar to Quick-Start,
   where the transport protocol collects explicit information from the
   routers along the path.

   An IP Option about the free buffer size:
   In related work, [P00] investigates the use of a slightly different
   IP option for TCP connections to discover the available bandwidth
   along the path.  In that proposal, the IP option would query the
   routers along the path about the smallest available free buffer size.
   Also, the IP option would have been sent after the initial SYN
   exchange, when the TCP sender already had an estimate of the round-
   trip time.

   The Performance Transparency Protocol:
   The Performance Transparency Protocol (PTP) includes a proposal for a
   single PTP packet that would collect information from routers along
   the path from the sender to the receiver [W00].  For example, a
   single PTP packet could be used to determine the bottleneck bandwidth
   along a path.

   Additional proposals for end nodes to collect explicit information
   from routers include one variant of Explicit Transport Error
   Notification (ETEN), which includes a cumulative mechanism to notify
   endpoints of aggregate congestion statistics along the path [KAPS02].
   (A second variant in [KSEPA04] does not depend on cumulative
   congestion statistics from the network.)

A.4.  Fast Start-Ups with more Fine-Grained Feedback from Routers

   Proposals for more fine-grained, congestion-related feedback from
   routers include XCP [KHR02], MaxNet [MaxNet], and AntiECN marking
   [K03].  Appendix B.6 discusses in more detail the relationship
   between Quick-Start and proposals for more fine-grained per-packet
   feedback from routers.

   Proposals for new congestion control mechanisms based on more
   feedback from routers, such as XCP, are more powerful than
   Quick-Start, but are also more complex to understand and more
   difficult to deploy.
   XCP routers maintain no per-flow state, but provide more fine-
   grained feedback to end-nodes than the one-bit congestion feedback of
   ECN.  The per-packet feedback from XCP can be positive or negative,
   and specifies the increase or decrease in the sender's congestion
   window when this packet is acknowledged.  XCP is a full-fledged
   congestion control scheme, whereas Quick-Start represents a quick
   check to determine if the network path is significantly underutilized
   such that a connection can start faster and then fall back to TCP's
   standard congestion control algorithms.
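   The sender side of such per-packet feedback might be sketched as
   follows.  The delta values are illustrative; real XCP derives its
   feedback from link spare capacity and persistent queue size.

```python
# XCP-style per-packet feedback sketch: each ACK carries a signed
# congestion-window delta computed by routers along the path, and the
# sender applies it directly rather than running slow-start or AIMD.
def apply_xcp_feedback(cwnd, feedback_deltas):
    for delta in feedback_deltas:      # one signed delta per ACKed packet
        cwnd = max(cwnd + delta, 1.0)  # never shrink below one segment
    return cwnd
```

   For example, apply_xcp_feedback(10.0, [2.0, 2.0, -1.0]) yields a
   13-segment window.  This contrasts with Quick-Start's single
   coarse-grained check at connection start-up.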

   The AntiECN proposal is for a single bit in the packet header that
   routers could set to indicate that they are underutilized.  For each
   TCP ACK arriving at the sender indicating that a packet has been
   received with the Anti-ECN bit set, the sender would be able to
   increase its congestion window by one packet, as it would during
   slow-start.

A.5.  Fast Start-Ups with Lower-Than-Best-Effort Service

   There have been proposals for routers to provide a Lower Effort
   differentiated service that would be lower than best effort
   [RFC3662].  Such a service could carry traffic for which delivery is
   strictly optional, or could carry traffic that is important but that
   has low priority in terms of time.  Because it does not interfere
   with best-effort traffic, Lower Effort services could be used by
   transport protocols that start up faster than slow-start.  For
   example, [SGF05] is a proposal for the transport sender to use low-
   priority traffic for much of the initial traffic, with routers
   configured to use strict priority queueing.

   A separate but related issue is that of below-best-effort TCP,
   variants of TCP that would not rely on Lower Effort services in the
   network, but would approximate below-best-effort traffic by detecting
   and responding to congestion sooner than standard TCP.  TCP Nice
   [V02] and TCP Low Priority (TCP-LP) [KK03] are two such proposals for
   below-best-effort TCP, with the purpose of allowing TCP connections
   to use the bandwidth unused by TCP and other traffic in a non-
   intrusive fashion.  Both TCP Nice and TCP Low Priority use the
   default slow-start mechanisms of TCP.

   We note that Quick-Start is quite different from either a Lower-
   Effort service or a below-best-effort variant of TCP.  Unlike these
   proposals, Quick-Start is intended to be useful for best-effort
   traffic that wishes to receive at least as much bandwidth as
   competing best-effort connections.
