RFC 3272

Overview and Principles of Internet Traffic Engineering

3.0 Traffic Engineering Process Model(s)

   This section describes a generic process model that captures the high
   level practical aspects of Internet traffic engineering in an
   operational context.  The process model is described as a sequence of
   actions that a traffic engineer, or more generally a traffic
   engineering system, must perform to optimize the performance of an
   operational network (see also [RFC-2702, AWD2]).  The process model
   described here represents the broad activities common to most traffic
   engineering methodologies although the details regarding how traffic
   engineering is executed may differ from network to network.  This
   process model may be enacted explicitly or implicitly, by an
   automaton and/or by a human.

   The traffic engineering process model is iterative [AWD2].  The four
   phases of the process model described below are repeated continually.

   The first phase of the TE process model is to define the relevant
   control policies that govern the operation of the network.  These
   policies may depend upon many factors including the prevailing
   business model, the network cost structure, the operating
   constraints, the utility model, and optimization criteria.

   The second phase of the process model is a feedback mechanism
   involving the acquisition of measurement data from the operational
   network.  If empirical data is not readily available from the
   network, then synthetic workloads may be used instead which reflect
   either the prevailing or the expected workload of the network.
   Synthetic workloads may be derived by estimation or extrapolation
   using prior empirical data.  They may also be derived from
   mathematical models of traffic characteristics or by other means.

   The third phase of the process model is to analyze the network state
   and to characterize traffic workload.  Performance analysis may be
   proactive and/or reactive.  Proactive performance analysis identifies
   potential problems that do not yet exist but could manifest in the
   future.  Reactive performance analysis identifies existing problems,
   determines their cause through diagnosis, and evaluates alternative
   approaches to remedy the problem, if necessary.  A number of
   quantitative and qualitative techniques may be used in the analysis
   process, including modeling based analysis and simulation.  The
   analysis phase of the process model may involve investigating the
   concentration and distribution of traffic across the network or
   relevant subsets of the network, identifying the characteristics of
   the offered traffic workload, identifying existing or potential
   bottlenecks, and identifying network pathologies such as ineffective
   link placement, single points of failures, etc.  Network pathologies
   may result from many factors including inferior network architecture,
   inferior network design, and configuration problems.  A traffic
   matrix may be constructed as part of the analysis process.  Network
   analysis may also be descriptive or prescriptive.

   The fourth phase of the TE process model is the performance
   optimization of the network.  The performance optimization phase
   involves a decision process which selects and implements a set of
   actions from a set of alternatives.  Optimization actions may include
   the use of appropriate techniques to either control the offered
   traffic or to control the distribution of traffic across the network.
   Optimization actions may also involve adding additional links or
   increasing link capacity, deploying additional hardware such as
   routers and switches, systematically adjusting parameters associated
   with routing such as IGP metrics and BGP attributes, and adjusting
   traffic management parameters.  Network performance optimization may
   also involve starting a network planning process to improve the
   network architecture, network design, network capacity, network
   technology, and the configuration of network elements to accommodate
   current and future growth.

3.1 Components of the Traffic Engineering Process Model

   The key components of the traffic engineering process model include a
   measurement subsystem, a modeling and analysis subsystem, and an
   optimization subsystem.  The following subsections examine these
   components as they apply to the traffic engineering process model.

3.2 Measurement

   Measurement is crucial to the traffic engineering function.  The
   operational state of a network can be conclusively determined only
   through measurement.  Measurement is also critical to the
   optimization function because it provides feedback data which is used
   by traffic engineering control subsystems.  This data is used to
   adaptively optimize network performance in response to events and
   stimuli originating within and outside the network.  Measurement is
   also needed to determine the quality of network services and to
   evaluate the effectiveness of traffic engineering policies.
   Experience suggests that measurement is most effective when acquired
   and applied systematically.

   When developing a measurement system to support the traffic
   engineering function in IP networks, the following questions should
   be carefully considered: Why is measurement needed in this particular
   context? What parameters are to be measured?  How should the
   measurement be accomplished?  Where should the measurement be
   performed? When should the measurement be performed?  How frequently
   should the monitored variables be measured?  What level of
   measurement accuracy and reliability is desirable? What level of
   measurement accuracy and reliability is realistically attainable? To
   what extent can the measurement system permissibly interfere with the
   monitored network components and variables? What is the acceptable
   cost of measurement? The answers to these questions will determine
   the measurement tools and methodologies appropriate in any given
   traffic engineering context.

   It should also be noted that there is a distinction between
   measurement and evaluation.  Measurement provides raw data concerning
   state parameters and variables of monitored network elements.
   Evaluation utilizes the raw data to make inferences regarding the
   monitored system.

   Measurement in support of the TE function can occur at different
   levels of abstraction.  For example, measurement can be used to
   derive packet level characteristics, flow level characteristics, user
   or customer level characteristics, traffic aggregate characteristics,
   component level characteristics, and network wide characteristics.

3.3 Modeling, Analysis, and Simulation

   Modeling and analysis are important aspects of Internet traffic
   engineering.  Modeling involves constructing an abstract or physical
   representation which depicts relevant traffic characteristics and
   network attributes.

   A network model is an abstract representation of the network which
   captures relevant network features, attributes, and characteristics,
   such as link and nodal attributes and constraints.  A network model
   may facilitate analysis and/or simulation which can be used to
   predict network performance under various conditions as well as to
   guide network expansion plans.

   In general, Internet traffic engineering models can be classified as
   either structural or behavioral.  Structural models focus on the
   organization of the network and its components.  Behavioral models
   focus on the dynamics of the network and the traffic workload.
   Modeling for Internet traffic engineering may also be formal or
   informal.

   Accurate behavioral models for traffic sources are particularly
   useful for analysis.  Development of behavioral traffic source models
   that are consistent with empirical data obtained from operational
   networks is a major research topic in Internet traffic engineering.
   These source models should also be tractable and amenable to
   analysis.  The topic of source models for IP traffic is a research
   topic and is therefore outside the scope of this document.  Its
   importance, however, must be emphasized.

   Network simulation tools are extremely useful for traffic
   engineering.  Because of the complexity of realistic quantitative
   analysis of network behavior, certain aspects of network performance
   studies can only be conducted effectively using simulation.  A good
   network simulator can be used to mimic and visualize network
   characteristics under various conditions in a safe and non-disruptive
   manner.  For example, a network simulator may be used to depict
   congested resources and hot spots, and to provide hints regarding
   possible solutions to network performance problems.  A good simulator
   may also be used to validate the effectiveness of planned solutions
   to network issues without the need to tamper with the operational
   network, or to commence an expensive network upgrade which may not
   achieve the desired objectives.  Furthermore, during the process of
   network planning, a network simulator may reveal pathologies such as
   single points of failure which may require additional redundancy, and
   potential bottlenecks and hot spots which may require additional
   capacity.

   Routing simulators are especially useful in large networks.  A
   routing simulator may identify planned links which may not actually
   be used to route traffic by the existing routing protocols.
   Simulators can also be used to conduct scenario based and
   perturbation based analysis, as well as sensitivity studies.
   Simulation results can be used to initiate appropriate actions in
   various ways.  For example, an important application of network
   simulation tools is to investigate and identify how best to make the
   network evolve and grow, in order to accommodate projected future
   traffic demands.

3.4 Optimization

   Network performance optimization involves resolving network issues by
   transforming such issues into a form that admits a solution,
   identifying a solution, and implementing the solution.
   Network performance optimization can be corrective or perfective.  In
   corrective optimization, the goal is to remedy a problem that has
   occurred or that is incipient.  In perfective optimization, the goal
   is to improve network performance even when explicit problems do not
   exist and are not anticipated.

   Network performance optimization is a continual process, as noted
   previously.  Performance optimization iterations may consist of
   real-time optimization sub-processes and non-real-time network
   planning sub-processes.  The difference between real-time
   optimization and network planning is primarily in the relative time-
   scale in which they operate and in the granularity of actions.  One
   of the objectives of a real-time optimization sub-process is to
   control the mapping and distribution of traffic over the existing
   network infrastructure to avoid and/or relieve congestion, to assure
   satisfactory service delivery, and to optimize resource utilization.
   Real-time optimization is needed because random incidents such as
   fiber cuts or shifts in traffic demand will occur irrespective of how
   well a network is designed.  These incidents can cause congestion and
   other problems to manifest in an operational network.  Real-time
   optimization must solve such problems in small to medium time-scales
   ranging from micro-seconds to minutes or hours.  Examples of real-
   time optimization include queue management, IGP/BGP metric tuning,
   and using technologies such as MPLS explicit LSPs to change the paths
   of some traffic trunks [XIAO].

   One of the functions of the network planning sub-process is to
   initiate actions to systematically evolve the architecture,
   technology, topology, and capacity of a network.  When a problem
   exists in the network, real-time optimization should provide an
   immediate remedy.  Because a prompt response is necessary, the real-
   time solution may not be the best possible solution.  Network
   planning may subsequently be needed to refine the solution and
   improve the situation.  Network planning is also required to expand
   the network to support traffic growth and changes in traffic
   distribution over time.  As previously noted, a change in the
   topology and/or capacity of the network may be the outcome of network
   planning.

   Clearly, network planning and real-time performance optimization are
   mutually complementary activities.  A well-planned and designed
   network makes real-time optimization easier, while a systematic
   approach to real-time network performance optimization allows network
   planning to focus on long term issues rather than tactical
   considerations.  Systematic real-time network performance
   optimization also provides valuable inputs and insights toward
   network planning.

   Stability is an important consideration in real-time network
   performance optimization.  This aspect will be repeatedly addressed
   throughout this memo.

4.0 Historical Review and Recent Developments

   This section briefly reviews different traffic engineering approaches
   proposed and implemented in telecommunications and computer networks.
   The discussion is not intended to be comprehensive.  It is primarily
   intended to illuminate pre-existing perspectives and prior art
   concerning traffic engineering in the Internet and in legacy
   telecommunications networks.

4.1 Traffic Engineering in Classical Telephone Networks

   This subsection presents a brief overview of traffic engineering in
   telephone networks, which often relates to the way user traffic is
   steered from an originating node to the terminating node.  A detailed
   description of the various routing strategies applied in telephone
   networks is included in the book by G. Ash [ASH2].

   The early telephone network relied on static hierarchical routing,
   whereby routing patterns remained fixed independent of the state of
   the network or time of day.  The hierarchy was intended to
   accommodate overflow traffic, improve network reliability via
   alternate routes, and prevent call looping by employing strict
   hierarchical rules.  The network was typically over-provisioned since
   a given fixed route had to be dimensioned so that it could carry user
   traffic during a busy hour of any busy day.  Hierarchical routing in
   the telephony network was found to be too rigid upon the advent of
   digital switches and stored program control which were able to manage
   more complicated traffic engineering rules.

   Dynamic routing was introduced to alleviate the routing inflexibility
   in the static hierarchical routing so that the network would operate
   more efficiently.  This resulted in significant economic gains
   [HUSS87].  Dynamic routing typically reduces the overall loss
   probability by 10 to 20 percent (compared to static hierarchical
   routing).  Dynamic routing can also improve network resilience by
   recalculating routes on a per-call basis and periodically updating
   routes.

   There are three main types of dynamic routing in the telephone
   network.  They are time-dependent routing, state-dependent routing
   (SDR), and event dependent routing (EDR).

   In time-dependent routing, regular variations in traffic loads (such
   as time of day or day of week) are exploited in pre-planned routing
   tables.  In state-dependent routing, routing tables are updated
   online according to the current state of the network (e.g., traffic
   demand, utilization, etc.).  In event dependent routing, routing
   changes are triggered by events (such as call setups encountering
   congested or blocked links) whereupon new paths are searched out
   using learning models.  EDR methods are real-time adaptive, but they
   do not require global state information as does SDR.  Examples of EDR
   schemes include the dynamic alternate routing (DAR) from BT, the
   state-and-time dependent routing (STR) from NTT, and the success-to-
   the-top (STT) routing from AT&T.

   Dynamic non-hierarchical routing (DNHR) is an example of dynamic
   routing that was introduced in the AT&T toll network in the 1980's to
   respond to time-dependent information such as regular load variations
   as a function of time.  Time-dependent information in terms of load
   may be divided into three time scales: hourly, weekly, and yearly.
   Correspondingly, three algorithms are defined to pre-plan the routing
   tables.  The network design algorithm operates over a year-long
   interval while the demand servicing algorithm operates on a weekly
   basis to fine tune link sizes and routing tables to correct forecast
   errors on the yearly basis.  At the smallest time scale, the routing
   algorithm is used to make limited adjustments based on daily traffic
   variations.  Network design and demand servicing are computed using
   offline calculations.  Typically, the calculations require extensive
   searches on possible routes.  On the other hand, routing may need
   online calculations to handle crankback.  DNHR adopts a "two-link"
   approach whereby a path can consist of two links at most.  The
   routing algorithm presents an ordered list of route choices between
   an originating switch and a terminating switch.  If a call overflows,
   a via switch (a tandem exchange between the originating switch and
   the terminating switch) would send a crankback signal to the
   originating switch.  This switch would then select the next route,
   and so on, until there are no alternative routes available, in which
   case the call is blocked.

4.2 Evolution of Traffic Engineering in Packet Networks

   This subsection reviews related prior work that was intended to
   improve the performance of data networks.  Indeed, optimization of
   the performance of data networks started in the early days of the
   ARPANET.  Other early commercial networks such as SNA also recognized
   the importance of performance optimization and service
   differentiation.

   In terms of traffic management, the Internet has been a best effort
   service environment until recently.  In particular, very limited
   traffic management capabilities existed in IP networks to provide
   differentiated queue management and scheduling services to packets
   belonging to different classes.

   In terms of routing control, the Internet has employed distributed
   protocols for intra-domain routing.  These protocols are highly
   scalable and resilient.  However, they are based on simple algorithms
   for path selection which have very limited functionality to allow
   flexible control of the path selection process.

   In the following subsections, the evolution of practical traffic
   engineering mechanisms in IP networks and its predecessors is
   reviewed.

4.2.1 Adaptive Routing in the ARPANET

   The early ARPANET recognized the importance of adaptive routing where
   routing decisions were based on the current state of the network
   [MCQ80].  Early minimum delay routing approaches forwarded each
   packet to its destination along a path for which the total estimated
   transit time was the smallest.  Each node maintained a table of
   network delays, representing the estimated delay that a packet would
   experience along a given path toward its destination.  The minimum
   delay table was periodically transmitted by a node to its neighbors.
   The shortest path, in terms of hop count, was also propagated to give
   the connectivity information.
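
   As an informal illustration (not part of the original ARPANET
   design), the Python sketch below shows the kind of table update
   described above: on receiving a neighbor's minimum-delay table, a
   node keeps, for each destination, the smaller of its current
   estimate and the delay obtained by going through that neighbor.  The
   data structures are assumptions made for this example.

      def merge_neighbor_delays(own_estimates, neighbor_estimates,
                                link_delay):
          # Keep, per destination, the smaller of the current estimate and
          # the delay via the neighbor (neighbor's estimate + link delay).
          merged = dict(own_estimates)
          for destination, delay in neighbor_estimates.items():
              via_neighbor = link_delay + delay
              if via_neighbor < merged.get(destination, float("inf")):
                  merged[destination] = via_neighbor
          return merged

      own = {"D": 40.0}                       # delays estimated locally
      from_neighbor = {"D": 12.0, "E": 30.0}  # table received from neighbor
      print(merge_neighbor_delays(own, from_neighbor, link_delay=10.0))
      # -> {'D': 22.0, 'E': 40.0}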

   One drawback to this approach is that dynamic link metrics tend to
   create "traffic magnets" causing congestion to be shifted from one
   location of a network to another location, resulting in oscillation
   and network instability.

4.2.2 Dynamic Routing in the Internet

   The Internet evolved from the ARPANET and adopted dynamic routing
   algorithms with distributed control to determine the paths that
   packets should take en-route to their destinations.  The routing
   algorithms are adaptations of shortest path algorithms where costs
   are based on link metrics.  The link metric can be based on static or
   dynamic quantities.  The link metric based on static quantities may
   be assigned administratively according to local criteria.  The link
   metric based on dynamic quantities may be a function of a network
   congestion measure such as delay or packet loss.

   It was apparent early that static link metric assignment was
   inadequate because it can easily lead to unfavorable scenarios in
   which some links become congested while others remain lightly loaded.
   One of the many reasons for the inadequacy of static link metrics is
   that link metric assignment was often done without considering the
   traffic matrix in the network.  Also, the routing protocols did not
   take traffic attributes and capacity constraints into account when
   making routing decisions.  This results in traffic concentration
   being localized in subsets of the network infrastructure and
   potentially causing congestion.  Even if link metrics are assigned in
   accordance with the traffic matrix, unbalanced loads in the network
   can still occur due to a number of factors, including:

      -  Resources may not be deployed in the most optimal locations
         from a routing perspective.

      -  Forecasting errors in traffic volume and/or traffic
         distribution.

      -  Dynamics in traffic matrix due to the temporal nature of
         traffic patterns, BGP policy change from peers, etc.

   The inadequacy of the legacy Internet interior gateway routing system
   is one of the factors motivating the interest in path oriented
   technology with explicit routing and constraint-based routing
   capability such as MPLS.

4.2.3 ToS Routing

   Type-of-Service (ToS) routing involves different routes going to the
   same destination with selection dependent upon the ToS field of an IP
   packet [RFC-2474].  The ToS classes may include low delay and high
   throughput.  Each link is associated with multiple link
   costs and each link cost is used to compute routes for a particular
   ToS.  A separate shortest path tree is computed for each ToS.  The
   shortest path algorithm must be run for each ToS resulting in very
   expensive computation.  Classical ToS-based routing is now outdated
   because the ToS field in the IP header has been superseded by the
   Diffserv (DS) field.
   Effective traffic engineering is difficult to perform in classical
   ToS-based routing because each class still relies exclusively on
   shortest path routing which results in localization of traffic
   concentration within the network.

4.2.4 Equal Cost Multi-Path

   Equal Cost Multi-Path (ECMP) is another technique that attempts to
   address the deficiency in the Shortest Path First (SPF) interior
   gateway routing systems [RFC-2328].  In the classical SPF algorithm,
   if two or more shortest paths exist to a given destination, the
   algorithm will choose one of them.  The algorithm is modified
   slightly in ECMP so that if two or more equal cost shortest paths
   exist between two nodes, the traffic between the nodes is distributed
   among the multiple equal-cost paths.  Traffic distribution across the
   equal-cost paths is usually performed in one of two ways: (1)
   packet-based in a round-robin fashion, or (2) flow-based using
   hashing on source and destination IP addresses and possibly other
   fields of the IP header.  The first approach can easily cause out-
   of-order packets while the second approach is dependent upon the
   number and distribution of flows.  Flow-based load sharing may be
   unpredictable in an enterprise network where the number of flows is
   relatively small and less heterogeneous (for example, hashing may not
   be uniform), but it is generally effective in core public networks
   where the number of flows is large and heterogeneous.

   In ECMP, link costs are static and bandwidth constraints are not
   considered, so ECMP attempts to distribute the traffic as equally as
   possible among the equal-cost paths independent of the congestion
   status of each path.  As a result, given two equal-cost paths, it is
   possible that one of the paths will be more congested than the other.
   Another drawback of ECMP is that load sharing cannot be achieved on
   multiple paths which have non-identical costs.

4.2.5 Nimrod

   Nimrod is a routing system developed to provide heterogeneous service
   specific routing in the Internet, while taking multiple constraints
   into account [RFC-1992].  Essentially, Nimrod is a link state routing
   protocol which supports path oriented packet forwarding.  It uses the
   concept of maps to represent network connectivity and services at
   multiple levels of abstraction.  Mechanisms are provided to allow
   restriction of the distribution of routing information.

   Even though Nimrod did not enjoy deployment in the public Internet, a
   number of key concepts incorporated into the Nimrod architecture,
   such as explicit routing which allows selection of paths at
   originating nodes, are beginning to find applications in some recent
   constraint-based routing initiatives.

4.3 Overlay Model

   In the overlay model, a virtual-circuit network, such as ATM, frame
   relay, or WDM, provides virtual-circuit connectivity between routers
   that are located at the edges of a virtual-circuit cloud.  In this
   mode, two routers that are connected through a virtual circuit see a
   direct adjacency between themselves independent of the physical route
   taken by the virtual circuit through the ATM, frame relay, or WDM
   network.  Thus, the overlay model essentially decouples the logical
   topology that routers see from the physical topology that the ATM,
   frame relay, or WDM network manages.  The overlay model based on ATM
   or frame relay enables a network administrator or an automaton to
   employ traffic engineering concepts to perform path optimization by
   re-configuring or rearranging the virtual circuits so that a virtual
   circuit on a congested or sub-optimal physical link can be re-routed
   to a less congested or more optimal one.  In the overlay model,
   traffic engineering is also employed to establish relationships
   between the traffic management parameters (e.g., PCR, SCR, and MBS
   for ATM) of the virtual-circuit technology and the actual traffic
   that traverses each circuit.  These relationships can be established
   based upon known or projected traffic profiles, and some other
   factors.

   The overlay model using IP over ATM requires the management of two
   separate networks with different technologies (IP and ATM) resulting
   in increased operational complexity and cost.  In the fully-meshed
   overlay model, each router would peer to every other router in the
   network, so that the total number of adjacencies is a quadratic
   function of the number of routers.  Some of the issues with the
   overlay model are discussed in [AWD2].
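
   The quadratic growth of adjacencies can be made concrete with a
   small, purely illustrative calculation: with n routers in a full
   mesh, each router maintains n-1 adjacencies and the overlay as a
   whole requires n(n-1)/2 virtual circuits.

      def full_mesh_overlay(n_routers):
          # Each router peers with every other router over a virtual
          # circuit, so adjacencies per router grow linearly and the
          # total number of circuits grows quadratically.
          adjacencies_per_router = n_routers - 1
          total_virtual_circuits = n_routers * (n_routers - 1) // 2
          return adjacencies_per_router, total_virtual_circuits

      print(full_mesh_overlay(100))   # -> (99, 4950)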

4.4 Constraint-Based Routing

   Constraint-based routing refers to a class of routing systems that
   compute routes through a network subject to the satisfaction of a set
   of constraints and requirements.  In the most general setting,
   constraint-based routing may also seek to optimize overall network
   performance while minimizing costs.

   The constraints and requirements may be imposed by the network itself
   or by administrative policies.  Constraints may include bandwidth,
   hop count, delay, and policy instruments such as resource class
   attributes.  Constraints may also include domain specific attributes
   of certain network technologies and contexts which impose
   restrictions on the solution space of the routing function.  Path
   oriented technologies such as MPLS have made constraint-based routing
   feasible and attractive in public IP networks.
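
   A common realization of constraint-based routing prunes the links
   that cannot satisfy the constraints and then computes a shortest
   path over the remaining topology.  The Python sketch below shows
   this for a single bandwidth constraint; the adjacency structure is
   an assumption made for this example, and operational implementations
   handle many more constraint types.

      import heapq

      def constrained_shortest_path(adjacency, src, dst, min_bandwidth):
          # adjacency: node -> list of (neighbor, cost, available_bw)
          dist, prev, heap = {src: 0}, {}, [(0, src)]
          while heap:
              d, node = heapq.heappop(heap)
              if node == dst:
                  break
              if d > dist.get(node, float("inf")):
                  continue
              for nbr, cost, bandwidth in adjacency.get(node, []):
                  if bandwidth < min_bandwidth:   # prune infeasible links
                      continue
                  nd = d + cost
                  if nd < dist.get(nbr, float("inf")):
                      dist[nbr], prev[nbr] = nd, node
                      heapq.heappush(heap, (nd, nbr))
          if dst not in dist:
              return None                         # no feasible path
          path = [dst]
          while path[-1] != src:
              path.append(prev[path[-1]])
          return list(reversed(path))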

   The concept of constraint-based routing within the context of MPLS
   traffic engineering requirements in IP networks was first defined in
   [RFC-2702].

   Unlike QoS routing (for example, see [RFC-2386] and [MA]) which
   generally addresses the issue of routing individual traffic flows to
   satisfy prescribed flow based QoS requirements subject to network
   resource availability, constraint-based routing is applicable to
   traffic aggregates as well as flows and may be subject to a wide
   variety of constraints which may include policy restrictions.

4.5 Overview of Other IETF Projects Related to Traffic Engineering

   This subsection reviews a number of IETF activities pertinent to
   Internet traffic engineering.  These activities are primarily
   intended to evolve the IP architecture to support new service
   definitions which allow preferential or differentiated treatment to
   be accorded to certain types of traffic.

4.5.1 Integrated Services

   The IETF Integrated Services working group developed the integrated
   services (Intserv) model.  This model requires resources, such as
   bandwidth and buffers, to be reserved a priori for a given traffic
   flow to ensure that the quality of service requested by the traffic
   flow is satisfied.  The integrated services model includes additional
   components beyond those used in the best-effort model such as packet
   classifiers, packet schedulers, and admission control.  A packet
   classifier is used to identify flows that are to receive a certain
   level of service.  A packet scheduler handles the scheduling of
   service to different packet flows to ensure that QoS commitments are
   met.  Admission control is used to determine whether a router has the
   necessary resources to accept a new flow.
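
   As an informal sketch of the admission control component (not a
   normative Intserv algorithm; the class and attribute names are
   invented for this example), a router might accept a new flow only
   while the requested bandwidth and buffer space remain available:

      class AdmissionControl:
          def __init__(self, link_bandwidth, buffer_space):
              self.available_bandwidth = link_bandwidth
              self.available_buffers = buffer_space

          def admit(self, requested_bandwidth, requested_buffers):
              # Accept the flow only if both resources are available.
              if (requested_bandwidth <= self.available_bandwidth and
                      requested_buffers <= self.available_buffers):
                  self.available_bandwidth -= requested_bandwidth
                  self.available_buffers -= requested_buffers
                  return True       # resources reserved for the flow
              return False          # reject: commitment cannot be met

      ac = AdmissionControl(link_bandwidth=100_000_000, buffer_space=512)
      print(ac.admit(requested_bandwidth=2_000_000, requested_buffers=8))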

   Two services have been defined under the Integrated Services model:
   guaranteed service [RFC-2212] and controlled-load service [RFC-2211].

   The guaranteed service can be used for applications requiring bounded
   packet delivery time.  For this type of application, data that is
   delivered to the application after a pre-defined amount of time has
   elapsed is usually considered worthless.  Therefore, guaranteed
   service was intended to provide a firm quantitative bound on the
   end-to-end packet delay for a flow.  This is accomplished by
   controlling the queuing delay on network elements along the data flow
   path.  The guaranteed service model does not, however, provide
   bounds on jitter (the variation in inter-arrival times between
   consecutive packets).

   The controlled-load service can be used for adaptive applications
   that can tolerate some delay but are sensitive to traffic overload
   conditions.  This type of application typically functions
   satisfactorily when the network is lightly loaded but its performance
   degrades significantly when the network is heavily loaded.
   Controlled-load service, therefore, has been designed to provide
   approximately the same service as best-effort service in a lightly
   loaded network regardless of actual network conditions.  Controlled-
   load service is described qualitatively in that no target values of
   delay or loss are specified.

   The main issue with the Integrated Services model has been
   scalability [RFC-2998], especially in large public IP networks which
   may potentially have millions of active micro-flows in transit
   concurrently.

   A notable feature of the Integrated Services model is that it
   requires explicit signaling of QoS requirements from end systems to
   routers [RFC-2753].  The Resource Reservation Protocol (RSVP)
   performs this signaling function and is a critical component of the
   Integrated Services model.  The RSVP protocol is described next.

4.5.2 RSVP

   RSVP is a soft state signaling protocol [RFC-2205].  It supports
   receiver initiated establishment of resource reservations for both
   multicast and unicast flows.  RSVP was originally developed as a
   signaling protocol within the integrated services framework for
   applications to communicate QoS requirements to the network and for
   the network to reserve relevant resources to satisfy the QoS
   requirements [RFC-2205].

   Under RSVP, the sender or source node sends a PATH message to the
   receiver with the same source and destination addresses as the
   traffic which the sender will generate.  The PATH message contains:
   (1) a sender Tspec specifying the characteristics of the traffic, (2)
   a sender Template specifying the format of the traffic, and (3) an
   optional Adspec which is used to support the concept of "one pass with
   advertising" (OPWA) [RFC-2205].  Every intermediate router along the
   path forwards the PATH Message to the next hop determined by the
   routing protocol.  Upon receiving a PATH Message, the receiver
   responds with a RESV message which includes a flow descriptor used to
   request resource reservations.  The RESV message travels to the
   sender or source node in the opposite direction along the path that
   the PATH message traversed.  Every intermediate router along the path
   can reject or accept the reservation request of the RESV message.  If
   the request is rejected, the rejecting router will send an error
   message to the receiver and the signaling process will terminate.  If
   the request is accepted, link bandwidth and buffer space are
   allocated for the flow and the related flow state information is
   installed in the router.
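
   The PATH/RESV exchange described above can be summarized with the
   following toy Python model (illustrative only; it ignores soft
   state, refresh messages, and the actual message formats, and the
   Router attributes are assumptions made for this example):

      class Router:
          def __init__(self, name, available_bandwidth):
              self.name = name
              self.available_bandwidth = available_bandwidth
              self.path_state = {}
              self.reservations = {}

      def rsvp_reserve(routers_on_path, flow_id, bandwidth):
          # PATH: travels downstream, installing path state hop by hop.
          for router in routers_on_path:
              router.path_state[flow_id] = bandwidth
          # RESV: travels upstream; each hop accepts or rejects it.
          for router in reversed(routers_on_path):
              if router.available_bandwidth < bandwidth:
                  return "RESV rejected at %s" % router.name
              router.available_bandwidth -= bandwidth
              router.reservations[flow_id] = bandwidth
          return "reservation installed along the path"

      path = [Router("R1", 10), Router("R2", 10), Router("R3", 4)]
      print(rsvp_reserve(path, flow_id="flow-1", bandwidth=5))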

   One of the issues with the original RSVP specification was
   scalability.  This is because reservations were required for micro-
   flows, so that the amount of state maintained by network elements
   tends to increase linearly with the number of micro-flows.  These
   issues are described in [RFC-2961].

   Recently, RSVP has been modified and extended in several ways to
   mitigate the scaling problems.  As a result, it is becoming a
   versatile signaling protocol for the Internet.  For example, RSVP has
   been extended to reserve resources for aggregation of flows, to set
   up MPLS explicit label switched paths, and to perform other signaling
   functions within the Internet.  There are also a number of proposals
   to reduce the amount of refresh messages required to maintain
   established RSVP sessions [RFC-2961].

   A number of IETF working groups have been engaged in activities
   related to the RSVP protocol.  These include the original RSVP
   working group, the MPLS working group, the Resource Allocation
   Protocol working group, and the Policy Framework working group.

4.5.3 Differentiated Services

   The goal of the Differentiated Services (Diffserv) effort within the
   IETF is to devise scalable mechanisms for categorization of traffic
   into behavior aggregates, which ultimately allows each behavior
   aggregate to be treated differently, especially when there is a
   shortage of resources such as link bandwidth and buffer space [RFC-
   2475].  One of the primary motivations for the Diffserv effort was to
   devise alternative mechanisms for service differentiation in the
   Internet that mitigate the scalability issues encountered with the
   Intserv model.

   The IETF Diffserv working group has defined a Differentiated Services
   field in the IP header (DS field).  The DS field consists of six bits
   of the part of the IP header formerly known as the ToS octet.  The DS
   field is used to indicate the forwarding treatment that a packet
   should receive at a node [RFC-2474].  The Diffserv working group has
   also standardized a number of Per-Hop Behavior (PHB) groups.  Using
   the PHBs, several classes of services can be defined using different
   classification, policing, shaping, and scheduling rules.
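
   For illustration only (the helper names are arbitrary), the DS field
   can be read from, or written into, the former ToS octet with simple
   bit operations; the two least significant bits of the octet lie
   outside the DS field:

      def dscp_from_ds_octet(octet):
          # The DS field occupies the six most significant bits.
          return (octet >> 2) & 0x3F

      def ds_octet_from_dscp(dscp, low_bits=0):
          return ((dscp & 0x3F) << 2) | (low_bits & 0x3)

      # Expedited Forwarding (EF) uses DSCP 46, i.e., octet value 0xB8.
      assert ds_octet_from_dscp(46) == 0xB8
      assert dscp_from_ds_octet(0xB8) == 46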

   For an end-user of network services to receive Differentiated
   Services from its Internet Service Provider (ISP), it may be
   necessary for the user to have a Service Level Agreement (SLA) with
   the ISP.  An SLA may explicitly or implicitly specify a Traffic
   Conditioning Agreement (TCA) which defines classifier rules as well
   as metering, marking, discarding, and shaping rules.

   Packets are classified, and possibly policed and shaped at the
   ingress to a Diffserv network.  When a packet traverses the boundary
   between different Diffserv domains, the DS field of the packet may be
   re-marked according to existing agreements between the domains.

   Differentiated Services allows only a finite number of service
   classes to be indicated by the DS field.  The main advantage of the
   Diffserv approach relative to the Intserv model is scalability.
   Resources are allocated on a per-class basis and the amount of state
   information is proportional to the number of classes rather than to
   the number of application flows.

   It should be obvious from the previous discussion that the Diffserv
   model essentially deals with traffic management issues on a per hop
   basis.  The Diffserv control model consists of a collection of
   micro-TE control mechanisms.  Other traffic engineering capabilities,
   such as capacity management (including routing control), are also
   required in order to deliver acceptable service quality in Diffserv
   networks.  The concept of Per Domain Behaviors has been introduced to
   better capture the notion of differentiated services across a
   complete domain [RFC-3086].

4.5.4 MPLS

   MPLS is an advanced forwarding scheme which also includes extensions
   to conventional IP control plane protocols.  MPLS extends the
   Internet routing model and enhances packet forwarding and path
   control [RFC-3031].

   At the ingress to an MPLS domain, label switching routers (LSRs)
   classify IP packets into forwarding equivalence classes (FECs) based
   on a variety of factors, including, for example, a combination of the
   information carried in the IP header of the packets and the local
   routing information maintained by the LSRs.  An MPLS label is then
   prepended to each packet according to its forwarding equivalence
   class.  In a non-ATM/FR environment, the label is 32 bits long and
   contains a 20-bit label field, a 3-bit experimental field (formerly
   known as Class-of-Service or CoS field), a 1-bit label stack
   indicator and an 8-bit TTL field.  In an ATM (FR) environment, the
   label consists of information encoded in the VCI/VPI (DLCI) field.
   An MPLS capable router (an LSR) examines the label and possibly the
   experimental field and uses this information to make packet
   forwarding decisions.
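
   The 32-bit label stack entry described above can be illustrated with
   the following Python sketch (the helper names are arbitrary and the
   code is not part of any MPLS specification):

      def encode_label_entry(label, exp, bottom_of_stack, ttl):
          # 20-bit label | 3-bit experimental | 1-bit stack flag | 8-bit TTL
          word = ((label & 0xFFFFF) << 12) | ((exp & 0x7) << 9) | \
                 ((bottom_of_stack & 0x1) << 8) | (ttl & 0xFF)
          return word.to_bytes(4, "big")

      def decode_label_entry(data):
          word = int.from_bytes(data[:4], "big")
          return {"label": word >> 12,
                  "exp": (word >> 9) & 0x7,
                  "bottom_of_stack": (word >> 8) & 0x1,
                  "ttl": word & 0xFF}

      entry = encode_label_entry(label=16, exp=0, bottom_of_stack=1, ttl=64)
      assert decode_label_entry(entry)["label"] == 16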

   An LSR makes forwarding decisions by using the label prepended to
   packets as the index into a local next hop label forwarding entry
   (NHLFE).  The packet is then processed as specified in the NHLFE.
   The incoming label may be replaced by an outgoing label, and the
   packet may be switched to the next LSR.  This label-switching process
   is very similar to the label (VCI/VPI) swapping process in ATM
   networks.  Before a packet leaves an MPLS domain, its MPLS label may
   be removed.  A Label Switched Path (LSP) is the path between an
   ingress LSR and an egress LSR through which a labeled packet
   traverses.  The path of an explicit LSP is defined at the originating
   (ingress) node of the LSP.  MPLS can use a signaling protocol such as
   RSVP or LDP to set up LSPs.
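
   The label-swap step can be sketched as a simple table lookup
   (illustrative only; the dictionary-based NHLFE below is an
   assumption made for this example):

      def forward_labeled_packet(nhlfe, packet):
          # The incoming label indexes the NHLFE, which yields the
          # outgoing label (None meaning "pop") and the next hop.
          entry = nhlfe.get(packet["label"])
          if entry is None:
              return None                      # unknown label: discard
          out_label, next_hop = entry
          if out_label is None:
              del packet["label"]              # label removed at exit
          else:
              packet["label"] = out_label      # label swap
          packet["ttl"] -= 1
          return next_hop

      nhlfe = {16: (17, "LSR-B"), 17: (None, "egress-router")}
      packet = {"label": 16, "ttl": 64}
      print(forward_labeled_packet(nhlfe, packet))   # -> LSR-B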

   MPLS is a very powerful technology for Internet traffic engineering
   because it supports explicit LSPs which allow constraint-based
   routing to be implemented efficiently in IP networks [AWD2].  The
   requirements for traffic engineering over MPLS are described in
   [RFC-2702].  Extensions to RSVP to support instantiation of explicit
   LSPs are discussed in [RFC-3209].  Extensions to LDP, known as CR-LDP,
   to support explicit LSPs are presented in [JAM].

4.5.5 IP Performance Metrics

   The IETF IP Performance Metrics (IPPM) working group has been
   developing a set of standard metrics that can be used to monitor the
   quality, performance, and reliability of Internet services.  These
   metrics can be applied by network operators, end-users, and
   independent testing groups to provide users and service providers
   with a common understanding of the performance and reliability of the
   Internet component 'clouds' they use/provide [RFC-2330].  The
   criteria for performance metrics developed by the IPPM WG are
   described in [RFC-2330].  Examples of performance metrics include
   one-way packet
   loss [RFC-2680], one-way delay [RFC-2679], and connectivity measures
   between two nodes [RFC-2678].  Other metrics include second-order
   measures of packet loss and delay.

   Some of the performance metrics specified by the IPPM WG are useful
   for specifying Service Level Agreements (SLAs).  SLAs are sets of
   service level objectives negotiated between users and service
   providers, wherein each objective is a combination of one or more
   performance metrics, possibly subject to certain constraints.

4.5.6 Flow Measurement

   The IETF Real Time Flow Measurement (RTFM) working group has produced
   an architecture document defining a method to specify traffic flows
   as well as a number of components for flow measurement (meters, meter
   readers, manager) [RFC-2722].  A flow measurement system enables
   network traffic flows to be measured and analyzed at the flow level
   for a variety of purposes.  As noted in RFC 2722, a flow measurement
   system can be very useful in the following contexts: (1)
   understanding the behavior of existing networks, (2) planning for
   network development and expansion, (3) quantification of network
   performance, (4) verifying the quality of network service, and (5)
   attribution of network usage to users.

   A flow measurement system consists of meters, meter readers, and
   managers.  A meter observes packets passing through a measurement
   point, classifies them into certain groups, accumulates certain usage
   data (such as the number of packets and bytes for each group), and
   stores the usage data in a flow table.  A group may represent a user
   application, a host, a network, a group of networks, etc.  A meter
   reader gathers usage data from various meters so it can be made
   available for analysis.  A manager is responsible for configuring and
   controlling meters and meter readers.  The instructions received by a
   meter from a manager include flow specification, meter control
   parameters, and sampling techniques.  The instructions received by a
   meter reader from a manager include the address of the meter whose
   data is to be collected, the frequency of data collection, and the
   types of flows to be collected.
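
   A minimal meter along the lines described above might look as
   follows (illustrative only; the grouping key and attribute names are
   assumptions made for this example):

      from collections import defaultdict

      class FlowMeter:
          def __init__(self):
              self.flow_table = defaultdict(
                  lambda: {"packets": 0, "bytes": 0})

          def observe(self, src, dst, length):
              # Classify the packet into a group and accumulate usage.
              group = (src, dst)
              self.flow_table[group]["packets"] += 1
              self.flow_table[group]["bytes"] += length

          def read(self):
              # What a meter reader would periodically collect.
              return dict(self.flow_table)

      meter = FlowMeter()
      meter.observe("10.0.0.1", "192.0.2.7", 1500)
      meter.observe("10.0.0.1", "192.0.2.7", 40)
      print(meter.read())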

4.5.7 Endpoint Congestion Management

   [RFC-3124] is intended to provide a set of congestion control
   mechanisms that transport protocols can use.  It is also intended to
   develop mechanisms for unifying congestion control across a subset of
   an endpoint's active unicast connections (called a congestion group).
   A congestion manager continuously monitors the state of the path for
   each congestion group under its control.  The manager uses that
   information to instruct a scheduler on how to partition bandwidth
   among the connections of that congestion group.

4.6 Overview of ITU Activities Related to Traffic Engineering

   This section provides an overview of prior work within the ITU-T
   pertaining to traffic engineering in traditional telecommunications
   networks.

   ITU-T Recommendations E.600 [ITU-E600], E.701 [ITU-E701], and E.801
   [ITU-E801] address traffic engineering issues in traditional
   telecommunications networks.  Recommendation E.600 provides a
   vocabulary for describing traffic engineering concepts, while E.701
   defines reference connections, Grade of Service (GoS), and traffic
   parameters for ISDN.  Recommendation E.701 uses the concept of a
   reference connection to identify representative cases of different
   types of connections without describing the specifics of their actual
   realizations by different physical means.  As defined in
   Recommendation E.600, "a connection is an association of resources
   providing means for communication between two or more devices in, or
   attached to, a telecommunication network."  Also, E.600 defines "a
   resource as any set of physically or conceptually identifiable
   entities within a telecommunication network, the use of which can be
   unambiguously determined" [ITU-E600].  There can be different types
   of connections as the number and types of resources in a connection
   may vary.

   Typically, different network segments are involved in the path of a
   connection.  For example, a connection may be local, national, or
   international.  The purposes of reference connections are to clarify
   and specify traffic performance issues at various interfaces between
   different network domains.  Each domain may consist of one or more
   service provider networks.

   Reference connections provide a basis to define grade of service
   (GoS) parameters related to traffic engineering within the ITU-T
   framework.  As defined in E.600, "GoS refers to a number of traffic
   engineering variables which are used to provide a measure of the
   adequacy of a group of resources under specified conditions."  These
   GoS variables may be probability of loss, dial tone delay, etc.
   They are essential for network internal design and operation as well
   as for component performance specification.

   GoS is different from quality of service (QoS) in the ITU framework.
   QoS is the performance perceivable by a telecommunication service
   user and expresses the user's degree of satisfaction of the service.
   QoS parameters focus on performance aspects observable at the service
   access points and network interfaces, rather than their causes within
   the network.  GoS, on the other hand, is a set of network oriented
   measures which characterize the adequacy of a group of resources
   under specified conditions.  For a network to be effective in serving
   its users, the values of both GoS and QoS parameters must be related,
   with GoS parameters typically making a major contribution to the QoS.

   Recommendation E.600 stipulates that a set of GoS parameters must be
   selected and defined on an end-to-end basis for each major service
   category provided by a network to assist the network provider with
   improving efficiency and effectiveness of the network.  Based on a
   selected set of reference connections, suitable target values are
   assigned to the selected GoS parameters under normal and high load
   conditions.  These end-to-end GoS target values are then apportioned
   to individual resource components of the reference connections for
   dimensioning purposes.

4.7 Content Distribution

   The Internet is dominated by client-server interactions, especially
   Web traffic (in the future, more sophisticated media servers may
   become dominant).  The location and performance of major information
   servers has a significant impact on the traffic patterns within the
   Internet as well as on the perception of service quality by end
   users.

   A number of dynamic load balancing techniques have been devised to
   improve the performance of replicated information servers.  These
   techniques can cause spatial traffic characteristics to become more
   dynamic in the Internet because information servers can be
   dynamically picked based upon the location of the clients, the
   location of the servers, the relative utilization of the servers, the
   relative performance of different networks, and the relative
   performance of different parts of a network.  This process of
   assignment of distributed servers to clients is called Traffic
   Directing.  It functions at the application layer.
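
   As an informal illustration of such a Traffic Directing decision
   (the scoring rule and data layout are invented for this example), a
   replica might be chosen by combining an estimate of network latency
   toward the client with the current utilization of each server:

      def pick_server(servers, client_region):
          # Lower score is better; the weighting of utilization is an
          # arbitrary choice for this sketch.
          def score(server):
              return (server["latency_ms"][client_region]
                      + 100 * server["utilization"])
          return min(servers, key=score)

      servers = [
          {"name": "replica-eu", "latency_ms": {"eu": 15, "us": 90},
           "utilization": 0.7},
          {"name": "replica-us", "latency_ms": {"eu": 95, "us": 20},
           "utilization": 0.2},
      ]
      print(pick_server(servers, "eu")["name"])   # -> replica-eu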

   Traffic Directing schemes that allocate servers in multiple
   geographically dispersed locations to clients may require empirical
   network performance statistics to make more effective decisions.  In
   the future, network measurement systems may need to provide this type
   of information.  The exact parameters needed are not yet defined.

   When congestion exists in the network, Traffic Directing and Traffic
   Engineering systems should act in a coordinated manner.  This topic
   is for further study.

   The issues related to location and replication of information
   servers, particularly web servers, are important for Internet traffic
   engineering because these servers contribute a substantial proportion
   of Internet traffic.

5.0 Taxonomy of Traffic Engineering Systems

   This section presents a short taxonomy of traffic engineering
   systems.  A taxonomy of traffic engineering systems can be
   constructed based on traffic engineering styles and views as listed
   below:

      - Time-dependent vs State-dependent vs Event-dependent
      - Offline vs Online
      - Centralized vs Distributed
      - Local vs Global Information
      - Prescriptive vs Descriptive
      - Open Loop vs Closed Loop
      - Tactical vs Strategic

   These classification systems are described in greater detail in the
   following subsections of this document.

5.1 Time-Dependent Versus State-Dependent Versus Event-Dependent

   Traffic engineering methodologies can be classified as time-
   dependent, state-dependent, or event-dependent.  All TE schemes
   are considered to be dynamic in this document.  Static TE implies
   that no traffic engineering methodology or algorithm is being
   applied.

   In time-dependent TE, historical information based on periodic
   variations in traffic (such as time of day) is used to pre-program
   routing plans and other TE control mechanisms.  Additionally,
   customer subscription or traffic projection may be used.  Pre-
   programmed routing plans typically change on a relatively long time
   scale (e.g., diurnal).  Time-dependent algorithms do not attempt to
   adapt to random variations in traffic or changing network conditions.
   An example of a time-dependent algorithm is a global centralized
   optimizer where the input to the system is a traffic matrix and
   multi-class QoS requirements as described in [MR99].

   State-dependent TE adapts the routing plans for packets based on the
   current state of the network.  The current state of the network
   provides additional information on variations in actual traffic
   (i.e., perturbations from regular variations) that could not be
   predicted using historical information.  Constraint-based routing is
   an example of state-dependent TE operating in a relatively long time
   scale.  An example operating in a relatively short time scale is a
   load-balancing algorithm described in [MATE].

   The state of the network can be based on parameters such as
   utilization, packet delay, packet loss, etc.  These parameters can be
   obtained in several ways.  For example, each router may flood these
   parameters periodically or by means of some kind of trigger to other
   routers.  Another approach is for a particular router performing
   adaptive TE to send probe packets along a path to gather the state of
   that path.  Still another approach is for a management system to
   gather relevant information from network elements.

   Expeditious and accurate gathering and distribution of state
   information is critical for adaptive TE due to the dynamic nature of
   network conditions.  State-dependent algorithms may be applied to
   increase network efficiency and resilience.  Time-dependent
   algorithms are more suitable for predictable traffic variations.  On
   the other hand, state-dependent algorithms are more suitable for
   adapting to the prevailing network state.

   Event-dependent TE methods can also be used for TE path selection.
   Event-dependent TE methods are distinct from time-dependent and
   state-dependent TE methods in the manner in which paths are selected.
   These algorithms are adaptive and distributed in nature and typically
   use learning models to find good paths for TE in a network.  While
   state-dependent TE models typically use available-link-bandwidth
   (ALB) flooding for TE path selection, event-dependent TE methods do
   not require ALB flooding.  Rather, event-dependent TE methods
   typically search out capacity by learning models, as in the success-
   to-the-top (STT) method.  ALB flooding can be resource intensive,
   since it requires link bandwidth to carry LSAs and processor capacity
   to process them, and the resulting overhead can limit area/autonomous
   system (AS) size.  Modeling results suggest that event-dependent TE
   methods could
   lead to a reduction in ALB flooding overhead without loss of network
   throughput performance [ASH3].

5.2 Offline Versus Online

   Traffic engineering requires the computation of routing plans.  The
   computation may be performed offline or online.  The computation can
   be done offline for scenarios where routing plans need not be
   executed in real-time.  For example, routing plans based on forecast
   information may be computed offline.  Typically, offline
   computation is also used to perform extensive searches on multi-
   dimensional solution spaces.

   Online computation is required when the routing plans must adapt to
   changing network conditions as in state-dependent algorithms.  Unlike
   offline computation (which can be computationally demanding), online
   computation is geared toward relatively simple and fast calculations
   to select routes, fine-tune the allocations of resources, and perform
   load balancing.

5.3 Centralized Versus Distributed

   Centralized control has a central authority which determines routing
   plans and perhaps other TE control parameters on behalf of each
   router.  The central authority collects the network-state information
   from all routers periodically and returns the routing information to
   the routers.  The routing update cycle is a critical parameter
   directly impacting the performance of the network being controlled.
   Centralized control may need high processing power and high bandwidth
   control channels.

   Distributed control determines route selection by each router
   autonomously based on the router's view of the state of the network.
   The network state information may be obtained by the router using a
   probing method or distributed by other routers on a periodic basis
   using link state advertisements.  Network state information may also
   be disseminated under exceptional conditions.

5.4 Local Versus Global

   Traffic engineering algorithms may require local or global network-
   state information.

   Local information pertains to the state of a portion of the domain.
   Examples include the bandwidth and packet loss rate of a particular
   path.  Local state information may be sufficient for certain
   instances of distributed TE control.

   Global information pertains to the state of the entire domain
   undergoing traffic engineering.  Examples include a global traffic
   matrix and loading information on each link throughout the domain of
   interest.  Global state information is typically required with
   centralized control.  Distributed TE systems may also need global
   information in some cases.

5.5 Prescriptive Versus Descriptive

   TE systems may also be classified as prescriptive or descriptive.

   Prescriptive traffic engineering evaluates alternatives and
   recommends a course of action.  Prescriptive traffic engineering can
   be further categorized as either corrective or perfective.
   Corrective TE prescribes a course of action to address an existing or
   predicted anomaly.  Perfective TE prescribes a course of action to
   evolve and improve network performance even when no anomalies are
   evident.

   Descriptive traffic engineering, on the other hand, characterizes the
   state of the network and assesses the impact of various policies
   without recommending any particular course of action.

5.6 Open-Loop Versus Closed-Loop

   Open-loop traffic engineering control is where control action does
   not use feedback information from the current network state.  The
   control action may use its own local information for accounting
   purposes, however.

   Closed-loop traffic engineering control is where control action
   utilizes feedback information from the network state.  The feedback
   information may be in the form of historical information or current
   measurement.

5.7 Tactical Versus Strategic

   Tactical traffic engineering aims to address specific performance
   problems (such as hot-spots) that occur in the network from a
   tactical perspective, without consideration of overall strategic
   imperatives.  Without proper planning and insights, tactical TE tends
   to be ad hoc in nature.

   Strategic traffic engineering approaches the TE problem from a more
   organized and systematic perspective, taking into consideration the
   immediate and longer term consequences of specific policies and
   actions.
