7. Application of the Architecture to the User-Network Interface
The User-Network Interface (UNI) is an important architectural
concept in many implementations and deployments of client-server
networks, especially those where the client and server network have
different technologies. The UNI is described in [G.8080], and the
GMPLS approach to the UNI is documented in [RFC4208]. Other
GMPLS-related documents describe the application of GMPLS to specific
UNI scenarios: for example, [RFC6005] describes how GMPLS can support
a UNI that provides access to Ethernet services.
Figure 1 of [RFC6005] is reproduced here as Figure 22. It shows the
Ethernet UNI reference model, and that figure can serve as an example
for all similar UNIs. In this case, the UNI is an interface between
client network edge nodes and the server network. It should be noted
that neither the client network nor the server network need be an
Ethernet switching network.
There are three network layers in this model: the client network, the
"Ethernet service network", and the server network. The so-called
Ethernet service network consists of links comprising the UNI links
and the tunnels across the server network, and nodes comprising the
client network edge nodes and various server network nodes. That is,
the Ethernet service network is equivalent to the abstraction layer
network, with the UNI links being the physical links between the
client and server networks, the client edge nodes taking the role of
UNI Client-side (UNI-C) nodes, and the server edge nodes acting as
the UNI Network-side (UNI-N) nodes.
Network +----------+ +-----------+ Network
-------------+ | | | | +-------------
+----+ | | +-----+ | | +-----+ | | +----+
------+ | | | | | | | | | | | | +------
------+ EN +-+-----+--+ CN +-+----+--+ CN +--+-----+-+ EN +------
| | | +--+--| +-+-+ | | +--+-----+-+ |
+----+ | | | +--+--+ | | | +--+--+ | | +----+
| | | | | | | | | |
-------------+ | | | | | | | | +-------------
| | | | | | | |
-------------+ | | | | | | | | +-------------
| | | +--+--+ | | | +--+--+ | |
+----+ | | | | | | +--+--+ | | | +----+
------+ +-+--+ | | CN +-+----+--+ CN | | | | +------
------+ EN +-+-----+--+ | | | | +--+-----+-+ EN +------
| | | | +-----+ | | +-----+ | | | |
+----+ | | | | | | +----+
| +----------+ +-----------+ |
-------------+ Server Networks +-------------
Client UNI UNI Client
Network <-----> <-----> Network
Scope of This Document
Legend: EN - Client Network Edge Node
CN - Server Network (Core) Node
Figure 22: Ethernet UNI Reference Model
An issue that is often raised relates to how a dual-homed client
network edge node (such as that shown at the bottom left-hand corner
of Figure 22) can determine how it connects across
the UNI. This can be particularly important when reachability across
the server network is limited or when two diverse paths are desired
(for example, to provide protection). However, in the model
described in this document, the edge node (the UNI-C node) is part of
the abstraction layer network and can see sufficient topology
information to make these decisions. If the approach introduced in
this document is used to model the UNI as described in this section,
there is no need to enhance the signaling protocols at the GMPLS UNI
nor to add routing exchanges at the UNI.
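The dual-homing decision described above reduces to an ordinary routing computation over the abstraction layer topology. The sketch below is purely illustrative: the topology, node names (EN/CN labels loosely following Figure 22), and the greedy two-step disjoint-path method are all assumptions, not part of any defined protocol behavior.

```python
from collections import deque

# Hypothetical abstraction layer topology as seen by a dual-homed
# edge node EN1: two UNI links into the server network plus abstract
# links across it.
TOPOLOGY = {
    "EN1": ["CN-A", "CN-B"],      # dual-homed UNI links
    "CN-A": ["EN1", "CN-C"],
    "CN-B": ["EN1", "CN-D"],
    "CN-C": ["CN-A", "EN2"],
    "CN-D": ["CN-B", "EN2"],
    "EN2": ["CN-C", "CN-D"],
}

def shortest_path(graph, src, dst, banned=frozenset()):
    """BFS shortest path, skipping any link listed in 'banned'."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and frozenset((path[-1], nxt)) not in banned:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def two_diverse_paths(graph, src, dst):
    """Greedy link-disjoint pair: find one path, ban its links, repeat."""
    first = shortest_path(graph, src, dst)
    used = {frozenset(pair) for pair in zip(first, first[1:])}
    return first, shortest_path(graph, src, dst, banned=used)
```

Because the UNI-C node sees the abstraction layer topology, it can run exactly this kind of computation locally, which is why no new UNI routing exchange is needed.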
8. Application of the Architecture to L3VPN Multi-AS Environments
Serving Layer 3 VPNs (L3VPNs) across a multi-AS or multi-operator
environment currently provides a significant planning challenge.
Figure 6 shows the general case of the problem that needs to be
solved. This section shows how the abstraction layer network can
address this problem.
In the VPN architecture, the CE nodes are the client network edge
nodes, and the PE nodes are the server network edge nodes. The
abstraction layer network is made up of the CE nodes, the CE-PE
links, the PE nodes, and PE-PE tunnels that are the abstract links.
In the multi-AS or multi-operator case, the abstraction layer network
also includes the PEs (maybe Autonomous System Border Routers
(ASBRs)) at the edges of the multiple server networks, and the PE-PE
(maybe inter-AS) links. This gives rise to the architecture shown in
Figure 23.
The policy for adding abstract links to the abstraction layer network
will be driven substantially by the needs of the VPN. Thus, when a
new VPN site is added and the existing abstraction layer network
cannot support the required connectivity, a new abstract link will be
created out of the underlying network.
VPN Site : : VPN Site
-- -- : : -- --
|C1|-|CE| : : |CE|-|C2|
-- | | : : | | --
| | : : | |
| | : : | |
| | : : | |
| | : -- -- -- -- : | |
| |----|PE|=========|PE|---|PE|=====|PE|----| |
-- : | | | | | | | | : --
........... | | | | | | | | ............
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | - - | | | | - | |
| |-|P|-|P|-| | | |-|P|-| |
-- - - -- -- - --
Figure 23: The Abstraction Layer Network for a Multi-AS VPN
It is important to note that each VPN instance can have a separate
abstraction layer network. This means that the server network
resources can be partitioned and that traffic can be kept separate.
This can be achieved even when VPN sites from different VPNs connect
at the same PE. Alternatively, multiple VPNs can share the same
abstraction layer network if that is operationally preferable.
Lastly, just as for the UNI discussed in Section 7, the issue of
dual-homing of VPN sites is a function of the abstraction layer
network and so is just a normal routing problem in that network.
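The policy described above (create a new abstract link when the existing abstraction layer network cannot support a new VPN site's connectivity) can be sketched roughly as follows. The data model, the CE/PE naming convention, and the request_abstract_link hook are all invented for illustration; the hook stands in for whatever interaction with the server network actually realizes the link.

```python
def reachable(graph, src):
    """Set of nodes reachable from src in the per-VPN abstraction graph."""
    stack, seen = [src], {src}
    while stack:
        for nxt in graph.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def add_site(vpn_graph, new_ce, attach_pe, request_abstract_link):
    """Attach a new site's CE; plug any connectivity gap per policy."""
    vpn_graph.setdefault(new_ce, []).append(attach_pe)
    vpn_graph.setdefault(attach_pe, []).append(new_ce)
    # If an existing CE is unreachable, ask for a new abstract link.
    for ce in [n for n in vpn_graph if n.startswith("CE") and n != new_ce]:
        if ce not in reachable(vpn_graph, new_ce):
            a, b = request_abstract_link(new_ce, ce)
            vpn_graph.setdefault(a, []).append(b)
            vpn_graph.setdefault(b, []).append(a)
```

Since each VPN instance can have its own abstraction layer network, each VPN would keep a separate graph of this kind.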
9. Scoping Future Work
This section is provided to help guide the work on this problem. The
overarching view is that it is important to limit and focus the work
on those things that are core and necessary to achieve the main
function, and to not attempt to add unnecessary features or to
over-complicate the architecture or the solution by attempting to
address marginal use cases or corner cases. This guidance is
non-normative for this architecture description.
9.1. Limiting Scope to Only Part of the Internet
The scope of the use cases and problem statement in this document is
limited to "some small set of interconnected domains." In
particular, it is not the objective of this work to turn the whole
Internet into one large, interconnected TE network.
9.2. Working with "Related" Domains
Further to the previous point, the intention of this work is to solve
the TE interconnectivity only for "related" domains. Such domains
may be under common administrative operation (such as IGP areas
within a single AS, or ASes belonging to a single operator) or may
have a direct commercial arrangement for the sharing of TE
information to provide specific services. Thus, in both cases, there
is a strong opportunity for the application of policy.
9.3. Not Finding Optimal Paths in All Situations
As has been well described in this document, abstraction necessarily
involves compromises and removal of information. That means that it
is not possible to guarantee that an end-to-end path over
interconnected TE domains follows the absolute optimal (by any
measure of optimality) path. This is taken as understood, and future
work should not attempt to achieve such paths, which can only be
found by a full examination of all network information across all
connected networks.
9.4. Sanity and Scaling
All of the above points play into a final observation. This work is
intended to "bite off" a small problem for some relatively simple use
cases as described in Section 2. It is not intended that this work
will be immediately (or even soon) extended to cover many large
interconnected domains. Obviously, the solution should, as far as
possible, be designed to be extensible and scalable; however, it is
also reasonable to make trade-offs in favor of utility and
simplicity.
10. Manageability Considerations
Manageability should not be a significant additional burden. Each
layer in the network model can, and should, be managed independently.
That is, each client network will run its own management systems and
tools to manage the nodes and links in the client network: each
client network link that uses an abstract link will still be
available for management in the client network as any other link.
Similarly, each server network will run its own management systems
and tools to manage the nodes and links in that network just as it
does today.
Three issues remain for consideration:
- How is the abstraction layer network managed?
- How is the interface between the client network and the
abstraction layer network managed?
- How is the interface between the abstraction layer network and the
server network managed?
10.1. Managing the Abstraction Layer Network
Management of the abstraction layer network differs from the client
and server networks because not all of the links that are visible in
the TED are real links. That is, it is not possible to run
Operations, Administration, and Maintenance (OAM) on the links that
constitute the potential of a link.
Other than that, however, the management of the abstraction layer
network should be essentially the same. Routing and signaling
protocols can be run in the abstraction layer (using out-of-band
channels for links that have not yet been established), and a
centralized TED can be constructed and used to examine the
availability and status of the links and nodes in the network.
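As an illustration of the TED difference noted above, an abstraction layer TED entry might carry an explicit state so that OAM is only attempted on links that have actually been established. This is a hypothetical data model, not a defined encoding:

```python
from dataclasses import dataclass

@dataclass
class AbstractLink:
    """Sketch of an abstraction layer TED entry (illustrative fields)."""
    ends: tuple
    bandwidth: int
    state: str = "potential"   # "potential" or "established"

    def oam_capable(self):
        # OAM cannot run on a link that is only the potential of a link.
        return self.state == "established"

ted = [
    AbstractLink(("PE1", "PE2"), 10, "established"),
    AbstractLink(("PE1", "PE3"), 40),   # offered, not yet set up
]
```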
Note that different deployment models will place the "ownership" of
the abstraction layer network differently. In some cases, the
abstraction layer network will be constructed by the operator of the
server network and run by that operator as a service for one or more
client networks. In other cases, one or more server networks will
present the potential of links to an abstraction layer network run by
the operator of the client network. And it is feasible that a
business model could be built where a third-party operator manages
the abstraction layer network, constructing it from the connectivity
available in multiple server networks and facilitating connectivity
for multiple client networks.
10.2. Managing Interactions of Abstraction Layer and Client Networks
The interaction between the client network and the abstraction layer
network is a management task. It might be automated (software
driven), or it might require manual intervention.
This is a two-way interaction:
- The client network can express the need for additional
connectivity. For example, the client network may try, and fail,
to find a path across the client network and may request
additional, specific connectivity (this is similar to the
situation with the Virtual Network Topology Manager (VNTM)
[RFC5623]). Alternatively, a more proactive client network
management system may monitor traffic demands (current and
predicted), network usage, and network "hot spots" and may request
changes in connectivity by both releasing unused links and
requesting new links.
- The abstraction layer network can make links available to the
client network or can withdraw them. These actions can be in
response to requests from the client network or can be driven by
processes within the abstraction layer (perhaps reorganizing the
use of server network resources). In any case, the presentation
of new links to the client network is heavily subject to policy,
since this is both operationally key to the success of this
architecture and the central plank of the commercial model
described in this document. Such policies belong to the operator
of the abstraction layer network and are expected to be fully
configurable.
Once the abstraction layer network has decided to make a link
available to the client network, it will install it at the link
end points (which are nodes in the client network) such that it
appears and can be advertised as a link in the client network.
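The gatekeeping role of policy in this exchange can be sketched as below. The request format and the policy and installation hooks are invented for illustration; the point is only that the operator's policy sits between a client request and the installation of a link at the client-network end points.

```python
def handle_client_request(request, policy, install_link):
    """Grant or deny a client connectivity request.

    policy: operator-supplied predicate over the request (the
            gatekeeper emphasized in the text above).
    install_link: installs the link at the client-network end points
            so it can be advertised in the client network.
    """
    if not policy(request):
        return False
    install_link(request["ends"], request["bandwidth"])
    return True
```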
In all cases, it is important that the operators of both networks are
able to track the requests and responses, and the operator of the
client network should be able to see which links in that network are
"real" physical links and which links are presented by the
abstraction layer network.
10.3. Managing Interactions of Abstraction Layer and Server Networks
The interactions between the abstraction layer network and the server
network are similar to those described in Section 10.2, but there is
a difference in that the server network is more likely to offer up
connectivity and the abstraction layer network is less likely to ask
for it.
That is, the server network will, according to policy that may
include commercial relationships, offer the abstraction layer network
a "set" of potential connectivity that the abstraction layer network
can treat as links. This server network policy will include:
- how much connectivity to offer
- what level of server network redundancy to include
- how to support the use of the abstract links
This process of offering links from the server network may include a
mechanism to indicate which links have been pre-established in the
server network and can include other properties, such as:
- link-level protection [RFC4202]
- SRLGs and MSRLGs (see Appendix B.1)
- mutual exclusivity (see Appendix B.2)
The abstraction layer network needs a mechanism to tell the server
network which links it is using. This mechanism could also include
the ability to request additional connectivity from the server
network, although it seems most likely that the server network will
already have presented as much connectivity as it is physically
capable of, subject to the constraints of policy.
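A rough sketch of the offer and usage-report exchange described above, with hypothetical field names for the link properties listed earlier (pre-establishment, MSRLGs, mutual exclusivity); no such encoding is defined by this document:

```python
# A server network's offer of potential connectivity, as a list of
# link records with illustrative property fields.
offer = [
    {"id": "S1-S3", "pre_established": True,  "msrlg": ["M1"]},
    {"id": "S1-S9", "pre_established": False, "msrlg": ["M1"],
     "mutually_exclusive_with": ["S7-S9"]},
]

def usage_report(offer, links_in_use):
    """Tell the server network which offered links are actually used."""
    return [o["id"] for o in offer if o["id"] in links_in_use]
```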
Finally, the server network will need to confirm the establishment of
connectivity, withdraw links if they are no longer feasible, and
report failures.
Again, it is important that the operators of both networks are able
to track the requests and responses, and the operator of the server
network should be able to see which links are in use.
11. Security Considerations
Security of signaling and routing protocols is usually administered
and achieved within the boundaries of a domain. Thus, and for
example, a domain with a GMPLS control plane [RFC3945] would apply
the security mechanisms and considerations that are appropriate to
GMPLS [RFC5920]. Furthermore, domain-based security relies strongly
on ensuring that control-plane messages are not allowed to enter the
domain from outside.
In this context, additional security considerations arising from this
document relate to the exchange of control-plane information between
domains. Messages are passed between domains using control-plane
protocols operating between peers that have predictable relationships
(for example, UNI-C to UNI-N, between BGP-LS speakers, or between
peer domains). Thus, the security that needs to be given additional
attention for inter-domain TE concentrates on authentication of
peers; assertion that messages have not been tampered with; and, to a
lesser extent, protecting the content of the messages from
inspection, since that might give away sensitive information about
the networks. The protocols described in Appendix A, which are
likely to provide the foundation for solutions to this architecture,
already include such protection and also can be run over protected
transports such as IPsec [RFC6071], Transport Layer Security (TLS)
[RFC5246], and the TCP Authentication Option (TCP-AO) [RFC5925].
It is worth noting that the control plane of the abstraction layer
network is likely to be out of band. That is, control-plane messages
will be exchanged over network links that are not the links to which
they apply. This models the facilities of GMPLS (but not of
MPLS-TE), and the security mechanisms can be applied to the protocols
operating in the out-of-band network.
12. Informative References
[G.8080] International Telecommunication Union, "Architecture for
the automatically switched optical network", ITU-T
Recommendation G.8080/Y.1304, February 2012,
[GMPLS-ENNI]
Bryskin, I., Ed., Doonan, W., Beeram, V., Ed., Drake, J.,
Ed., Grammel, G., Paul, M., Kunze, R., Armbruster, F.,
Margaria, C., Gonzalez de Dios, O., and D. Ceccarelli,
"Generalized Multiprotocol Label Switching (GMPLS)
External Network Network Interface (E-NNI): Virtual Link
Enhancements for the Overlay Model", Work in Progress,
draft-beeram-ccamp-gmpls-enni-03, September 2013.
[RFC2702] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and J.
McManus, "Requirements for Traffic Engineering Over MPLS",
RFC 2702, DOI 10.17487/RFC2702, September 1999,
[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
[RFC3473] Berger, L., Ed., "Generalized Multi-Protocol Label
Switching (GMPLS) Signaling Resource ReserVation
Protocol-Traffic Engineering (RSVP-TE) Extensions",
RFC 3473, DOI 10.17487/RFC3473, January 2003,
[RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering
(TE) Extensions to OSPF Version 2", RFC 3630,
DOI 10.17487/RFC3630, September 2003,
Appendix A. Existing Work
This appendix briefly summarizes relevant existing work that is used
to route TE paths across multiple domains. It is non-normative.
A.1. Per-Domain Path Computation
The mechanism for per-domain path establishment is described in
[RFC5152], and its applicability is discussed in [RFC4726]. In
summary, this mechanism assumes that each domain entry point is
responsible for computing the path across the domain but that details
regarding the path in the next domain are left to the next domain
entry point. The computation may be performed directly by the entry
point or may be delegated to a computation server.
This basic mode of operation can run into many of the issues
described alongside the use cases in Section 2. However, in practice
it can be used effectively, with a little operational guidance.
For example, RSVP-TE [RFC3209] includes the concept of a "loose hop"
in the explicit path that is signaled. This allows the original
request for an LSP to list the domains or even domain entry points to
include on the path. Thus, in the example in Figure 1, the source
can be told to use interconnection x2. Then, the source computes the
path from itself to x2 and initiates the signaling. When the
signaling message reaches Domain Z, the entry point to the domain
computes the remaining path to the destination and continues the
signaling.
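The loose-hop behavior can be sketched as a simple expansion process. The ERO representation here is a toy stand-in for the RSVP-TE encoding, and the expand() hook plays the role of per-domain path computation performed at each domain entry point:

```python
def expand_loose_hops(ero, expand):
    """Replace each (hop, 'loose') entry with a strict sub-path.

    ero: list of (hop, mode) pairs, mode is 'strict' or 'loose'.
    expand(start, hop): per-domain computation returning the strict
        hops from 'start' (exclusive) to 'hop' (inclusive).
    """
    out = []
    for hop, mode in ero:
        if mode == "loose":
            start = out[-1] if out else None
            out.extend(expand(start, hop))
        else:
            out.append(hop)
    return out
```

In the Figure 1 example, the source would list interconnection x2 and the destination as loose hops, and the entry point of Domain Z would expand the final loose hop.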
Another alternative suggested in [RFC5152] is to make TE routing
attempt to follow inter-domain IP routing. Thus, in the example
shown in Figure 2, the source would examine the BGP routing
information to determine the correct interconnection point for
forwarding IP packets and would use that to compute and then signal a
path for Domain A. Each domain in turn would apply the same approach
so that the path is progressively computed and signaled domain by
domain.
Although the per-domain approach has many issues and drawbacks in
terms of achieving optimal (or, indeed, any) paths, it has been the
mainstay of inter-domain LSP setup to date.
A.2. Crankback
Crankback addresses one of the main issues with per-domain path
computation: What happens when an initial path is selected that
cannot be completed toward the destination? For example, what
happens if, in Figure 2, the source attempts to route the path
through interconnection x2 but Domain C does not have the right TE
resources or connectivity to route the path further?
Crankback for MPLS-TE and GMPLS networks is described in [RFC4920]
and is based on a concept similar to the Acceptable Label Set
mechanism described for GMPLS signaling in [RFC3473]. When a node
(i.e., a domain entry point) is unable to compute a path further
across the domain, it returns an error message in the signaling
protocol that states where the blockage occurred (link identifier,
node identifier, domain identifier, etc.) and gives some clues about
what caused the blockage (bad choice of label, insufficient bandwidth
available, etc.). This information allows a previous computation
point to select an alternative path, or to aggregate crankback
information and return it upstream to a previous computation point.
Crankback is a very powerful mechanism and can be used to find an
end-to-end path in a multi-domain network if one exists.
On the other hand, crankback can be quite resource-intensive, as
signaling messages and path setup attempts may "wander around" in the
network, attempting to find the correct path for a long time. Since
(1) RSVP-TE signaling ties up network resources for partially
established LSPs, (2) network conditions may be in flux, and (3) most
particularly, LSP setup within well-known time limits is highly
desirable, crankback is not a popular mechanism.
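Stripped of signaling detail, crankback amounts to the retry loop below. attempt_setup is a purely illustrative stand-in for the signaling exchange, returning either success or the blockage information carried in the error message:

```python
def setup_with_crankback(candidates, attempt_setup):
    """Try candidate interconnection points in order.

    Returns (chosen, errors): the first candidate for which setup
    succeeded (or None), plus the accumulated blockage reports that a
    previous computation point could use to pick alternatives.
    """
    errors = []
    for interconnect in candidates:
        ok, blockage = attempt_setup(interconnect)
        if ok:
            return interconnect, errors
        errors.append(blockage)
    return None, errors
```

The cost noted in the text is visible here: each failed attempt ties up signaling and partially established state before the next candidate is tried.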
Furthermore, even if crankback can always find an end-to-end path, it
does not guarantee that the optimal path will be found. (Note that
there have been some academic proposals to use signaling-like
techniques to explore the whole network in order to find optimal
paths, but these tend to place even greater burdens on network
nodes.)
A.3. Path Computation Element
The Path Computation Element (PCE) is introduced in [RFC4655]. It is
an abstract functional entity that computes paths. Thus, in the
example of per-domain path computation (see Appendix A.1), both the
source node and each domain entry point are PCEs. On the other hand,
the PCE can also be realized as a separate network element (a server)
to which computation requests can be sent using the Path Computation
Element Communication Protocol (PCEP) [RFC5440].
Each PCE is responsible for computations within a domain and has
visibility of the attributes within that domain. This immediately
enables per-domain path computation with the opportunity to offload
complex, CPU-intensive, or memory-intensive computation functions
from routers in the network. But the use of PCEs in this way
does not solve any of the problems articulated in Appendices A.1
and A.2.
Two significant mechanisms for cooperation between PCEs have been
described. These mechanisms are intended to specifically address the
problems of computing optimal end-to-end paths in multi-domain
environments:
- The Backward-Recursive PCE-Based Computation (BRPC) mechanism
[RFC5441] involves cooperation between the set of PCEs along the
inter-domain path. Each one computes the possible paths from the
domain entry point (or source node) to the domain exit point (or
destination node) and shares the information with its upstream
neighbor PCE, which is able to build a tree of possible paths
rooted at the destination. The PCE in the source domain can
select the optimal path.
BRPC is sometimes described as "crankback at computation time".
It is capable of determining the optimal path in a multi-domain
network but depends on knowing the domain that contains the
destination node. Furthermore, the mechanism can become quite
complicated and can involve a lot of data in a mesh of
interconnected domains. Thus, BRPC is most often proposed for a
simple mesh of domains and specifically for a path that will cross
a known sequence of domains, but where there may be a choice of
domain interconnections. In this way, BRPC would only be applied
to Figure 2 if a decision had been made (externally) to traverse
Domain C rather than Domain D (notwithstanding that it could
functionally be used to make that choice itself), but BRPC could
be used very effectively to select between interconnections x1 and
x2 in Figure 1.
- The Hierarchical PCE (H-PCE) [RFC6805] mechanism offers a parent
PCE that is responsible for navigating a path across the domain
mesh and for coordinating intra-domain computations by the child
PCEs responsible for each domain. This approach makes computing
an end-to-end path across a mesh of domains far more tractable.
However, it still leaves unanswered the issue of determining the
location of the destination (i.e., discovering the destination
domain) as described in Section 2.1. Furthermore, it raises the
question of who operates the parent PCE, especially in networks
where the domains are under different administrative and commercial
control.
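The BRPC backward recursion described in the first bullet above can be sketched for a known domain sequence as follows. Each per-domain cost map stands in for a child PCE's intra-domain computation, and all topology data is invented; the real mechanism builds a tree of paths, whereas this sketch tracks only costs.

```python
def brpc(domain_chain, dst):
    """Backward-recursive cost computation along a domain sequence.

    domain_chain: list (source domain first) of dicts mapping
        (entry_node, exit_node) -> intra-domain path cost, where
        boundary nodes are shared between adjacent domains.
    Returns a map from each source-side candidate node to its best
    cost to dst via this chain (the tree rooted at the destination).
    """
    tree = {dst: 0}
    for domain in reversed(domain_chain):     # destination domain first
        new_tree = {}
        for (entry, exit_), cost in domain.items():
            if exit_ in tree:
                best = cost + tree[exit_]
                if best < new_tree.get(entry, float("inf")):
                    new_tree[entry] = best
        tree = new_tree
    return tree
```

This mirrors the choice between interconnections x1 and x2 in Figure 1: the recursion naturally selects whichever boundary node yields the cheaper end-to-end cost.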
It should also be noted that [RFC5623] discusses how PCEs are used in
a multi-layer network with coordination between PCEs operating at
each network layer. Further issues and considerations regarding the
use of PCEs can be found in [RFC7399].
A.4. GMPLS UNI and Overlay Networks
[RFC4208] defines the GMPLS User-Network Interface (UNI) to present a
routing boundary between an overlay (client) network and the server
network, i.e., the client-server interface. In the client network,
the nodes connected directly to the server network are known as edge
nodes, while the nodes in the server network are called core nodes.
In the overlay model defined by [RFC4208], the core nodes act as a
closed system and the edge nodes do not participate in the routing
protocol instance that runs among the core nodes. Thus, the UNI
allows access to, and limited control of, the core nodes by edge
nodes that are unaware of the topology of the core nodes. This
respects the operational and layer boundaries while scaling the
network.
[RFC4208] does not define any routing protocol extension for the
interaction between core and edge nodes but allows for the exchange
of reachability information between them. In terms of a VPN, the
client network can be considered as the customer network comprised of
a number of disjoint sites, and the edge nodes match the VPN CE
nodes. Similarly, the provider network in the VPN model is
equivalent to the server network.
[RFC4208] is, therefore, a signaling-only solution that allows edge
nodes to request connectivity across the server network and leaves
the server network to select the paths for the LSPs as they traverse
the core nodes (setting up hierarchical LSPs if necessitated by the
technology). This solution is supplemented by a number of signaling
extensions, such as [RFC4874], [RFC5553], [RSVP-TE-EXCL],
[RSVP-TE-EXT], and [RSVP-TE-METRIC], to give the edge node more
control over the path within the server network and by allowing the
edge nodes to supply additional constraints on the path used in the
server network. Nevertheless, in this UNI/overlay model, the edge
node has limited information regarding precisely what LSPs could be
set up across the server network and what TE services (diverse routes
for end-to-end protection, end-to-end bandwidth, etc.) can be
supported.
A.5. Layer 1 VPN
A Layer 1 VPN (L1VPN) is a service offered by a Layer 1 server
network to provide Layer 1 connectivity (Time-Division Multiplexing
(TDM), Lambda Switch Capable (LSC)) between two or more customer
networks in an overlay service model [RFC4847].
As in the UNI case, the customer edge has some control over the
establishment and type of connectivity. In the L1VPN context, three
different service models have been defined, classified by the
semantics of information exchanged over the customer interface: the
management-based model, the signaling-based (a.k.a. basic) service
model, and the signaling and routing (a.k.a. enhanced) service model.
In the management-based model, all edge-to-edge connections are
set up using configuration and management tools. This is not a
dynamic control-plane solution and need not concern us here.
In the signaling-based (basic) service model [RFC5251], the CE-PE
interface allows only for signaling message exchange, and the
provider network does not export any routing information about the
server network. VPN membership is known a priori (presumably through
configuration) or is discovered using a routing protocol [RFC5195]
[RFC5252] [RFC5523], as is the relationship between CE nodes and
ports on the PE. This service model is much in line with GMPLS UNI
as defined in [RFC4208].
In the signaling and routing (enhanced) service model, there is an
additional limited exchange of routing information over the CE-PE
interface between the provider network and the customer network. The
enhanced model considers four different types of service models,
namely the overlay extension, virtual node, virtual link, and per-VPN
service models. All of these represent particular cases of the TE
information aggregation and representation.
A.6. Policy and Link Advertisement
Inter-domain networking relies on policy and management input to
coordinate the allocation of resources under different administrative
control. [RFC5623] introduces a functional component called the VNTM
for this purpose.
An important companion to this function is determining how
connectivity across the abstraction layer network is made available
as a TE link in the client network. Obviously, if the connectivity
is established using management intervention, the consequent client
network TE link can also be configured manually. However, if
connectivity from client edge to client edge is achieved using
dynamic signaling, then there is need for the end points to exchange
the link properties that they should advertise within the client
network, and in the case of support for more than one client network,
it will be necessary to indicate which client network or networks can
use the link. This capability is provided in [RFC6107].
Appendix B. Additional Features
This appendix describes additional features that may be desirable and
that can be achieved within this architecture. It is non-normative.
B.1. Macro Shared Risk Link Groups
Network links often share fate with one or more other links. That
is, a scenario that may cause a link to fail could cause one or more
other links to fail. This may occur, for example, if the links are
supported by the same fiber bundle, or if some links are routed down
the same duct or in a common piece of infrastructure such as a
bridge. A common way to identify the links that may share fate is to
label them as belonging to a Shared Risk Link Group (SRLG) [RFC4202].
TE links created from LSPs in lower layers may also share fate, and
it can be hard for a client network to know about this problem
because it does not know the topology of the server network or the
path of the server network LSPs that are used to create the links in
the client network.
For example, looking at the example used in Section 4.2.3 and
considering the two abstract links S1-S3 and S1-S9, there is no way
for the client network to know whether links C2-C0 and C2-C3 share
fate. Clearly, if the client layer uses these links to provide a
link-diverse end-to-end protection scheme, it needs to know that the
links actually share a piece of network infrastructure (the server
network link S1-S2).
Per [RFC4202], an SRLG represents a shared physical network resource
upon which the normal functioning of a link depends. Multiple SRLGs
can be identified and advertised for every TE link in a network.
However, this can produce a scalability problem in a multi-layer
network that equates to advertising in the client network the server
network route of each TE link.
Macro SRLGs (MSRLGs) address this scaling problem and are a form of
abstraction performed at the same time that the abstract links are
derived. In this way, links that actually share resources in the
server network are advertised as having the same MSRLG, rather than
advertising each SRLG for each resource on each path in the server
network. This saving is possible because the abstract links are
formulated on behalf of the server network by a central management
agency that is aware of all of the link abstractions being offered.
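The MSRLG derivation described above can be sketched as below: the central agency, which knows each abstract link's server network path, assigns one shared MSRLG value per pair of fate-sharing links instead of advertising every underlying SRLG. The names and the simple pairwise grouping rule are illustrative only.

```python
from itertools import combinations

def assign_msrlgs(abstract_links):
    """abstract_links: link name -> set of server network resources.

    Links whose server paths share any resource are given one common
    MSRLG value, avoiding per-resource SRLG advertisement.
    """
    msrlg, next_id = {}, 0
    for a, b in combinations(sorted(abstract_links), 2):
        if abstract_links[a] & abstract_links[b]:      # shared fate
            gid = msrlg.get(a) or msrlg.get(b)
            if gid is None:
                gid, next_id = "MSRLG-%d" % next_id, next_id + 1
            msrlg[a] = msrlg[b] = gid
    return msrlg
```

With the Section 4.2.3 example, abstract links S1-S3 and S1-S9 both traverse server link S1-S2, so they receive the same MSRLG while an unrelated link receives none.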
It may be noted that a less optimal alternative path for the abstract
link S1-S9 exists in the server network (S1-S4-S7-S8-S9). It would
be possible for the client network request for C2-C0 connectivity to
also ask that the path be maximally disjoint from path C2-C3.
Although nothing can be done about the shared link C2-S1, the
abstraction layer could make a request to use link S1-S9 in a way
that is diverse from the use of link S1-S3, and this request could be
honored if the server network policy allows it.
Note that SRLGs and MSRLGs may be very hard to describe in the case
of multiple server networks because the abstraction points will not
know whether the resources in the various server layers share fate.
B.2. Mutual Exclusivity
As noted in the discussion of Figure 13, it is possible that some
abstraction layer links cannot be used at the same time. This arises
when the potentiality of the links is indicated by the server
network, but the use of the links would actually compete for server
network resources. Referring to Figure 13, this situation would
arise when both link S1-S3 and link S7-S9 are used to carry LSPs: in
that case, link S1-S9 could no longer be used.
Such a situation need not be an issue when client-edge-to-client-edge
LSPs are set up one by one, because the use of one abstraction layer
link and the corresponding use of server network resources will cause
the server network to withdraw the availability of the other
abstraction layer links, and these will become unavailable for
further abstraction layer path computations.
Furthermore, in deployments where abstraction layer links are only
presented as available after server network LSPs have been
established to support them, the problem is unlikely to exist.
However, when the server network is constrained but chooses to
advertise the potential of multiple abstraction layer links even
though they compete for resources, and when multiple client-edge-to-
client-edge LSPs are computed simultaneously (perhaps to provide
protection services), there may be contention for server network
resources. In the case where protected abstraction layer LSPs are
being established, this situation would be avoided through the use of
SRLGs and/or MSRLGs, since the two abstraction layer links that
compete for server network resources must also fate-share across
those resources. But in the case where the multiple client-edge-to-
client-edge LSPs do not care about fate sharing, it may be necessary
to flag the mutually exclusive links in the abstraction layer TED so
that path computation can avoid accidentally attempting to utilize
two of a set of such links at the same time.
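The TED flag suggested above can be exercised with a small compatibility check. The data layout is hypothetical, with link names taken from the Figure 13 example (using both S1-S3 and S7-S9 exhausts the resources needed by S1-S9):

```python
# Mutual-exclusivity entries for the abstraction layer TED: if every
# link in the first set is in use, no link in the second set may be.
EXCLUSIVE = [({"S1-S3", "S7-S9"}, {"S1-S9"})]

def paths_compatible(links_in_use, exclusions=EXCLUSIVE):
    """Check a concurrent path computation result against the TED flags."""
    for competing, blocked in exclusions:
        if competing <= links_in_use and blocked & links_in_use:
            return False
    return True
```

A path computation that selects link sets for simultaneous (e.g., protected) LSPs would reject any combination failing this check.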
Acknowledgements
Thanks to Igor Bryskin for useful discussions in the early stages of
this work and to Gert Grammel for discussions on the extent of
aggregation in abstract nodes and links.
Thanks to Deborah Brungard, Dieter Beller, Dhruv Dhody, Vallinayakam
Somasundaram, Hannes Gredler, Stewart Bryant, Brian Carpenter, and
Hilarie Orman for review and input.
Particular thanks to Vishnu Pavan Beeram for detailed discussions and
white-board scribbling that made many of the ideas in this document
come to life.
Text in Section 4.2.3 is freely adapted from the work of Igor
Bryskin, Wes Doonan, Vishnu Pavan Beeram, John Drake, Gert Grammel,
Manuel Paul, Ruediger Kunze, Friedrich Armbruster, Cyril Margaria,
Oscar Gonzalez de Dios, and Daniele Ceccarelli in [GMPLS-ENNI], for
which the authors of this document express their thanks.