
RFC 8453

Framework for Abstraction and Control of TE Networks (ACTN)

Pages: 42
Part 2 of 2 – Pages 28 to 42


6. Access Points and Virtual Network Access Points

   In order to map identification of connections between the customer's
   sites and the TE networks and to scope the connectivity requested in
   the VNS, the CNC and the MDSC refer to the connections using the
   Access Point (AP) construct as shown in Figure 11.

                             -------------
                            (             )
                           -               -
             +---+ X      (                 )      Z +---+
             |CE1|---+----(                 )----+---|CE2|
             +---+   |     (               )     |   +---+
                    AP1     -             -     AP2
                             (           )
                              -----------

                    Figure 11: Customer View of APs

   Let's take as an example a scenario shown in Figure 11.  CE1 is
   connected to the network via a 10 Gbps link and CE2 via a 40 Gbps
   link.  Before the creation of any VN between AP1 and AP2, the
   customer view can be summarized as shown in Figure 12.

                  +----------+------------------------+
                  | Endpoint | Access Link Bandwidth  |
            +-----+----------+----------+-------------+
            |AP id| CE,port  | MaxResBw | AvailableBw |
            +-----+----------+----------+-------------+
            | AP1 |CE1,portX | 10 Gbps  |   10 Gbps   |
            +-----+----------+----------+-------------+
            | AP2 |CE2,portZ | 40 Gbps  |   40 Gbps   |
            +-----+----------+----------+-------------+

                     Figure 12: AP - Customer View
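The customer-view table in Figure 12 can be modeled as a small data structure. The following Python sketch is purely illustrative (the class and field names are not defined by this document); it records, per AP, the attached port and the maximum reservable and currently available bandwidth on the access link.

```python
from dataclasses import dataclass

@dataclass
class AccessPoint:
    ap_id: str
    port: str             # customer view: "CE,port"; operator view: "PE,port"
    max_res_bw: float     # MaxResBw, in Gbps
    available_bw: float   # AvailableBw, in Gbps

# Customer view before any VN is created (Figure 12): the full access
# link bandwidth is still available at both APs.
customer_view = {
    "AP1": AccessPoint("AP1", "CE1,portX", 10.0, 10.0),
    "AP2": AccessPoint("AP2", "CE2,portZ", 40.0, 40.0),
}
```

The operator view of Figure 14 would use the same structure with PE-side ports ("PE1,portW", "PE2,portY") instead of CE-side ports.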
   On the other hand, what the operator sees is shown in Figure 13

                          -------            -------
                         (       )          (       )
                        -         -        -         -
                   W  (+---+       )      (       +---+)  Y
                -+---( |PE1| Dom.X  )----(  Dom.Y |PE2| )---+-
                 |    (+---+       )      (       +---+)    |
                 AP1    -         -        -         -     AP2
                         (       )          (       )
                          -------            -------

                    Figure 13: Operator View of the AP

   which results in a summarization as shown in Figure 14.

                        +----------+------------------------+
                        | Endpoint | Access Link Bandwidth  |
                  +-----+----------+----------+-------------+
                  |AP id| PE,port  | MaxResBw | AvailableBw |
                  +-----+----------+----------+-------------+
                  | AP1 |PE1,portW | 10 Gbps  |   10 Gbps   |
                  +-----+----------+----------+-------------+
                  | AP2 |PE2,portY | 40 Gbps  |   40 Gbps   |
                  +-----+----------+----------+-------------+

                       Figure 14: AP - Operator View

   A Virtual Network Access Point (VNAP) needs to be defined as binding
   between an AP and a VN.  It is used to allow different VNs to start
   from the same AP.  It also allows for traffic engineering on the
   access and/or inter-domain links (e.g., keeping track of bandwidth
   allocation).  A different VNAP is created on an AP for each VN.

   In this simple scenario, we suppose we want to create two virtual
   networks: the first with VN identifier 9 between AP1 and AP2 with
   bandwidth of 1 Gbps and the second with VN identifier 5, again
   between AP1 and AP2 and with bandwidth 2 Gbps.

   The operator view would evolve as shown in Figure 15.
                           +----------+------------------------+
                           | Endpoint |  Access Link/VNAP Bw   |
                 +---------+----------+----------+-------------+
                 |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
                 +---------+----------+----------+-------------+
                 |AP1      |PE1,portW | 10 Gbps  |   7 Gbps    |
                 | -VNAP1.9|          |  1 Gbps  |     N.A.    |
                 | -VNAP1.5|          |  2 Gbps  |     N.A.    |
                 +---------+----------+----------+-------------+
                 |AP2      |PE2,portY | 40 Gbps  |   37 Gbps   |
                 | -VNAP2.9|          |  1 Gbps  |     N.A.    |
                 | -VNAP2.5|          |  2 Gbps  |     N.A.    |
                 +---------+----------+----------+-------------+

         Figure 15: AP and VNAP - Operator View after VNS Creation
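The bookkeeping behind Figure 15 can be sketched as follows. This is an illustration of the accounting described above, not a normative algorithm: creating a VNAP on an AP for each VN reserves bandwidth on the access link, reducing the AP's AvailableBw.

```python
class AP:
    def __init__(self, ap_id, max_res_bw):
        self.ap_id = ap_id
        self.max_res_bw = max_res_bw      # MaxResBw, Gbps
        self.available_bw = max_res_bw    # AvailableBw, Gbps
        self.vnaps = {}                   # VN id -> reserved bandwidth (the VNAP)

    def create_vnap(self, vn_id, bw):
        """Create the VNAP for vn_id on this AP, reserving bw Gbps."""
        if vn_id in self.vnaps:
            raise ValueError("a VNAP already exists on this AP for that VN")
        if bw > self.available_bw:
            raise ValueError("insufficient access-link bandwidth")
        self.vnaps[vn_id] = bw            # e.g., VNAP1.9 on AP1 for VN 9
        self.available_bw -= bw

ap1, ap2 = AP("AP1", 10), AP("AP2", 40)
for ap in (ap1, ap2):
    ap.create_vnap(vn_id=9, bw=1)   # VN 9: 1 Gbps between AP1 and AP2
    ap.create_vnap(vn_id=5, bw=2)   # VN 5: 2 Gbps between AP1 and AP2
```

After both VNS creations, ap1.available_bw is 7 Gbps and ap2.available_bw is 37 Gbps, matching Figure 15.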

6.1. Dual-Homing Scenario

   Often there is a dual-homing relationship between a CE and a pair of
   PEs.  This case needs to be supported by the definition of VN, APs,
   and VNAPs.  Suppose CE1 is connected to two different PEs in the
   operator domain via AP1 and AP2 and that the customer needs 5 Gbps
   of bandwidth between CE1 and CE2.  This is shown in Figure 16.

                               ____________
                          AP1 (            ) AP3
                       -------(PE1)      (PE3)-------
                     W /      (            )      \ X
                 +---+/       (            )       \+---+
                 |CE1|        (            )        |CE2|
                 +---+\       (            )       /+---+
                     Y \      (            )      / Z
                       -------(PE2)      (PE4)-------
                          AP2 (____________)

                      Figure 16: Dual-Homing Scenario

   In this case, the customer will request a VN between AP1, AP2, and
   AP3 specifying a dual-homing relationship between AP1 and AP2.  As a
   consequence, no traffic will flow between AP1 and AP2.  The dual-
   homing relationship would then be mapped against the VNAPs (since
   other independent VNs might have AP1 and AP2 as endpoints).  The
   customer view would then be as shown in Figure 17.
                      +----------+------------------------+
                      | Endpoint |  Access Link/VNAP Bw   |
            +---------+----------+----------+-------------+-----------+
            |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
            +---------+----------+----------+-------------+-----------+
            |AP1      |CE1,portW | 10 Gbps  |   5 Gbps    |           |
            | -VNAP1.9|          |  5 Gbps  |     N.A.    |  VNAP2.9  |
            +---------+----------+----------+-------------+-----------+
            |AP2      |CE1,portY | 40 Gbps  |   35 Gbps   |           |
            | -VNAP2.9|          |  5 Gbps  |     N.A.    |  VNAP1.9  |
            +---------+----------+----------+-------------+-----------+
            |AP3      |CE2,portX | 50 Gbps  |   45 Gbps   |           |
            | -VNAP3.9|          |  5 Gbps  |     N.A.    |   NONE    |
            +---------+----------+----------+-------------+-----------+

         Figure 17: Dual-Homing -- Customer View after VN Creation
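The dual-homing relationship of Figure 17 is kept at VNAP (not AP) granularity. A minimal sketch, with invented identifiers in the "VNAPn.vn" style of the figure: VNAP1.9 and VNAP2.9 protect each other, VNAP3.9 has no dual-homed peer, and no traffic flows between the two legs of a dual-homed pair.

```python
# Dual-homing mapping from Figure 17: each VNAP points to its
# dual-homed peer, or None (shown as NONE in the figure).
dual_homing = {
    "VNAP1.9": "VNAP2.9",
    "VNAP2.9": "VNAP1.9",
    "VNAP3.9": None,
}

def dual_homed_peer(vnap_id):
    return dual_homing.get(vnap_id)

def may_carry_traffic(src, dst):
    # No traffic is carried between the two legs of a dual-homed pair.
    return dual_homing.get(src) != dst
```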

7. Advanced ACTN Application: Multi-Destination Service

   A more-advanced application of ACTN is the case of data center (DC)
   selection, where the customer requires the DC selection to be based
   on the network status; this is referred to as "Multi-Destination
   Service" in [ACTN-REQ].  In terms of ACTN, a CNC could request a VNS
   between a set of source APs and destination APs and leave it up to
   the network (MDSC) to decide which source and destination APs should
   be used to set up the VNS.  The candidate list of source and
   destination APs is decided by a CNC (or an entity outside of ACTN)
   based on certain factors that are outside the scope of ACTN.

   Based on the AP selection as determined and returned by the network
   (MDSC), the CNC (or an entity outside of ACTN) should further take
   care of any subsequent actions such as orchestration or service
   setup requirements.  These further actions are outside the scope of
   ACTN.

   Consider a case as shown in Figure 18, where three DCs are
   available, but the customer requires the DC selection to be based on
   the network status and the connectivity service setup between AP1
   (CE1) and one of the destination APs (AP2 (DC-A), AP3 (DC-B), and
   AP4 (DC-C)).  The MDSC (in coordination with PNCs) would select the
   best destination AP based on the constraints, optimization criteria,
   policies, etc., and set up the connectivity service (virtual
   network).
                          -------            -------
                         (       )          (       )
                        -         -        -         -
          +---+        (           )      (           )        +----+
          |CE1|---+---(  Domain X   )----(  Domain Y   )---+---|DC-A|
          +---+   |    (           )      (           )    |   +----+
                   AP1  -         -        -         -    AP2
                         (       )          (       )
                          ---+---            ---+---
                             |                  |
                         AP3-+              AP4-+
                             |                  |
                          +----+              +----+
                          |DC-B|              |DC-C|
                          +----+              +----+

           Figure 18: Endpoint Selection Based on Network Status
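The selection step in Figure 18 can be sketched as below. This is illustrative only: a real MDSC would weigh constraints, optimization criteria, and policies; here a single invented "path cost" metric, fed by the PNCs' view of network status, stands in for all of them.

```python
def select_destination(candidates, path_cost):
    """Return the candidate destination AP with the lowest path cost
    from the source, or None if no candidate is reachable."""
    reachable = [ap for ap in candidates if path_cost.get(ap) is not None]
    return min(reachable, key=path_cost.__getitem__, default=None)

# CE1 at AP1 may reach DC-A (AP2), DC-B (AP3), or DC-C (AP4).
# Costs are hypothetical; None models an unreachable destination.
cost_from_ap1 = {"AP2": 30, "AP3": 10, "AP4": None}
best = select_destination(["AP2", "AP3", "AP4"], cost_from_ap1)
```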

7.1. Preplanned Endpoint Migration

   Furthermore, in the case of DC selection, a customer could request a
   backup DC to be selected, such that in case of failure, another DC
   site could provide hot stand-by protection.  As shown in Figure 19,
   DC-C is selected as a backup for DC-A.  Thus, the VN should be set
   up by the MDSC to include primary connectivity between AP1 (CE1) and
   AP2 (DC-A) as well as protection connectivity between AP1 (CE1) and
   AP4 (DC-C).

                          -------            -------
                         (       )          (       )
                        -         -        -         -
           +---+        (           )      (           )        +----+
           |CE1|---+---(  Domain X   )----(  Domain Y   )---+---|DC-A|
           +---+   |    (           )      (           )    |   +----+
                    AP1  -         -        -         -    AP2
                          (       )          (       )
                           ---+---            ---+---
                              |                  |
                          AP3-+              AP4-+    HOT STANDBY
                              |                  |         |
                           +----+              +----+      |
                           |DC-D|              |DC-C|<-----+
                           +----+              +----+

                  Figure 19: Preplanned Endpoint Migration
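The VN the MDSC would build for this scenario pairs a primary connection with a preplanned hot-standby connection. The structure below is a hypothetical illustration, not a protocol object defined by this framework.

```python
# Primary and protection connectivity for Figure 19: CE1 (AP1) to
# DC-A (AP2), with DC-C (AP4) preplanned as the hot-standby backup.
vn = {
    "primary":    ("AP1", "AP2"),
    "protection": ("AP1", "AP4"),
}

def active_endpoint(vn, primary_up):
    """Return the destination AP in use: the primary while it is up,
    otherwise the preplanned protection endpoint."""
    return vn["primary"][1] if primary_up else vn["protection"][1]
```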

7.2. On-the-Fly Endpoint Migration

   Compared to preplanned endpoint migration, on-the-fly endpoint
   selection is dynamic in that the migration is not preplanned but
   decided based on network conditions.  Under this scenario, the MDSC
   would monitor the network (based on the VN SLA) and notify the CNC
   in the case where some other destination AP would be a better choice
   based on the network parameters.  The CNC should instruct the MDSC
   when it is suitable to update the VN with the new AP, if required.
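A minimal sketch of that monitoring step, with an invented metric and threshold (the framework does not define either): the MDSC compares measured delay against the VN SLA and, when the SLA is violated, suggests the destination AP that would now be the better choice; the decision to migrate remains with the CNC.

```python
def check_migration(current_ap, measured_delay_ms, sla_max_delay_ms):
    """Return the AP the MDSC would suggest to the CNC as a better
    destination, or None if the current destination still meets the
    SLA (or is already the best choice)."""
    if measured_delay_ms[current_ap] <= sla_max_delay_ms:
        return None                      # SLA met; no notification
    better = min(measured_delay_ms, key=measured_delay_ms.get)
    return better if better != current_ap else None

# Hypothetical measurements: the current destination (AP2) violates a
# 30 ms SLA, so AP3 is suggested.
delays = {"AP2": 45, "AP3": 12, "AP4": 20}
suggestion = check_migration("AP2", delays, sla_max_delay_ms=30)
```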

8. Manageability Considerations

   The objective of ACTN is to manage traffic engineered resources and
   provide a set of mechanisms to allow customers to request virtual
   connectivity across server-network resources.  ACTN supports
   multiple customers, each with its own view of and control of a
   virtual network built on the server network; the network operator
   will need to partition (or "slice") their network resources and
   manage the resources accordingly.

   The ACTN platform will, itself, need to support the request,
   response, and reservations of client- and network-layer
   connectivity.  It will also need to provide performance monitoring
   and control of TE resources.  The management requirements may be
   categorized as follows:

   o  Management of external ACTN protocols

   o  Management of internal ACTN interfaces/protocols

   o  Management and monitoring of ACTN components

   o  Configuration of policy to be applied across the ACTN system

   The ACTN framework and interfaces are defined to enable traffic
   engineering for virtual network services and connectivity services.
   Network operators may have other Operations, Administration, and
   Maintenance (OAM) tasks for service fulfillment, optimization, and
   assurance beyond traffic engineering.  The realization of OAM beyond
   abstraction and control of TE networks is not discussed in this
   document.

8.1. Policy

   Policy is an important aspect of ACTN control and management.
   Policies are used via the components and interfaces, during
   deployment of the service, to ensure that the service is compliant
   with agreed-upon policy factors and variations (often described in
   SLAs); these include, but are not limited to, connectivity,
   bandwidth, geographical transit, technology selection, security,
   resilience, and economic cost.

   Depending on the deployment of the ACTN architecture, some policies
   may have local or global significance.  That is, certain policies
   may be ACTN component specific in scope, while others may have
   broader scope and interact with multiple ACTN components.  Two
   examples are provided below:

   o  A local policy might limit the number, type, size, and scheduling
      of virtual network services a customer may request via its CNC.
      This type of policy would be implemented locally on the MDSC.

   o  A global policy might constrain certain customer types (or
      specific customer applications) only to use certain MDSCs and be
      restricted to physical network types managed by the PNCs.  A
      global policy agent would govern these types of policies.

   The objective of this section is to discuss the applicability of
   ACTN policy: requirements, components, interfaces, and examples.
   This section provides an analysis and does not mandate a specific
   method for enforcing policy or the type of policy agent that would
   be responsible for propagating policies across the ACTN components.
   It does highlight examples of how policy may be applied in the
   context of ACTN, but it is expected that further discussion in an
   applicability or solution-specific document will be required.
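The first example above (a local MDSC policy limiting what a CNC may request) can be sketched as a simple admission check. The limit values and field names here are invented for illustration; ACTN does not define them.

```python
# Hypothetical local policy installed on the MDSC: at most 4 virtual
# network services per CNC, none larger than 10 Gbps.
LOCAL_POLICY = {"max_vns_per_cnc": 4, "max_bw_gbps": 10}

def admit_vns_request(existing_vns_count, requested_bw_gbps,
                      policy=LOCAL_POLICY):
    """Return (admitted, reason) for a CNC's VNS request."""
    if existing_vns_count >= policy["max_vns_per_cnc"]:
        return (False, "too many virtual network services for this CNC")
    if requested_bw_gbps > policy["max_bw_gbps"]:
        return (False, "requested bandwidth exceeds policy limit")
    return (True, "admitted")
```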

8.2. Policy Applied to the Customer Network Controller

   A virtual network service for a customer application will be
   requested by the CNC.  The request will reflect the application
   requirements and specific service needs, including bandwidth,
   traffic type, and survivability.  Furthermore, application access
   and the type of virtual network service requested by the CNC will
   need to adhere to specific access control policies.

8.3. Policy Applied to the Multi-Domain Service Coordinator

   A key objective of the MDSC is to support the customer's expression
   of the application connectivity request via its CNC as a set of
   desired business needs; therefore, policy will play an important
   role.

   Once authorized, the virtual network service will be instantiated
   via the CNC-MDSC Interface (CMI); it will reflect the customer
   application and connectivity requirements and specific
   service-transport needs.  The CNC and the MDSC components will have
   agreed-upon connectivity endpoints; use of these endpoints should be
   defined as a policy expression when setting up or augmenting virtual
   network services.  Ensuring that permissible endpoints are defined
   for CNCs and applications will require the MDSC to maintain a
   registry of permissible connection points for CNCs and application
   types.

   Conflicts may occur when virtual network service optimization
   criteria are in competition.  For example, to meet objectives for
   service reachability, a request may require an interconnection point
   between multiple physical networks; however, this might break a
   confidentiality policy requirement of a specific type of end-to-end
   service.  Thus, an MDSC may have to balance a number of the
   constraints on a service request and between different requested
   services.  It may also have to balance requested services with
   operational norms for the underlying physical networks.  This
   balancing may be resolved using configured policy and using hard and
   soft policy constraints.
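The registry of permissible connection points described above can be sketched as a lookup the MDSC performs before instantiating a VNS over the CMI. The CNC identifiers and AP assignments below are invented for illustration.

```python
# Hypothetical MDSC registry: which APs each CNC may use as VNS
# endpoints.
ENDPOINT_REGISTRY = {
    "cnc-blue": {"AP1", "AP2"},
    "cnc-red":  {"AP3"},
}

def endpoints_permitted(cnc_id, requested_aps):
    """True if every requested endpoint is registered for this CNC;
    unknown CNCs are permitted nothing."""
    allowed = ENDPOINT_REGISTRY.get(cnc_id, set())
    return set(requested_aps) <= allowed
```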

8.4. Policy Applied to the Provisioning Network Controller

   The PNC is responsible for configuring the network elements,
   monitoring physical network resources, and exposing connectivity
   (direct or abstracted) to the MDSC.  Therefore, it is expected that
   policy will dictate what connectivity information will be exchanged
   on the MPI.

   Policy interactions may arise when a PNC determines that it cannot
   compute a requested path from the MDSC or notices that (per a
   locally configured policy) the network is low on resources (for
   example, the capacity on key links became exhausted).  In either
   case, the PNC will be required to notify the MDSC, which may (again
   per policy) act to construct a virtual network service across
   another physical network topology.
   Furthermore, additional forms of policy-based resource management
   will be required to provide VNS performance, security, and resilience
   guarantees.  This will likely be implemented via a local policy agent
   and additional protocol methods.
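The locally configured low-resource policy mentioned above could be as simple as a utilization threshold. The following sketch is an assumption about one possible realization; the threshold value and link names are invented.

```python
def links_to_report(link_utilization, threshold=0.9):
    """Per a locally configured PNC policy: return the links whose
    utilization meets or exceeds the reporting threshold, so the PNC
    can notify the MDSC that capacity is nearly exhausted."""
    return sorted(link for link, util in link_utilization.items()
                  if util >= threshold)

# Hypothetical utilization snapshot on key links.
exhausted = links_to_report({"X-Y": 0.95, "PE1-P3": 0.40, "P3-P7": 0.92})
```

On receiving such a notification, the MDSC may (again per its own policy) move the affected virtual network services to another physical topology.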

9. Security Considerations

   The ACTN framework described in this document defines key components
   and interfaces for managed TE networks.  Securing the request and
   control of resources, confidentiality of the information, and
   availability of function should all be critical security
   considerations when deploying and operating ACTN platforms.

   Several distributed ACTN functional components are required, and
   implementations should consider encrypting data that flows between
   components, especially when they are implemented at remote nodes,
   regardless of whether these data flows are on external or internal
   network interfaces.

   The ACTN security discussion is further split into two specific
   categories described in the following subsections:

   o  Interface between the Customer Network Controller and Multi-
      Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

   o  Interface between the Multi-Domain Service Coordinator and
      Provisioning Network Controller (PNC), MDSC-PNC Interface (MPI)

   From a security and reliability perspective, ACTN may encounter many
   risks such as malicious attack and rogue elements attempting to
   connect to various ACTN components.  Furthermore, some ACTN
   components represent a single point of failure and threat vector and
   must also manage policy conflicts and eavesdropping of communication
   between different ACTN components.

   The conclusion is that all protocols used to realize the ACTN
   framework should have rich security features, and customer,
   application, and network data should be stored in encrypted data
   stores.  Additional security risks may still exist.  Therefore,
   discussion and applicability of specific security functions and
   protocols will be better described in documents that are use case
   and environment specific.

9.1. CNC-MDSC Interface (CMI)

   Data stored by the MDSC will reveal details of the virtual network
   services and which CNC and customer/application is consuming the
   resource.  Therefore, the data stored must be considered a candidate
   for encryption.

   CNC access rights to an MDSC must be managed.  The MDSC must
   allocate resources properly, and methods to prevent policy
   conflicts, resource waste, and denial-of-service attacks on the MDSC
   by rogue CNCs should also be considered.

   The CMI will likely be an external protocol interface.  Suitable
   authentication and authorization of each CNC connecting to the MDSC
   will be required, especially as these are likely to be implemented
   by different organizations and on separate functional nodes.  Use of
   AAA-based mechanisms would also provide role-based authorization
   methods so that only authorized CNCs may access the different
   functions of the MDSC.
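The role-based authorization mentioned above can be sketched as a simple check on the CMI. The roles, CNC identifiers, and MDSC function names here are invented for illustration; a deployment would obtain the role from its AAA infrastructure after authenticating the CNC.

```python
# Hypothetical role definitions: which MDSC functions each role grants.
ROLES = {
    "vn-requester": {"create_vns", "query_vns"},
    "read-only":    {"query_vns"},
}

# Role assigned to each authenticated CNC (e.g., by an AAA server).
CNC_ROLE = {"cnc-blue": "vn-requester", "cnc-red": "read-only"}

def authorize(cnc_id, function):
    """True if the CNC's role grants access to the MDSC function."""
    role = CNC_ROLE.get(cnc_id)
    return function in ROLES.get(role, set())
```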

9.2. MDSC-PNC Interface (MPI)

   Where the MDSC must interact with multiple (distributed) PNCs, a
   PKI-based mechanism is suggested, such as building a TLS or HTTPS
   connection between the MDSC and PNCs, to ensure trust between the
   physical network layer control components and the MDSC.  Trust
   anchors for the PKI can be configured to use a smaller (and
   potentially non-intersecting) set of trusted Certificate Authorities
   (CAs) than in the Web PKI.

   Which MDSC the PNC exports topology information to, and the level of
   detail (full or abstracted), should also be authenticated, and
   specific access restrictions and topology views should be
   configurable and/or policy based.
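One possible realization of this suggestion (an assumption, not mandated by this framework) is shown below using Python's standard ssl module: the MDSC builds a TLS client context that requires a PNC certificate and trusts only the operator's private CA bundle rather than the system's Web PKI store. The bundle path is hypothetical.

```python
import ssl

def mpi_client_context(private_ca_bundle=None):
    """Build a TLS client context for MPI connections to PNCs.

    private_ca_bundle: path to the operator's CA file (e.g., a
    hypothetical /etc/actn/mdsc-ca.pem); None skips loading so the
    context can be built before the bundle is provisioned."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED   # each PNC must present a cert
    ctx.check_hostname = True             # bind the cert to the PNC's name
    if private_ca_bundle:
        # Trust only the operator's CAs, not the Web PKI.
        ctx.load_verify_locations(cafile=private_ca_bundle)
    return ctx
```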

10. IANA Considerations

This document has no IANA actions.

11. Informative References

   [ACTN-REQ] Lee, Y., Ceccarelli, D., Miyasaka, T., Shin, J., and
              K. Lee, "Requirements for Abstraction and Control of TE
              Networks", Work in Progress,
              draft-ietf-teas-actn-requirements-09, March 2018.

   [ACTN-YANG]
              Lee, Y., Dhody, D., Ceccarelli, D., Bryskin, I., Yoon,
              B., Wu, Q., and P. Park, "A Yang Data Model for ACTN VN
              Operation", Work in Progress,
              draft-ietf-teas-actn-vn-yang-01, June 2018.

   [ONF-ARCH] Open Networking Foundation, "SDN Architecture",
              Issue 1.1, ONF TR-521, June 2016.

   [RFC2702]  Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and
              J. McManus, "Requirements for Traffic Engineering Over
              MPLS", RFC 2702, DOI 10.17487/RFC2702, September 1999.

   [RFC3945]  Mannie, E., Ed., "Generalized Multi-Protocol Label
              Switching (GMPLS) Architecture", RFC 3945,
              DOI 10.17487/RFC3945, October 2004.

   [RFC4655]  Farrel, A., Vasseur, J., and J. Ash, "A Path Computation
              Element (PCE)-Based Architecture", RFC 4655,
              DOI 10.17487/RFC4655, August 2006.

   [RFC5654]  Niven-Jenkins, B., Ed., Brungard, D., Ed., Betts, M.,
              Ed., Sprecher, N., and S. Ueno, "Requirements of an MPLS
              Transport Profile", RFC 5654, DOI 10.17487/RFC5654,
              September 2009.

   [RFC7149]  Boucadair, M. and C. Jacquenet, "Software-Defined
              Networking: A Perspective from within a Service Provider
              Environment", RFC 7149, DOI 10.17487/RFC7149, March 2014.
   [RFC7926]  Farrel, A., Ed., Drake, J., Bitar, N., Swallow, G.,
              Ceccarelli, D., and X. Zhang, "Problem Statement and
              Architecture for Information Exchange between
              Interconnected Traffic-Engineered Networks", BCP 206,
              RFC 7926, DOI 10.17487/RFC7926, July 2016.

   [RFC8283]  Farrel, A., Ed., Zhao, Q., Ed., Li, Z., and C. Zhou, "An
              Architecture for Use of PCE and the PCE Communication
              Protocol (PCEP) in a Network with Central Control",
              RFC 8283, DOI 10.17487/RFC8283, December 2017.

   [RFC8309]  Wu, Q., Liu, W., and A. Farrel, "Service Models
              Explained", RFC 8309, DOI 10.17487/RFC8309, January 2018.

   [TE-TOPO]  Liu, X., Bryskin, I., Beeram, V., Saad, T., Shah, H., and
              O. Dios, "YANG Data Model for Traffic Engineering (TE)
              Topologies", Work in Progress,
              draft-ietf-teas-yang-te-topo-18, June 2018.

Appendix A. Example of MDSC and PNC Functions Integrated in a Service/Network Orchestrator

   This section provides an example of a possible deployment scenario,
   in which a Service/Network Orchestrator can include the PNC
   functionalities for domain 2 and the MDSC functionalities.

             Customer
               +-------------------------------+
               |            +-----+            |
               |            | CNC |            |
               |            +-----+            |
               +---------------|---------------+
                               |
                               | CMI     Service/Network
                               |         Orchestrator
               +---------------|-----------------+
               |      +------+ | MPI   +------+  |
               |      | MDSC |---------| PNC2 |  |
               |      +------+         +------+  |
               +----------|---------------|------+
                          | MPI           |
             Domain       |               |
             Controller   |               |
               +----------|---+           |
               |      +-----+ |           | SBI
               |      |PNC1 | |           |
               |      +-----+ |           |
               +----------|---+           |
                          v SBI           v
                       -------         -------
                      (       )       (       )
                     -         -     -         -
                    (           )   (           )
                    ( Domain 1   )-(  Domain 2  )
                    (           )   (           )
                     -         -     -         -
                      (       )       (       )
                       -------         -------


Contributors

   Adrian Farrel
   Old Dog Consulting
   Email:

   Italo Busi
   Huawei
   Email:

   Khuzema Pithewan
   Peloton Technology
   Email:

   Michael Scharf
   Nokia
   Email:

   Luyuan Fang
   eBay
   Email:

   Diego Lopez
   Telefonica I+D
   Don Ramon de la Cruz, 82
   28006 Madrid
   Spain
   Email:

   Sergio Belotti
   Nokia
   Via Trento, 30
   Vimercate
   Italy
   Email:

   Daniel King
   Lancaster University
   Email:

   Dhruv Dhody
   Huawei Technologies
   Divyashree Techno Park, Whitefield
   Bangalore, Karnataka 560066
   India
   Email:
   Gert Grammel
   Juniper Networks

Authors' Addresses

   Daniele Ceccarelli (editor)
   Ericsson
   Torshamnsgatan, 48
   Stockholm
   Sweden
   Email:

   Young Lee (editor)
   Huawei Technologies
   5340 Legacy Drive
   Plano, TX 75023
   United States of America
   Email: