
RFC 7275

Inter-Chassis Communication Protocol for Layer 2 Virtual Private Network (L2VPN) Provider Edge (PE) Redundancy

Pages: 83
Proposed Standard
Part 1 of 4 – Pages 1 to 11

Top   ToC   RFC7275 - Page 1
Internet Engineering Task Force (IETF)                        L. Martini
Request for Comments: 7275                                      S. Salam
Category: Standards Track                                     A. Sajassi
ISSN: 2070-1721                                                    Cisco
                                                                M. Bocci
                                                          Alcatel-Lucent
                                                           S. Matsushima
                                                        Softbank Telecom
                                                               T. Nadeau
                                                                 Brocade
                                                               June 2014


                Inter-Chassis Communication Protocol for
 Layer 2 Virtual Private Network (L2VPN) Provider Edge (PE) Redundancy

Abstract

   This document specifies an Inter-Chassis Communication Protocol
   (ICCP) that enables Provider Edge (PE) device redundancy for Virtual
   Private Wire Service (VPWS) and Virtual Private LAN Service (VPLS)
   applications.  The protocol runs within a set of two or more PEs,
   forming a Redundancy Group, for the purpose of synchronizing data
   among the systems.  It accommodates multi-chassis attachment circuit
   redundancy mechanisms as well as pseudowire redundancy mechanisms.

Status of This Memo

   This is an Internet Standards Track document.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Further information on
   Internet Standards is available in Section 2 of RFC 5741.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   http://www.rfc-editor.org/info/rfc7275.
Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1. Introduction ....................................................5
   2. Specification of Requirements ...................................5
   3. ICCP Overview ...................................................5
      3.1. Redundancy Model and Topology ..............................5
      3.2. ICCP Interconnect Scenarios ................................7
           3.2.1. Co-located Dedicated Interconnect ...................7
           3.2.2. Co-located Shared Interconnect ......................8
           3.2.3. Geo-redundant Dedicated Interconnect ................8
           3.2.4. Geo-redundant Shared Interconnect ...................9
      3.3. ICCP Requirements .........................................10
   4. ICC LDP Protocol Extension Specification .......................11
      4.1. LDP ICCP Capability Advertisement .........................12
      4.2. RG Membership Management ..................................12
           4.2.1. ICCP Connection State Machine ......................13
      4.3. Redundant Object Identification ...........................17
      4.4. Application Connection Management .........................17
           4.4.1. Application Versioning .............................18
           4.4.2. Application Connection State Machine ...............19
      4.5. Application Data Transfer .................................22
      4.6. Dedicated Redundancy Group LDP Session ....................22
   5. ICCP PE Node Failure / Isolation Detection Mechanism ...........22
   6. ICCP Message Formats ...........................................23
      6.1. Encoding ICC into LDP Messages ............................23
           6.1.1. ICC Header .........................................24
           6.1.2. ICC Parameter Encoding .............................26
           6.1.3. Redundant Object Identifier Encoding ...............27
      6.2. RG Connect Message ........................................27
           6.2.1. ICC Sender Name TLV ................................28
      6.3. RG Disconnect Message .....................................29
      6.4. RG Notification Message ...................................31
           6.4.1. Notification Message TLVs ..........................32
      6.5. RG Application Data Message ...............................35
   7. Application TLVs ...............................................35
      7.1. Pseudowire Redundancy (PW-RED) Application TLVs ...........35
           7.1.1. PW-RED Connect TLV .................................36
           7.1.2. PW-RED Disconnect TLV ..............................37
                  7.1.2.1. PW-RED Disconnect Cause TLV ...............38
           7.1.3. PW-RED Config TLV ..................................39
                  7.1.3.1. Service Name TLV ..........................41
                  7.1.3.2. PW ID TLV .................................42
                  7.1.3.3. Generalized PW ID TLV .....................43
           7.1.4. PW-RED State TLV ...................................44
           7.1.5. PW-RED Synchronization Request TLV .................45
           7.1.6. PW-RED Synchronization Data TLV ....................46
      7.2. Multi-Chassis LACP (mLACP) Application TLVs ...............48
           7.2.1. mLACP Connect TLV ..................................48
           7.2.2. mLACP Disconnect TLV ...............................49
                  7.2.2.1. mLACP Disconnect Cause TLV ................50
           7.2.3. mLACP System Config TLV ............................51
           7.2.4. mLACP Aggregator Config TLV ........................52
           7.2.5. mLACP Port Config TLV ..............................54
           7.2.6. mLACP Port Priority TLV ............................56
           7.2.7. mLACP Port State TLV ...............................58
           7.2.8. mLACP Aggregator State TLV .........................60
           7.2.9. mLACP Synchronization Request TLV ..................61
           7.2.10. mLACP Synchronization Data TLV ....................63
   8. LDP Capability Negotiation .....................................65
   9. Client Applications ............................................66
      9.1. Pseudowire Redundancy Application Procedures ..............66
           9.1.1. Initial Setup ......................................66
           9.1.2. Pseudowire Configuration Synchronization ...........66
           9.1.3. Pseudowire Status Synchronization ..................67
                  9.1.3.1. Independent Mode ..........................69
                  9.1.3.2. Master/Slave Mode .........................69
           9.1.4. PE Node Failure or Isolation .......................70
      9.2. Attachment Circuit Redundancy Application Procedures ......70
           9.2.1. Common AC Procedures ...............................70
                  9.2.1.1. AC Failure ................................70
                  9.2.1.2. Remote PE Node Failure or Isolation .......70
                  9.2.1.3. Local PE Isolation ........................71
                  9.2.1.4. Determining Pseudowire State ..............71
           9.2.2. Multi-Chassis LACP (mLACP) Application Procedures ..72
                  9.2.2.1. Initial Setup .............................72
                  9.2.2.2. mLACP Aggregator and Port Configuration ...74
                  9.2.2.3. mLACP Aggregator and Port Status
                           Synchronization ...........................75
                  9.2.2.4. Failure and Recovery ......................77
   10. Security Considerations .......................................78
   11. Manageability Considerations ..................................79
   12. IANA Considerations ...........................................79
      12.1. Message Type Name Space ..................................79
      12.2. TLV Type Name Space ......................................79
      12.3. ICC RG Parameter Type Space ..............................80
      12.4. Status Code Name Space ...................................81
   13. Acknowledgments ...............................................81
   14. References ....................................................81
      14.1. Normative References .....................................81
      14.2. Informative References ...................................82

1. Introduction

   Network availability is a critical metric for service providers, as
   it has a direct bearing on their profitability.  Outages translate
   not only to lost revenue but also to potential penalties mandated by
   contractual agreements with customers running mission-critical
   applications that require tight Service Level Agreements (SLAs).
   This is true for any carrier network, and networks employing Layer 2
   Virtual Private Network (L2VPN) technology are no exception.  A high
   degree of network availability can be achieved by employing intra-
   and inter-chassis redundancy mechanisms.  The focus of this document
   is on the latter.

   This document defines an Inter-Chassis Communication Protocol (ICCP)
   that allows synchronization of state and configuration data between
   a set of two or more Provider Edge nodes (PEs) forming a Redundancy
   Group (RG).  The protocol supports multi-chassis redundancy
   mechanisms that can be employed on either the attachment circuits or
   pseudowires (PWs).  A formal definition of the term "chassis" can be
   found in [RFC2922].  For the purpose of this document, a chassis is
   an L2VPN PE node.

   This document assumes that it is normal to run the Label
   Distribution Protocol (LDP) between the PEs in the RG, and that LDP
   components will in any case be present on the PEs to establish and
   maintain pseudowires.  Therefore, ICCP is built as a secondary
   protocol running within LDP and taking advantage of the LDP session
   mechanisms as well as the underlying TCP transport mechanisms and
   TCP-based security mechanisms already necessary for LDP operation.
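The layering described above, with ICCP running as a secondary protocol inside LDP over LDP's TCP transport, can be made concrete with a small sketch. The framing below follows the standard LDP PDU header from RFC 5036 (Version, PDU Length, and the LDP Identifier made up of the LSR ID and Label Space ID); the `ldp_pdu` helper name is invented for this sketch, and the actual ICCP message encodings are specified later in this document, not here.

```python
import struct

# LDP PDU header per RFC 5036: Version (2 octets), PDU Length (2 octets),
# LDP Identifier = LSR ID (4 octets) + Label Space ID (2 octets).
# ICCP messages ride inside LDP messages carried in PDUs like this one,
# over the LDP session's TCP connection (port 646).
LDP_VERSION = 1

def ldp_pdu(lsr_id: bytes, label_space: int, messages: bytes) -> bytes:
    """Wrap already-encoded LDP messages in an LDP PDU header (sketch)."""
    assert len(lsr_id) == 4
    # PDU Length counts everything after the Length field itself:
    # the 6-octet LDP Identifier plus the message bytes.
    pdu_length = 6 + len(messages)
    header = struct.pack("!HH4sH", LDP_VERSION, pdu_length, lsr_id, label_space)
    return header + messages

# Example: an empty message body yields a 10-octet PDU.
pdu = ldp_pdu(bytes([192, 0, 2, 1]), 0, b"")
```

Reusing LDP's framing in this way is what lets ICCP inherit LDP's session management, TCP reliability, and TCP-based security rather than defining its own transport.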

2. Specification of Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

3. ICCP Overview

3.1. Redundancy Model and Topology

   The focus of this document is on PE node redundancy.  It is assumed
   that a set of two or more PE nodes are designated by the operator to
   form an RG.  Members of an RG fall under a single administration
   (e.g., service provider) and employ a common redundancy mechanism
   towards the access (attachment circuits or access pseudowires)
   and/or towards the core (pseudowires) for any given service
   instance.  It is possible, however, for members of an RG to make use
   of disparate redundancy mechanisms for disjoint services.  The PE
   devices may be offering any type of L2VPN service, i.e., Virtual
   Private Wire Service (VPWS) or Virtual Private LAN Service (VPLS).
   As a matter of
   fact, the use of ICCP may even be applicable for Layer 3 service
   redundancy, but this is considered to be outside the scope of this
   document.

   The PEs in an RG offer multi-homed connectivity to either individual
   devices (e.g., Customer Edge (CE), Digital Subscriber Line Access
   Multiplexer (DSLAM)) or entire networks (e.g., access network).
   Figure 1 below depicts the model.

                                    +=================+
                                    |                 |
   Multi-homed         +----+       |  +-----+        |
   Node  ------------> | CE |-------|--| PE1 ||<------|---Pseudowire-->|
                       |    |--+   -|--|     ||<------|---Pseudowire-->|
                       +----+  |  / |  +-----+        |
                               | /  |     ||          |
                               |/   |     || ICCP     |--> Towards Core
              +-------------+  /    |     ||          |
              |             | /|    |  +-----+        |
              |    Access   |/ +----|--| PE2 ||<------|---Pseudowire-->|
              |   Network   |-------|--|     ||<------|---Pseudowire-->|
              |             |       |  +-----+        |
              |             |       |                 |
              +-------------+       |   Redundancy    |
                ^                   |     Group       |
                |                   +=================+
                |
         Multi-homed Network

             Figure 1: Generic Multi-Chassis Redundancy Model

   In the topology shown in Figure 1, the redundancy mechanism employed
   towards the access node/network can be one of a multitude of
   technologies, e.g., it could be IEEE 802.1AX Link Aggregation Groups
   with the Link Aggregation Control Protocol (LACP) or Synchronous
   Optical Network Automatic Protection Switching (SONET APS).  The
   specifics of the mechanism are outside the scope of this document.
   However, it is assumed that the PEs in the RG are required to
   communicate with each other in order for the access redundancy
   mechanism to operate correctly.  As such, it is required that an
   inter-chassis communication protocol among the PEs in the RG be run
   in order to synchronize configuration and/or running state data.

   Furthermore, the presence of the inter-chassis communication channel
   allows simplification of the pseudowire redundancy mechanism.  This
   is primarily because it allows the PEs within an RG to run some
   arbitration algorithm to elect which pseudowire(s) should be in
   active or standby mode for a given service instance.  The PEs can
   then advertise the outcome of the arbitration to the remote-end
   PE(s), as opposed to having to embed a handshake procedure into the
   pseudowire redundancy status communication mechanism as well as every
   other possible Layer 2 status communication mechanism.
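This document does not mandate a particular arbitration algorithm; what matters is that every PE in the RG runs the same deterministic rule over the same synchronized data, so that all members reach the same outcome. As a purely illustrative sketch (the `priority` field and tuple layout are assumptions for this example, not data defined by ICCP), active/standby election could look like:

```python
def elect_active_pe(rg_members):
    """Illustrative arbitration over synchronized RG membership data.

    rg_members: iterable of (system_id, priority) tuples; lower priority
    wins, with system_id as the tiebreaker.  Because the rule is
    deterministic and the inputs are synchronized via ICCP, every member
    of the RG computes the same active PE without any extra handshake.
    """
    return min(rg_members, key=lambda m: (m[1], m[0]))

members = [("PE1", 100), ("PE2", 100), ("PE3", 50)]
active = elect_active_pe(members)                 # PE3 wins on priority
standby = [m for m in members if m != active]     # remaining PEs go standby
```

Once elected, each PE simply advertises the resulting active/standby state of its pseudowires to the remote-end PE(s), which is exactly the simplification the paragraph above describes.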

3.2. ICCP Interconnect Scenarios

   When referring to "interconnect" in this section, we are concerned
   with the links or networks over which Inter-Chassis Communication
   Protocol messages are transported, and not normal data traffic
   between PEs.

   The PEs that are members of an RG may be either physically
   co-located or geo-redundant.  Furthermore, the physical interconnect
   between the PEs over which ICCP is to run may comprise either
   dedicated back-to-back links or a shared connection through the
   packet switched network (PSN), e.g., MPLS core network.  This gives
   rise to a matrix of four interconnect scenarios, as described in the
   following subsections.

3.2.1. Co-located Dedicated Interconnect

   In this scenario, the PEs within an RG are co-located in the same
   physical location, e.g., point of presence (POP) or central office
   (CO).  Furthermore, dedicated links provide the interconnect for
   ICCP among the PEs.

              +=================+     +-----------------+
              |CO               |     |                 |
              |  +-----+        |     |                 |
              |  | PE1 |________|_____|                 |
              |  |     |        |     |                 |
              |  +-----+        |     |                 |
              |     ||          |     |                 |
              |     || ICCP     |     |      Core       |
              |     ||          |     |     Network     |
              |  +-----+        |     |                 |
              |  | PE2 |________|_____|                 |
              |  |     |        |     |                 |
              |  +-----+        |     |                 |
              |                 |     |                 |
              +=================+     +-----------------+

       Figure 2: ICCP Co-located PEs Dedicated Interconnect Scenario

   Given that the PEs are connected back-to-back in this case, it is
   possible to rely on Layer 2 redundancy mechanisms to guarantee the
   robustness of the ICCP interconnect.  For example, if the
   interconnect comprises IEEE 802.3 Ethernet links, it is possible to
   provide link redundancy by means of IEEE 802.1AX Link Aggregation
   Groups.

3.2.2. Co-located Shared Interconnect

   In this scenario, the PEs within an RG are co-located in the same
   physical location (POP, CO).  However, unlike the previous scenario,
   there are no dedicated links between the PEs.  The interconnect for
   ICCP is provided through the core network to which the PEs are
   connected.  Figure 3 depicts this model.

              +=================+     +-----------------+
              |CO               |     |                 |
              |  +-----+        |     |                 |
              |  | PE1 |________|_____|                 |
              |  |     |<=======|=====|=======+         |
              |  +-----+  ICCP  |     |       ||        |
              |                 |     |       ||        |
              |                 |     |       || Core   |
              |                 |     |       || Network|
              |  +-----+        |     |       ||        |
              |  | PE2 |________|_____|       ||        |
              |  |     |<=======|=====|=======+         |
              |  +-----+        |     |                 |
              |                 |     |                 |
              +=================+     +-----------------+

       Figure 3: ICCP Co-located PEs Shared Interconnect Scenario

   Given that the PEs in the RG are connected over the PSN, PSN Layer
   mechanisms can be leveraged to ensure the resiliency of the
   interconnect against connectivity failures.  For example, it is
   possible to employ RSVP Label Switched Paths (LSPs) with Fast
   Reroute (FRR) and/or end-to-end backup LSPs.

3.2.3. Geo-redundant Dedicated Interconnect

   In this variation, the PEs within an RG are located in different
   physical locations to provide geographic redundancy.  This may be
   desirable, for example, to protect against natural disasters or the
   like.  A dedicated interconnect is provided to link the PEs.  This
   is a costly option, especially when considering the possibility of
   providing multiple such links for interconnect robustness.

   The resiliency mechanisms for the interconnect are similar to those
   highlighted in the co-located interconnect counterpart.
              +=================+     +-----------------+
              |CO 1             |     |                 |
              |  +-----+        |     |                 |
              |  | PE1 |________|_____|                 |
              |  |     |        |     |                 |
              |  +-----+        |     |                 |
              +=====||==========+     |                 |
                    || ICCP           |       Core      |
              +=====||==========+     |      Network    |
              |  +-----+        |     |                 |
              |  | PE2 |________|_____|                 |
              |  |     |        |     |                 |
              |  +-----+        |     |                 |
              |CO 2             |     |                 |
              +=================+     +-----------------+

     Figure 4: ICCP Geo-redundant PEs Dedicated Interconnect Scenario

3.2.4. Geo-redundant Shared Interconnect

   In this scenario, the PEs of an RG are located in different physical
   locations and the interconnect for ICCP is provided over the PSN
   network to which the PEs are connected.  This interconnect option is
   more likely to be the one used for geo-redundancy, as it is more
   economically appealing compared to the geo-redundant dedicated
   interconnect option.

   The resiliency mechanisms that can be employed to guarantee the
   robustness of the ICCP transport are PSN Layer mechanisms, as
   described in Section 3.2.2 above.

              +=================+     +-----------------+
              |CO 1             |     |                 |
              |  +-----+        |     |                 |
              |  | PE1 |________|_____|                 |
              |  |     |<=======|=====|=======+         |
              |  +-----+  ICCP  |     |       ||        |
              +=================+     |       ||        |
                                      |       || Core   |
              +=================+     |       || Network|
              |  +-----+        |     |       ||        |
              |  | PE2 |________|_____|       ||        |
              |  |     |<=======|=====|=======+         |
              |  +-----+        |     |                 |
              |CO 2             |     |                 |
              +=================+     +-----------------+

      Figure 5: ICCP Geo-redundant PEs Shared Interconnect Scenario

3.3. ICCP Requirements

   The requirements for the Inter-Chassis Communication Protocol are as
   follows:

       i. ICCP MUST provide a control channel for communication between
          PEs in a Redundancy Group (RG).  PE nodes may be co-located
          or remote (refer to Section 3.2 above).  Client applications
          that make use of ICCP services MUST only use this channel to
          communicate control information and not data traffic.  As
          such, the protocol SHOULD provide relatively low bandwidth,
          low delay, and highly reliable message transfer.

      ii. ICCP MUST accommodate multiple client applications (e.g.,
          multi-chassis LACP, PW redundancy, SONET APS).  This implies
          that the messages SHOULD be extensible (e.g., TLV-based), and
          the protocol SHOULD provide a robust application registration
          and versioning scheme.

     iii. ICCP MUST provide reliable message transport and in-order
          delivery between nodes in an RG with secure authentication
          mechanisms built into the protocol.  The redundancy
          applications that are clients of ICCP expect reliable message
          transfer and as such will assume that the protocol takes care
          of flow control and retransmissions.  Furthermore, given that
          the applications will rely on ICCP to communicate data used
          to synchronize state machines on disparate nodes, it is
          critical that ICCP guarantees in-order message delivery.
          Loss of messages or out-of-sequence messages would have
          adverse effects on the operation of the client applications.

      iv. ICCP MUST provide a common mechanism to actively monitor the
          health of PEs in an RG.  This mechanism will be used to
          detect PE node failure (or isolation from the MPLS network in
          the case of shared interconnect) and inform the client
          applications.  The applications require that the mechanism
          trigger failover according to the procedures of the
          redundancy protocol employed on the attachment circuit (AC)
          and PW.  The solution SHOULD achieve sub-second detection of
          loss of remote node (~50-150 msec) in order to give the
          client applications (redundancy mechanisms) enough reaction
          time to achieve sub-second service restoration times.
      v. ICCP SHOULD provide asynchronous event-driven state update,
         independent of periodic messages, for immediate notification of
         client applications' state changes.  In other words, the
         transmission of messages carrying application data SHOULD be
         on-demand rather than timer-based to minimize inter-chassis
         state synchronization delay.

     vi. ICCP MUST accommodate multi-link and multi-hop interconnects
         between nodes.  When the devices within an RG are located in
         different physical locations, the physical interconnect between
         them will comprise a network rather than a link.  As such, ICCP
         MUST accommodate the case where the interconnect involves
         multiple hops.  Furthermore, it is possible to have multiple
         (redundant) paths or interconnects between a given pair of
         devices.  This is true for both the co-located and
         geo-redundant scenarios.  ICCP MUST handle this as well.

    vii. ICCP MUST ensure transport security between devices in an RG.
         This is especially important in the scenario where the members
         of an RG are located in different physical locations and
         connected over a shared network (e.g., PSN).  In particular,
         ICCP MUST NOT accept connections arbitrarily from any device;
         otherwise, the state of client applications might be
         compromised.  Furthermore, even if an ICCP connection request
         appears to come from an eligible device, its source address may
         have been spoofed.  Therefore, some means of preventing source
         address spoofing MUST be in place.

   viii. ICCP MUST allow the operator to statically configure members of
         an RG.  Auto-discovery may be considered in the future.

     ix. ICCP SHOULD allow for flexible RG membership.  It is expected
         that only two nodes in an RG will cover most of the redundancy
         applications for common deployments.  ICCP SHOULD NOT preclude
         supporting more than two nodes in an RG by virtue of design.
         Furthermore, ICCP MUST allow a single node to be a member of
         multiple RGs simultaneously.
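Requirement iv above can be illustrated with a minimal keepalive monitor. The timer values are chosen to land in the ~50-150 msec detection window the requirement targets; the class name, callback scheme, and polling model are all invented for this sketch and are not the mechanism ICCP actually uses (Section 5 of this document specifies the real PE node failure / isolation detection mechanism).

```python
import time

class PeerHealthMonitor:
    """Sketch of sub-second peer failure detection (requirement iv).

    A keepalive is expected from the remote PE at least every `interval`
    seconds; if `miss_limit` intervals pass without one, the peer is
    declared down and registered client applications are notified so
    they can trigger failover on their ACs and PWs.
    """

    def __init__(self, interval=0.05, miss_limit=3, clock=time.monotonic):
        self.interval = interval      # 50 ms expected keepalive spacing
        self.miss_limit = miss_limit  # declare down after 3 misses (150 ms)
        self.clock = clock
        self.last_rx = clock()
        self.callbacks = []           # client applications to notify

    def on_keepalive(self):
        """Record receipt of a keepalive from the remote PE."""
        self.last_rx = self.clock()

    def poll(self):
        """Return True while the peer is considered alive."""
        if self.clock() - self.last_rx > self.interval * self.miss_limit:
            for cb in self.callbacks:
                cb("peer-down")       # trigger client failover procedures
            return False
        return True
```

The injectable `clock` parameter is just a testing convenience; the point of the sketch is that detection time is bounded by `interval * miss_limit`, which is what lets the client redundancy mechanisms achieve sub-second service restoration.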


