
Network Working Group                                       J. Lang, Ed.
Request for Comments: 4204                                   Sonos, Inc.
Category: Standards Track                                   October 2005


                     Link Management Protocol (LMP)

Status of This Memo

   This document specifies an Internet standards track protocol for the
   Internet community, and requests discussion and suggestions for
   improvements.  Please refer to the current edition of the "Internet
   Official Protocol Standards" (STD 1) for the standardization state
   and status of this protocol.  Distribution of this memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (2005).

Abstract

   For scalability purposes, multiple data links can be combined to
   form a single traffic engineering (TE) link.  Furthermore, the
   management of TE links is not restricted to in-band messaging, but
   instead can be done using out-of-band techniques.  This document
   specifies a link management protocol (LMP) that runs between a pair
   of nodes and is used to manage TE links.  Specifically, LMP will be
   used to maintain control channel connectivity, verify the physical
   connectivity of the data links, correlate the link property
   information, suppress downstream alarms, and localize link failures
   for protection/restoration purposes in multiple kinds of networks.

Table of Contents

   1. Introduction ....................................................3
      1.1. Terminology ................................................5
   2. LMP Overview ....................................................6
   3. Control Channel Management ......................................8
      3.1. Parameter Negotiation ......................................9
      3.2. Hello Protocol ............................................10
   4. Link Property Correlation ......................................13
   5. Verifying Link Connectivity ....................................15
      5.1. Example of Link Connectivity Verification .................18
   6. Fault Management ...............................................19
      6.1. Fault Detection ...........................................20
      6.2. Fault Localization Procedure ..............................20
      6.3. Examples of Fault Localization ............................21
      6.4. Channel Activation Indication .............................22
      6.5. Channel Deactivation Indication ...........................23
   7. Message_Id Usage ...............................................23
   8. Graceful Restart ...............................................24
   9. Addressing .....................................................25
   10. Exponential Back-off Procedures ...............................26
       10.1. Operation ...............................................26
       10.2. Retransmission Algorithm ................................27
   11. LMP Finite State Machines .....................................28
       11.1. Control Channel FSM .....................................28
       11.2. TE Link FSM .............................................32
       11.3. Data Link FSM ...........................................34
   12. LMP Message Formats ...........................................38
       12.1. Common Header ...........................................39
       12.2. LMP Object Format .......................................41
       12.3. Parameter Negotiation Messages ..........................42
       12.4. Hello Message (Msg Type = 4) ............................43
       12.5. Link Verification Messages ..............................43
       12.6. Link Summary Messages ...................................47
       12.7. Fault Management Messages ...............................49
   13. LMP Object Definitions ........................................50
       13.1. CCID (Control Channel ID) Class .........................50
       13.2. NODE_ID Class ...........................................51
       13.3. LINK_ID Class ...........................................52
       13.4. INTERFACE_ID Class ......................................53
       13.5. MESSAGE_ID Class ........................................54
       13.6. CONFIG Class ............................................55
       13.7. HELLO Class .............................................56
       13.8. BEGIN_VERIFY Class ......................................56
       13.9. BEGIN_VERIFY_ACK Class ..................................58
       13.10. VERIFY_ID Class ........................................59
       13.11. TE_LINK Class ..........................................59
       13.12. DATA_LINK Class ........................................61
       13.13. CHANNEL_STATUS Class ...................................65
       13.14. CHANNEL_STATUS_REQUEST Class ...........................68
       13.15. ERROR_CODE Class .......................................70
   14. References ....................................................71
       14.1. Normative References ....................................71
       14.2. Informative References ..................................72
   15. Security Considerations .......................................73
       15.1. Security Requirements ...................................73
       15.2. Security Mechanisms .....................................74
   16. IANA Considerations ...........................................76
   17. Acknowledgements ..............................................83
   18. Contributors ..................................................83

1. Introduction

   Networks are being developed with routers, switches, crossconnects,
   dense wavelength division multiplexed (DWDM) systems, and add-drop
   multiplexors (ADMs) that use a common control plane, e.g.,
   Generalized MPLS (GMPLS), to dynamically allocate resources and to
   provide network survivability using protection and restoration
   techniques.

   A pair of nodes may have thousands of interconnects, where each
   interconnect may consist of multiple data links when multiplexing
   (e.g., Frame Relay DLCIs at Layer 2, time division multiplexed (TDM)
   slots or wavelength division multiplexed (WDM) wavelengths at Layer
   1) is used.  For scalability purposes, multiple data links may be
   combined into a single traffic-engineering (TE) link.

   To enable communication between nodes for routing, signaling, and
   link management, there must be a pair of IP interfaces that are
   mutually reachable.  We call such a pair of interfaces a control
   channel.  Note that "mutually reachable" does not imply that these
   two interfaces are (directly) connected by an IP link; there may be
   an IP network between the two.  Furthermore, the interface over
   which the control messages are sent/received may not be the same
   interface over which the data flows.

   This document specifies a link management protocol (LMP) that runs
   between a pair of nodes and is used to manage TE links and verify
   reachability of the control channel.  For the purposes of this
   document, such nodes are considered "LMP neighbors" or simply
   "neighboring nodes".

   In GMPLS, the control channels between two adjacent nodes are no
   longer required to use the same physical medium as the data links
   between those nodes.  For example, a control channel could use a
   separate virtual circuit, wavelength, fiber, Ethernet link, an IP
   tunnel routed over a separate management network, or a multi-hop IP
   network.

   A consequence of allowing the control channel(s) between two nodes
   to be logically or physically diverse from the associated data links
   is that the health of a control channel does not necessarily
   correlate to the health of the data links, and vice versa.
   Therefore, a clean separation between the fate of the control
   channel and data links must be made.  New mechanisms must be
   developed to manage the data links, both in terms of link
   provisioning and fault management.

   Among the tasks that LMP accomplishes is checking that the grouping
   of links into TE links, as well as the properties of those links,
   are the same at both end points of the links -- this is called "link
   property correlation".  Also, LMP can communicate these link
   properties to the IGP module, which can then announce them to other
   nodes in the network.  LMP can also tell the signaling module the
   mapping between TE links and control channels.  Thus, LMP performs a
   valuable "glue" function in the control plane.

   Note that while the existence of the control network (single or
   multi-hop) is necessary for enabling communication, it is by no means
   sufficient.  For example, if the two interfaces are separated by an
   IP network, faults in the IP network may result in the lack of an IP
   path from one interface to another, and therefore an interruption of
   communication between the two interfaces.  On the other hand, not
   every failure in the control network affects a given control channel,
   hence the need for establishing and managing control channels.

   For the purposes of this document, a data link may be considered by
   each node that it terminates on as either a 'port' or a 'component
   link', depending on the multiplexing capability of the endpoint on
   that link; component links are multiplex capable, whereas ports are
   not multiplex capable.  This distinction is important since the
   management of such links (including, for example, resource
   allocation, label assignment, and their physical verification) is
   different based on their multiplexing capability.  For example, a
   Frame Relay switch is able to demultiplex an interface into virtual
   circuits based on DLCIs; similarly, a SONET crossconnect with OC-192
   interfaces may be able to demultiplex the OC-192 stream into four
   OC-48 streams.  If multiple interfaces are grouped together into a
   single TE link using link bundling [RFC4201], then the link resources
   must be identified using three levels: Link_Id, component interface
   Id, and label identifying virtual circuit, timeslot, etc.  Resource
   allocation happens at the lowest level (labels), but physical
   connectivity happens at the component link level.  As another
   example, consider the case where an optical switch (e.g., PXC)
   transparently switches OC-192 lightpaths.  If multiple interfaces are
   once again grouped together into a single TE link, then link bundling
   [RFC4201] is not required and only two levels of identification are
   required: Link_Id and Port_Id.  In this case, both resource
   allocation and physical connectivity happen at the lowest level
   (i.e., port level).

   To ensure interworking between data links with different multiplexing
   capabilities, LMP-capable devices SHOULD allow sub-channels of a
   component link to be locally configured as (logical) data links.  For
   example, if a Router with 4 OC-48 interfaces is connected through a
   4:1 MUX to a cross-connect with OC-192 interfaces, the cross-connect
   should be able to configure each sub-channel (e.g., STS-48c SPE if
   the 4:1 MUX is a SONET MUX) as a data link.

   LMP is designed to support aggregation of one or more data links into
   a TE link (either ports into TE links, or component links into TE
   links).  The purpose of forming a TE link is to group/map the
   information about certain physical resources (and their properties)
   into the information that is used by Constrained SPF for the purpose
   of path computation, and by GMPLS signaling.

1.1. Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   The reader is assumed to be familiar with the terminology in
   [RFC3471], [RFC4202], and [RFC4201].

   Bundled Link:

      As defined in [RFC4201], a bundled link is a TE link such that,
      for the purpose of GMPLS signaling, a combination of <link
      identifier, label> is not sufficient to unambiguously identify
      the appropriate resources used by an LSP.  A bundled link is
      composed of two or more component links.

   Control Channel:

      A control channel is a pair of mutually reachable interfaces
      that are used to enable communication between nodes for routing,
      signaling, and link management.

   Component Link:

      As defined in [RFC4201], a component link is a subset of
      resources of a TE Link such that (a) the partition is minimal,
      and (b) within each subset a label is sufficient to unambiguously
      identify the appropriate resources used by an LSP.

   Data Link:

      A data link is a pair of interfaces that are used to transfer
      user data.  Note that in GMPLS, the control channel(s) between
      two adjacent nodes are no longer required to use the same
      physical medium as the data links between those nodes.

   Link Property Correlation:

      This is a procedure to correlate the local and remote properties
      of a TE link.

   Multiplex Capability:

      The ability to multiplex/demultiplex a data stream into sub-rate
      streams for switching purposes.

   Node_Id:

      For a node running OSPF, the LMP Node_Id is the same as the
      address contained in the OSPF Router Address TLV.  For a node
      running IS-IS and advertising the TE Router ID TLV, the Node_Id is
      the same as the advertised Router ID.

   Port:

      An interface that terminates a data link.

   TE Link:

      As defined in [RFC4202], a TE link is a logical construct that
      represents a way to group/map the information about certain
      physical resources (and their properties) that interconnect LSRs
      into the information that is used by Constrained SPF for the
      purpose of path computation, and by GMPLS signaling.

   Transparent:

      A device is called X-transparent if it forwards incoming signals
      from input to output without examining or modifying the X aspect
      of the signal.  For example, a Frame Relay switch is network-layer
      transparent; an all-optical switch is electrically transparent.

2. LMP Overview

   The two core procedures of LMP are control channel management and
   link property correlation.  Control channel management is used to
   establish and maintain control channels between adjacent nodes.
   This is done using a Config message exchange and a fast keep-alive
   mechanism between the nodes.  The latter is required if lower-level
   mechanisms are not available to detect control channel failures.
   Link property correlation is used to synchronize the TE link
   properties and verify the TE link configuration.

   LMP requires that a pair of nodes have at least one active
   bi-directional control channel between them.  Each direction of the
   control channel is identified by a Control Channel Id (CC_Id), and
   the two directions are coupled together using the LMP Config message
   exchange.  Except for Test messages, which may be limited by the
   transport mechanism for in-band messaging, all LMP packets are run
   over UDP with an LMP port number.  The link level encoding of the
   control channel is outside the scope of this document.

   An "LMP adjacency" is formed between two nodes when at least one bi-
   directional control channel is established between them.  Multiple
   control channels may be active simultaneously for each adjacency;
   control channel parameters, however, MUST be individually negotiated
   for each control channel.  If the LMP fast keep-alive is used over a
   control channel, LMP Hello messages MUST be exchanged over the
   control channel.  Other LMP messages MAY be transmitted over any of
   the active control channels between a pair of adjacent nodes.  One or
   more active control channels may be grouped into a logical control
   channel for signaling, routing, and link property correlation
   purposes.

   The link property correlation function of LMP is designed to
   aggregate multiple data links (ports or component links) into a TE
   link and to synchronize the properties of the TE link.  As part of
   the link property correlation function, a LinkSummary message
   exchange is defined.  The LinkSummary message includes the local and
   remote Link_Ids, a list of all data links that comprise the TE link,
   and various link properties.  A LinkSummaryAck or LinkSummaryNack
   message MUST be sent in response to the receipt of a LinkSummary
   message indicating agreement or disagreement on the link properties.

   LMP messages are transmitted reliably using Message_Ids and
   retransmissions.  Message_Ids are carried in MESSAGE_ID objects.  No
   more than one MESSAGE_ID object may be included in an LMP message.
   For control-channel-specific messages, the Message_Id is within the
   scope of the control channel over which the message is sent.  For
   TE-link-specific messages, the Message_Id is within the scope of the
   LMP adjacency.  The value of the Message_Id is monotonically
   increasing and wraps when the maximum value is reached.

   In this document, two additional LMP procedures are defined: link
   connectivity verification and fault management.  These procedures are
   particularly useful when the control channels are physically diverse
   from the data links.  Link connectivity verification is used for data
   plane discovery, Interface_Id exchange (Interface_Ids are used in
   GMPLS signaling, either as port labels or component link identifiers,
   depending on the configuration), and physical connectivity
   verification.  This is done by sending Test messages over the data
   links and TestStatus messages back over the control channel.  Note
   that the Test message is the only LMP message that must be
   transmitted over the data link.  The ChannelStatus message exchange
   is used between adjacent nodes for both the suppression of downstream
   alarms and the localization of faults for protection and restoration.

   For LMP link connectivity verification, the Test message is
   transmitted over the data links.  For X-transparent devices, this
   requires examining and modifying the X aspect of the signal.  The LMP
   link connectivity verification procedure is coordinated using a
   BeginVerify message exchange over a control channel.  To support
   various aspects of transparency, a Verify Transport Mechanism is
   included in the BeginVerify and BeginVerifyAck messages.  Note that
   there is no requirement that all data links must lose their
   transparency simultaneously; but, at a minimum, it must be possible
   to terminate them one at a time.  There is also no requirement that
   the control channel and TE link use the same physical medium;
   however, the control channel MUST be terminated by the same two
   control elements that control the TE link.  Since the BeginVerify
   message exchange coordinates the Test procedure, it also naturally
   coordinates the transition of the data links in and out of the
   transparent mode.

   The LMP fault management procedure is based on a ChannelStatus
   message exchange that uses the following messages: ChannelStatus,
   ChannelStatusAck, ChannelStatusRequest, and ChannelStatusResponse.
   The ChannelStatus message is sent unsolicited and is used to notify
   an LMP neighbor about the status of one or more data channels of a TE
   link.  The ChannelStatusAck message is used to acknowledge receipt of
   the ChannelStatus message.  The ChannelStatusRequest message is used
   to query an LMP neighbor for the status of one or more data channels
   of a TE Link.  The ChannelStatusResponse message is used to
   acknowledge receipt of the ChannelStatusRequest message and indicate
   the states of the queried data links.

3. Control Channel Management

   To initiate an LMP adjacency between two nodes, one or more
   bi-directional control channels MUST be activated.  The control
   channels can be used to exchange control-plane information such as
   link provisioning and fault management information (implemented
   using a messaging protocol such as LMP, proposed in this document),
   path management and label distribution information (implemented
   using a signaling protocol such as RSVP-TE [RFC3209]), and network
   topology and state distribution information (implemented using
   traffic engineering extensions of protocols such as OSPF [RFC3630]
   and IS-IS [RFC3784]).

   For the purposes of LMP, the exact implementation of the control
   channel is not specified; it could be, for example, a separate
   wavelength or fiber, an Ethernet link, an IP tunnel through a
   separate management network, or the overhead bytes of a data link.
   Each node assigns a node-wide, unique, 32-bit, non-zero integer
   control channel identifier (CC_Id).  This identifier comes from the
   same space as the unnumbered interface Id.  Furthermore, LMP packets
   are run over UDP with an LMP port number.  Thus, the link level
   encoding of the control channel is not part of the LMP specification.

   To establish a control channel, the destination IP address on the far
   end of the control channel must be known.  This knowledge may be
   manually configured or automatically discovered.  Note that for in-
   band signaling, a control channel could be explicitly configured on a
   particular data link.  In this case, the Config message exchange can
   be used to dynamically learn the IP address on the far end of the
   control channel.  This is done by sending the Config message with the
   unicast IP source address and the multicast IP destination address
   (224.0.0.1 or ff02::1).  The ConfigAck and ConfigNack messages MUST
   be sent to the source IP address found in the IP header of the
   received Config message.
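
   As a non-normative illustration, the following C sketch sends an
   already-encoded Config message to the IPv4 all-nodes multicast
   address for far-end discovery; replies return to the unicast source
   address carried in the IP header.  The use of UDP port 701 (the
   IANA assignment for LMP) and the helper's name are assumptions
   here, and the message encoding itself (Section 12) is omitted.

      #include <arpa/inet.h>
      #include <netinet/in.h>
      #include <string.h>
      #include <sys/socket.h>
      #include <sys/types.h>
      #include <unistd.h>

      #define LMP_UDP_PORT 701  /* assumed IANA-assigned "lmp" port */

      /* Send an already-encoded Config message (see Section 12) to
       * the IPv4 all-nodes multicast address. */
      int send_config_multicast(const void *msg, size_t len)
      {
          struct sockaddr_in dst;
          int s = socket(AF_INET, SOCK_DGRAM, 0);
          if (s < 0)
              return -1;

          memset(&dst, 0, sizeof(dst));
          dst.sin_family = AF_INET;
          dst.sin_port = htons(LMP_UDP_PORT);
          dst.sin_addr.s_addr = inet_addr("224.0.0.1");

          /* Default multicast TTL of 1 keeps the Config link-local. */
          ssize_t n = sendto(s, msg, len, 0,
                             (struct sockaddr *)&dst, sizeof(dst));
          close(s);
          return (n == (ssize_t)len) ? 0 : -1;
      }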

   Control channels exist independently of TE links and multiple control
   channels may be active simultaneously between a pair of nodes.
   Individual control channels can be realized in different ways; one
   might be implemented in-fiber while another one may be implemented
   out-of-fiber.  As such, control channel parameters MUST be negotiated
   over each individual control channel, and LMP Hello packets MUST be
   exchanged over each control channel to maintain LMP connectivity if
   other mechanisms are not available.  Since control channels are
   electrically terminated at each node, it may be possible to detect
   control channel failures using lower layers (e.g., SONET/SDH).

   There are four LMP messages that are used to manage individual
   control channels.  They are the Config, ConfigAck, ConfigNack, and
   Hello messages.  These messages MUST be transmitted on the channel to
   which they refer.  All other LMP messages may be transmitted over any
   of the active control channels between a pair of LMP adjacent nodes.

   In order to maintain an LMP adjacency, it is necessary to have at
   least one active control channel between a pair of adjacent nodes
   (recall that multiple control channels can be active simultaneously
   between a pair of nodes).  In the event of a control channel failure,
   alternate active control channels can be used and it may be possible
   to activate additional control channels as described below.

3.1. Parameter Negotiation

   Control channel activation begins with a parameter negotiation
   exchange using Config, ConfigAck, and ConfigNack messages.  The
   contents of these messages are built using LMP objects, which can be
   either negotiable or non-negotiable (identified by the N bit in the
   object header).  Negotiable objects can be used to let LMP peers
   agree on certain values.  Non-negotiable objects are used for the
   announcement of specific values that do not need, or do not allow,
   negotiation.

   To activate a control channel, a Config message MUST be transmitted
   to the remote node, and in response, a ConfigAck message MUST be
   received at the local node.  The Config message contains the Local
   Control Channel Id (CC_Id), the sender's Node_Id, a Message_Id for
   reliable messaging, and a CONFIG object.  It is possible that both
   the local and remote nodes initiate the configuration procedure at
   the same time.  To avoid ambiguities, the node with the higher
   Node_Id wins the contention; the node with the lower Node_Id MUST
   stop transmitting the Config message and respond to the Config
   message it received.  If the Node_Ids are equal, then one (or both)
   nodes have been misconfigured.  The nodes MAY continue to retransmit
   Config messages in hopes that the misconfiguration is corrected.
   Note that the problem may be solved by an operator changing the
   Node_Ids on one or both nodes.
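
   As a non-normative illustration, the contention rule above reduces
   to a simple comparison of the two Node_Ids; the enum and function
   names in this C sketch are assumptions for illustration only.

      #include <stdint.h>

      typedef enum {
          KEEP_SENDING,      /* higher Node_Id: continue with Config  */
          STOP_AND_RESPOND,  /* lower Node_Id: answer received Config */
          MISCONFIGURED      /* equal Node_Ids: operator must fix     */
      } config_role;

      config_role resolve_config_contention(uint32_t local_node_id,
                                            uint32_t remote_node_id)
      {
          if (local_node_id == remote_node_id)
              return MISCONFIGURED;
          return (local_node_id > remote_node_id) ? KEEP_SENDING
                                                  : STOP_AND_RESPOND;
      }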

   The ConfigAck message is used to acknowledge receipt of the Config
   message and express agreement on ALL of the configured parameters
   (both negotiable and non-negotiable).

   The ConfigNack message is used to acknowledge receipt of the Config
   message, indicate which (if any) non-negotiable CONFIG objects are
   unacceptable, and to propose alternate values for the negotiable
   parameters.

   If a node receives a ConfigNack message with acceptable alternate
   values for negotiable parameters, the node SHOULD transmit a Config
   message using these values for those parameters.

   If a node receives a ConfigNack message with unacceptable alternate
   values, the node MAY continue to retransmit Config messages in hopes
   that the misconfiguration is corrected.  Note that the problem may be
   solved by an operator changing parameters on one or both nodes.

   In the case where multiple control channels use the same physical
   interface, the parameter negotiation exchange is performed for each
   control channel.  The various LMP parameter negotiation messages are
   associated with their corresponding control channels by their node-
   wide unique identifiers (CC_Ids).

3.2. Hello Protocol

   Once a control channel is activated between two adjacent nodes, the
   LMP Hello protocol can be used to maintain control channel
   connectivity between the nodes and to detect control channel
   failures.  The LMP Hello protocol is intended to be a lightweight
   keep-alive mechanism that will react to control channel failures
   rapidly so that IGP Hellos are not lost and the associated link-state
   adjacencies are not removed unnecessarily.

3.2.1. Hello Parameter Negotiation

   Before sending Hello messages, the HelloInterval and
   HelloDeadInterval parameters MUST be agreed upon by the local and
   remote nodes.  These parameters are exchanged in the Config message.
   The HelloInterval indicates how frequently LMP Hello messages will
   be sent, and is measured in milliseconds (ms).  For example, if the
   value were 150, then the transmitting node would send the Hello
   message at least every 150 ms.  The HelloDeadInterval indicates how
   long a device should wait to receive a Hello message before
   declaring a control channel dead, and is measured in milliseconds
   (ms).  The HelloDeadInterval MUST be greater than the HelloInterval,
   and SHOULD be at least 3 times the value of HelloInterval.  If the
   fast keep-alive mechanism of LMP is not used, the HelloInterval and
   HelloDeadInterval parameters MUST be set to zero.

   The values for the HelloInterval and HelloDeadInterval should be
   selected carefully to provide rapid response time to control channel
   failures without causing congestion.  As such, different values will
   likely be configured for different control channel implementations.
   When the control channel is implemented over a directly connected
   link, the suggested default values are 150 ms for the HelloInterval
   and 500 ms for the HelloDeadInterval.

   When a node has either sent or received a ConfigAck message, it may
   begin sending Hello messages.  Once it has sent a Hello message and
   received a valid Hello message (i.e., with expected sequence
   numbers; see Section 3.2.2), the control channel moves to the up
   state.  (It is also possible to move to the up state without sending
   Hellos if other methods are used to indicate bi-directional
   control-channel connectivity.  For example, indication of
   bi-directional connectivity may be learned from the transport
   layer.)  If, however, a node receives a ConfigNack message instead
   of a ConfigAck message, the node MUST NOT send Hello messages and
   the control channel SHOULD NOT move to the up state.  See Section
   11.1 for the complete control channel FSM.
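
   As a non-normative illustration, the following C sketch shows the
   MUST-level check an implementation might apply to proposed timer
   values; the SHOULD-level guideline of 3 times the HelloInterval is
   noted but deliberately not enforced.  The function name is an
   assumption for illustration only.

      #include <stdbool.h>
      #include <stdint.h>

      /* Timer values are in milliseconds; zero for both means the
       * LMP fast keep-alive is not used. */
      bool hello_params_acceptable(uint32_t hello_interval,
                                   uint32_t hello_dead_interval)
      {
          if (hello_interval == 0 && hello_dead_interval == 0)
              return true;               /* fast keep-alive disabled */
          if (hello_dead_interval <= hello_interval)
              return false;              /* MUST be strictly greater */
          /* SHOULD be >= 3 * hello_interval; advisory, not enforced
           * here. */
          return true;
      }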

3.2.2. Fast Keep-alive

   Each Hello message contains two sequence numbers: the first sequence
   number (TxSeqNum) is the sequence number for the Hello message being
   sent and the second sequence number (RcvSeqNum) is the sequence
   number of the last Hello message received from the adjacent node
   over this control channel.

   There are two special sequence numbers.  TxSeqNum MUST NOT ever be
   0.  TxSeqNum = 1 is used to indicate that the sender has just
   started or has restarted and has no recollection of the last
   TxSeqNum that was sent.  Thus, the first Hello sent has a TxSeqNum
   of 1 and an RcvSeqNum of 0.  When TxSeqNum reaches (2^32)-1, the
   next sequence number used is 2, not 0 or 1, as these have special
   meanings.

   Under normal operation, the difference between the RcvSeqNum in a
   Hello message that is received and the local TxSeqNum that is
   generated will be at most 1.  This difference can be more than one
   only when a control channel restarts or when the values wrap.

   Since the 32-bit sequence numbers may wrap, the following expression
   may be used to test if a newly received TxSeqNum value is less than
   a previously received value:

      If ((int) old_id - (int) new_id > 0)
      {
         New value is less than old value;
      }

   Having sequence numbers in the Hello messages allows each node to
   verify that its peer is receiving its Hello messages.  By including
   the RcvSeqNum in Hello packets, the local node will know which Hello
   packets the remote node has received.

   The following example illustrates how the sequence numbers operate.
   Note that only the operation at one node is shown, and alternative
   scenarios are possible:

   1) After completing the configuration stage, Node A sends Hello
      messages to Node B with {TxSeqNum=1;RcvSeqNum=0}.

   2) Node A receives a Hello from Node B with
      {TxSeqNum=1;RcvSeqNum=1}.  When the HelloInterval expires on Node
      A, it sends Hellos to Node B with {TxSeqNum=2;RcvSeqNum=1}.

   3) Node A receives a Hello from Node B with
      {TxSeqNum=2;RcvSeqNum=2}.  When the HelloInterval expires on Node
      A, it sends Hellos to Node B with {TxSeqNum=3;RcvSeqNum=2}.
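
   As a non-normative illustration, the comparison above and the
   TxSeqNum increment rule can be rendered in C as follows (the
   function names are assumptions for illustration only); the unsigned
   subtraction with a signed cast is the wrap-safe equivalent of the
   expression given above.

      #include <stdbool.h>
      #include <stdint.h>

      /* Wrap-safe test: is new_id "less than" old_id in 32-bit
       * sequence space? */
      bool seqnum_less(uint32_t new_id, uint32_t old_id)
      {
          return (int32_t)(old_id - new_id) > 0;
      }

      /* Advance TxSeqNum, skipping the reserved values 0 (never
       * valid) and 1 (restart indication): after (2^32)-1 comes 2. */
      uint32_t next_tx_seqnum(uint32_t tx)
      {
          return (tx == UINT32_MAX) ? 2u : tx + 1u;
      }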

3.2.3. Control Channel Down

   To allow bringing a control channel down gracefully for
   administration purposes, a ControlChannelDown flag is available in
   the Common Header of LMP packets.  When data links are still in use
   between a pair of nodes, a control channel SHOULD only be taken down
   administratively when there are other active control channels that
   can be used to manage the data links.

   When bringing a control channel down administratively, a node MUST
   set the ControlChannelDown flag in all LMP messages sent over the
   control channel.  The node that initiated the control channel down
   procedure may stop sending Hello messages after the
   HelloDeadInterval has passed, or if it receives an LMP message over
   the same control channel with the ControlChannelDown flag set.

   When a node receives an LMP packet with the ControlChannelDown flag
   set, it SHOULD send a Hello message with the ControlChannelDown flag
   set and move the control channel to the down state.

3.2.4. Degraded State

   A consequence of allowing the control channels to be physically
   diverse from the associated data links is that there may not be any
   active control channels available while the data links are still in
   use.  For many applications, it is unacceptable to tear down a link
   that is carrying user traffic simply because the control channel is
   no longer available; however, the traffic that is using the data
   links may no longer be guaranteed the same level of service.  Hence,
   the TE link is in a Degraded state.

   When a TE link is in the Degraded state, routing and signaling
   SHOULD be notified so that new connections are not accepted and the
   TE link is advertised with no unreserved resources.

4. Link Property Correlation

   As part of LMP, a link property correlation exchange is defined for
   TE links using the LinkSummary, LinkSummaryAck, and LinkSummaryNack
   messages.  The contents of these messages are built using LMP
   objects, which can be either negotiable or non-negotiable
   (identified by the N flag in the object header).  Negotiable objects
   can be used to let both sides agree on certain link parameters.
   Non-negotiable objects are used for announcement of specific values
   that do not need, or do not allow, negotiation.

   Each TE link has an identifier (Link_Id) that is assigned at each end
   of the link.  These identifiers MUST be the same type (i.e., IPv4,
   IPv6, unnumbered) at both ends.  If a LinkSummary message is received
   with different local and remote TE link types, then a LinkSummaryNack
   message MUST be sent with Error Code "Bad TE Link Object".
   Similarly, each data link is assigned an identifier (Interface_Id) at
   each end.  These identifiers MUST also be the same type at both ends.
   If a LinkSummary message is received with different local and remote
   Interface_Id types, then a LinkSummaryNack message MUST be sent with
   Error Code "Bad Data Link Object".
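
   As a non-normative illustration, the type-consistency rule can be
   expressed as a small C check; the enum and function names are
   assumptions for illustration only.

      typedef enum { ID_IPV4, ID_IPV6, ID_UNNUMBERED } lmp_id_type;

      typedef enum {
          SUMMARY_OK,
          NACK_BAD_TE_LINK,   /* Error Code "Bad TE Link Object"   */
          NACK_BAD_DATA_LINK  /* Error Code "Bad Data Link Object" */
      } summary_check;

      /* Local and remote identifiers carried in a LinkSummary message
       * must be of the same type, both for the TE link and for each
       * data link. */
      summary_check check_id_types(lmp_id_type local_te,
                                   lmp_id_type remote_te,
                                   lmp_id_type local_if,
                                   lmp_id_type remote_if)
      {
          if (local_te != remote_te)
              return NACK_BAD_TE_LINK;
          if (local_if != remote_if)
              return NACK_BAD_DATA_LINK;
          return SUMMARY_OK;
      }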

   Link property correlation SHOULD be done before the link is brought
   up and MAY be done any time a link is up and not in the Verification
   process.

   The LinkSummary message is used to verify for consistency the TE and
   data link information on both sides.  Link Summary messages are also
   used (1) to aggregate multiple data links (either ports or component
   links) into a TE link; (2) to exchange, correlate (to determine
   inconsistencies), or change TE link parameters; and (3) to exchange,
   correlate (to determine inconsistencies), or change Interface_Ids
   (either Port_Ids or component link identifiers).

   The LinkSummary message includes a TE_LINK object followed by one or
   more DATA_LINK objects.  The TE_LINK object identifies the TE link's
   local and remote Link_Id and indicates support for fault management
   and link verification procedures for that TE link.  The DATA_LINK
   objects are used to characterize the data links that comprise the TE
   link.  These objects include the local and remote Interface_Ids, and
   may include one or more sub-objects further describing the properties
   of the data links.

   If the LinkSummary message is received from a remote node, and the
   Interface_Id mappings match those that are stored locally, then the
   two nodes have agreement on the Verification procedure (see Section
   5) and data link identification configuration.  If the verification
   procedure is not used, the LinkSummary message can be used to verify
   agreement on manual configuration.

   The LinkSummaryAck message is used to signal agreement on the
   Interface_Id mappings and link property definitions.  Otherwise, a
   LinkSummaryNack message MUST be transmitted, indicating which
   Interface mappings are not correct and/or which link properties are
   not accepted.  If a LinkSummaryNack message indicates that the
   Interface_Id mappings are not correct and the link verification
   procedure is enabled, the link verification process SHOULD be
   repeated for all mismatched, free data links; if an allocated data
   link has a mapping mismatch, it SHOULD be flagged and verified when
   it becomes free.  If a LinkSummaryNack message includes negotiable
   parameters, then acceptable values for those parameters MUST be
   included.  If a LinkSummaryNack message is received and includes
   negotiable parameters, then the initiator of the LinkSummary message
   SHOULD send a new LinkSummary message.  The new LinkSummary message
   SHOULD include new values for the negotiable parameters.  These
   values SHOULD take into account the acceptable values received in the
   LinkSummaryNack message.

   It is possible that the LinkSummary message could grow quite large
   due to the number of DATA_LINK objects.  An LMP implementation SHOULD
   be able to fragment when transmitting LMP messages, and MUST be able
   to re-assemble IP fragments when receiving LMP messages.

5. Verifying Link Connectivity

   In this section, an optional procedure is described that may be used
   to verify the physical connectivity of the data links and
   dynamically learn (i.e., discover) the TE link and Interface_Id
   associations.  The procedure SHOULD be done when establishing a TE
   link, and subsequently, on a periodic basis for all unallocated
   (free) data links of the TE link.

   Support for this procedure is indicated by setting the "Link
   Verification Supported" flag in the TE_LINK object of the
   LinkSummary message.  If a BeginVerify message is received and link
   verification is not supported for the TE link, then a
   BeginVerifyNack message MUST be transmitted with an Error Code
   indicating "Link Verification Procedure not supported for this TE
   Link".

   A unique characteristic of transparent devices is that the data is
   not modified or examined during normal operation.  This
   characteristic poses a challenge for validating the connectivity of
   the data links and establishing the label mappings.  Therefore, to
   ensure proper verification of data link connectivity, it is required
   that, until the data links are allocated for user traffic, they must
   be opaque (i.e., lose their transparency).  To support various
   degrees of opaqueness (e.g., examining overhead bytes, terminating
   the IP payload, etc.) and, hence, different mechanisms to transport
   the Test messages, a Verify Transport Mechanism field is included in
   the BeginVerify and BeginVerifyAck messages.

   There is no requirement that all data links be terminated
   simultaneously; but, at a minimum, the data links MUST be able to be
   terminated one at a time.  Furthermore, for the link verification
   procedure it is assumed that the nodal architecture is designed so
   that messages can be sent and received over any data link.  Note that
   this requirement is trivial for opaque devices since each data link
   is electrically terminated and processed before being forwarded to
   the next opaque device; but that in transparent devices this is an
   additional requirement.

   To interconnect two nodes, a TE link is defined between them, and at
   a minimum, there MUST be at least one active control channel between
   the nodes.  For link verification, a TE link MUST include at least
   one data link.

   Once a control channel has been established between the two nodes,
   data link connectivity can be verified by exchanging Test messages
   over each of the data links specified in the TE link.  It should be
   noted that all LMP messages except the Test message are exchanged
   over the control channels and that Hello messages continue to be
   exchanged over each control channel during the data link verification
   process.  The Test message is sent over the data link that is being
   verified.  Data links are tested in the transmit direction because
   they are unidirectional; therefore, it may be possible for both nodes
   to (independently) exchange the Test messages simultaneously.

   To initiate the link verification procedure, the local node MUST send
   a BeginVerify message over a control channel.  To limit the scope of
   Link Verification to a particular TE Link, the local Link_Id MUST be
   non-zero.  If this field is zero, the data links can span multiple TE
   links and/or they may comprise a TE link that is yet to be
   configured.  For the case where the local Link_Id field is zero, the
   "Verify all Links" flag of the BEGIN_VERIFY object is used to
   distinguish between data links that span multiple TE links and those
   that have not yet been assigned to a TE link.  Specifically,
   verification of data links that span multiple TE links is indicated
   by setting the local Link_Id field to zero and setting the "Verify
   all Links" flag.  Verification of data links that have not yet been
   assigned to a TE link is indicated by setting the local Link_Id field
   to zero and clearing the "Verify all Links" flag.
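
   As a non-normative illustration, the scoping rules above amount to
   the following three-way classification; the names in this C sketch
   are assumptions for illustration only.

      #include <stdbool.h>
      #include <stdint.h>

      typedef enum {
          SCOPE_SINGLE_TE_LINK,    /* non-zero local Link_Id       */
          SCOPE_MULTIPLE_TE_LINKS, /* zero Link_Id, flag set       */
          SCOPE_UNASSIGNED_LINKS   /* zero Link_Id, flag cleared   */
      } verify_scope;

      verify_scope classify_begin_verify(uint32_t local_link_id,
                                         bool verify_all_links_flag)
      {
          if (local_link_id != 0)
              return SCOPE_SINGLE_TE_LINK;
          return verify_all_links_flag ? SCOPE_MULTIPLE_TE_LINKS
                                       : SCOPE_UNASSIGNED_LINKS;
      }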

   The BeginVerify message also contains the number of data links that
   are to be verified; the interval (called VerifyInterval) at which the
   Test messages will be sent; the encoding scheme and transport
   mechanisms that are supported; the data rate for Test messages; and,
   when the data links correspond to fibers, the wavelength identifier
   over which the Test messages will be transmitted.

   If the remote node receives a BeginVerify message and it is ready to
   process Test messages, it MUST send a BeginVerifyAck message back to
   the local node specifying the desired transport mechanism for the
   Test messages.  The remote node includes a 32-bit, node-unique
   Verify_Id in the BeginVerifyAck message.  The Verify_Id MAY be
   randomly selected; however, it MUST NOT overlap any other Verify_Id
   currently being used by the node selecting it.  The Verify_Id is
   then used in all corresponding verification messages to distinguish
   messages belonging to different LMP peers and/or parallel Test
   procedures.  When the
   local node receives a BeginVerifyAck message from the remote node, it
   may begin testing the data links by transmitting periodic Test
   messages over each data link.  The Test message includes the
   Verify_Id and the local Interface_Id for the associated data link.
   The remote node MUST send either a TestStatusSuccess or a
   TestStatusFailure message in response for each data link.  A
   TestStatusAck message MUST be sent to confirm receipt of the
   TestStatusSuccess and TestStatusFailure messages.  Unacknowledged
   TestStatusSuccess and TestStatusFailure messages SHOULD be
   retransmitted until the message is acknowledged or until a retry
   limit is reached (see also Section 10).

   It is also permissible for the sender to terminate the Test procedure
   anytime after sending the BeginVerify message.  An EndVerify message
   SHOULD be sent for this purpose.

   Message correlation is done using message identifiers and the
   Verify_Id; this enables verification of data links, belonging to
   different link bundles or LMP sessions, in parallel.

   When the Test message is received, the received Interface_Id (used in
   GMPLS as either a Port label or component link identifier, depending
   on the configuration) is recorded and mapped to the local
   Interface_Id for that data link, and a TestStatusSuccess message MUST
   be sent.  The TestStatusSuccess message includes the local
   Interface_Id along with the Interface_Id and Verify_Id received in
   the Test message.  The receipt of a TestStatusSuccess message
   indicates that the Test message was detected at the remote node and
   the physical connectivity of the data link has been verified.  When
   the TestStatusSuccess message is received, the local node SHOULD mark
   the data link as up and send a TestStatusAck message to the remote
   node.  If, however, the Test message is not detected at the remote
   node within an observation period (specified by the
   VerifyDeadInterval), the remote node MUST send a TestStatusFailure
   message over the control channel, which indicates that the
   verification of the physical connectivity of the data link has
   failed.  When the local node receives a TestStatusFailure message, it
   SHOULD mark the data link as FAILED and send a TestStatusAck message
   to the remote node.  When all the data links on the list have been
   tested, the local node SHOULD send an EndVerify message to indicate
   that testing is complete on this link.
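
   As a non-normative illustration, the responder's behavior can be
   sketched in C as two handlers; record_mapping() and the
   send_test_status_*() helpers are assumed, hypothetical functions,
   not LMP-defined APIs.

      #include <stdint.h>

      /* Assumed helpers, declared only. */
      void record_mapping(uint32_t local_if, uint32_t remote_if);
      void send_test_status_success(uint32_t verify_id,
                                    uint32_t local_if,
                                    uint32_t remote_if);
      void send_test_status_failure(uint32_t verify_id,
                                    uint32_t local_if);

      /* A Test message arrived on the data link: record the mapping
       * and confirm over the control channel. */
      void on_test_received(uint32_t verify_id,
                            uint32_t received_if,
                            uint32_t local_if)
      {
          record_mapping(local_if, received_if);
          send_test_status_success(verify_id, local_if, received_if);
      }

      /* No Test message seen within the VerifyDeadInterval: report
       * failure over the control channel. */
      void on_verify_dead_expired(uint32_t verify_id,
                                  uint32_t local_if)
      {
          send_test_status_failure(verify_id, local_if);
      }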

   If the local/remote data link mappings are known, then the link
   verification procedure can be optimized by testing the data links in
   a defined order known to both nodes.  The suggested criterion for
   this ordering is by increasing the value of the remote Interface_Id.

   Both the local and remote nodes SHOULD maintain the complete list of
   Interface_Id mappings for correlation purposes.

5.1. Example of Link Connectivity Verification

   Figure 1 shows an example of the link verification scenario that is
   executed when a link between Node A and Node B is added.  In this
   example, the TE link consists of three free ports (each transmitted
   along a separate fiber) and is associated with a bi-directional
   control channel (indicated by a "c").  The verification process is
   as follows:

   o  A sends a BeginVerify message over the control channel to B,
      indicating it will begin verifying the ports that form the TE
      link.  The LOCAL_LINK_ID object carried in the BeginVerify
      message carries the identifier (IP address or interface index)
      that A assigns to the link.

   o  Upon receipt of the BeginVerify message, B creates a Verify_Id
      and binds it to the TE Link from A.  This binding is used later
      when B receives the Test messages from A, and these messages
      carry the Verify_Id.  B discovers the identifier (IP address or
      interface index) that A assigns to the TE link by examining the
      LOCAL_LINK_ID object carried in the received BeginVerify message.
      (If the data ports are not yet assigned to the TE Link, the
      binding is limited to the Node_Id of A.)  In response to the
      BeginVerify message, B sends the BeginVerifyAck message to A.
      The LOCAL_LINK_ID object carried in the BeginVerifyAck message is
      used to carry the identifier (IP address or interface index) that
      B assigns to the TE link.  The REMOTE_LINK_ID object carried in
      the BeginVerifyAck message is used to bind the Link_Ids assigned
      by both A and B.  The Verify_Id is returned to A in the
      BeginVerifyAck message over the control channel.

   o  When A receives the BeginVerifyAck message, it begins
      transmitting periodic Test messages over the first port
      (Interface Id=1).  The Test message includes the Interface_Id for
      the port and the Verify_Id that was assigned by B.

   o  When B receives the Test messages, it maps the received
      Interface_Id to its own local Interface_Id = 10 and transmits a
      TestStatusSuccess message over the control channel back to Node
      A.  The TestStatusSuccess message includes both the local and
      received Interface_Ids for the port as well as the Verify_Id.
      The
      Verify_Id is used to determine the local/remote TE link
      identifiers (IP addresses or interface indices) to which the data
      links belong.
   o  A will send a TestStatusAck message over the control channel back
      to B, indicating it received the TestStatusSuccess message.
   o  The process is repeated until all of the ports are verified.
   o  At this point, A will send an EndVerify message over the control
      channel to B, indicating that testing is complete.
   o  B will respond by sending an EndVerifyAck message over the control
      channel back to A.

      Note that this procedure can be used to "discover" the
      connectivity of the data ports.

   +---------------------+                      +---------------------+
   +                     +                      +                     +
   +      Node A         +<-------- c --------->+        Node B       +
   +                     +                      +                     +
   +                     +                      +                     +
   +                   1 +--------------------->+ 10                  +
   +                     +                      +                     +
   +                     +                      +                     +
   +                   2 +                /---->+ 11                  +
   +                     +          /----/      +                     +
   +                     +     /---/            +                     +
   +                   3 +----/                 + 12                  +
   +                     +                      +                     +
   +                     +                      +                     +
   +                   4 +--------------------->+ 14                  +
   +                     +                      +                     +
   +---------------------+                      +---------------------+

    Figure 1:  Example of link connectivity between Node A and Node B.


